Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,339
| 6,473,237,849
|
IssuesEvent
|
2017-08-17 15:29:58
|
syndesisio/syndesis-ui
|
https://api.github.com/repos/syndesisio/syndesis-ui
|
closed
|
Documentation for UI
|
dev process research
|
Automatically generated documentation could be very useful for encouraging contribution, as we are using many third party dependencies, and many developers are new to Angular 2 on its own. For instance, we are using Angular CLI, and just going through its dependencies, as well as how Webpack works, etc. To facilitate this, it makes sense to have some type of technical overview along with automatically generated documentation.
|
1.0
|
Documentation for UI - Automatically generated documentation could be very useful for encouraging contribution, as we are using many third party dependencies, and many developers are new to Angular 2 on its own. For instance, we are using Angular CLI, and just going through its dependencies, as well as how Webpack works, etc. To facilitate this, it makes sense to have some type of technical overview along with automatically generated documentation.
|
process
|
documentation for ui automatically generated documentation could be very useful for encouraging contribution as we are using many third party dependencies and many developers are new to angular on its own for instance we are using angular cli and just going through its dependencies as well as how webpack works etc to facilitate this it makes sense to have some type of technical overview along with automatically generated documentation
| 1
|
12,309
| 14,859,802,579
|
IssuesEvent
|
2021-01-18 19:12:44
|
neuropoly/ukbiobank-spinalcord-csa
|
https://api.github.com/repos/neuropoly/ukbiobank-spinalcord-csa
|
closed
|
No use of manual disc label for T2w in process_data.sh
|
process_data
|
In `process_data.sh`, the manual C2-C3 disc label is only used in the function `label_if_does_not_exist` called for T1w disc labeling :
https://github.com/sandrinebedard/Projet3/blob/5bf91fb4531cc7e4543e281b53825333e5b8d8ee/process_data.sh#L32-L41
But if the subject moved between T1w et T2w as discussed in issue #13 , we will add manual identification of the disc for T2w. The problem is that file manual_label for T2w will never be used in `process_data.sh` as it is right now, the function `label_if_does_not_exist` is not used here:
https://github.com/sandrinebedard/Projet3/blob/5bf91fb4531cc7e4543e281b53825333e5b8d8ee/process_data.sh#L145-L151
It will only use labeling from T1w `label_T1w/template/PAM50_levels.nii.gz` even if T2w manual disc label exists.
|
1.0
|
No use of manual disc label for T2w in process_data.sh - In `process_data.sh`, the manual C2-C3 disc label is only used in the function `label_if_does_not_exist` called for T1w disc labeling :
https://github.com/sandrinebedard/Projet3/blob/5bf91fb4531cc7e4543e281b53825333e5b8d8ee/process_data.sh#L32-L41
But if the subject moved between T1w et T2w as discussed in issue #13 , we will add manual identification of the disc for T2w. The problem is that file manual_label for T2w will never be used in `process_data.sh` as it is right now, the function `label_if_does_not_exist` is not used here:
https://github.com/sandrinebedard/Projet3/blob/5bf91fb4531cc7e4543e281b53825333e5b8d8ee/process_data.sh#L145-L151
It will only use labeling from T1w `label_T1w/template/PAM50_levels.nii.gz` even if T2w manual disc label exists.
|
process
|
no use of manual disc label for in process data sh in process data sh the manual disc label is only used in the function label if does not exist called for disc labeling but if the subject moved between et as discussed in issue we will add manual identification of the disc for the problem is that file manual label for will never be used in process data sh as it is right now the function label if does not exist is not used here it will only use labeling from label template levels nii gz even if manual disc label exists
| 1
|
21,380
| 29,202,228,868
|
IssuesEvent
|
2023-05-21 00:37:00
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Hibrido / Barueri, São Paulo, Brazil] Database Consultant (Oracle) na Coodesh
|
SALVADOR BANCO DE DADOS MYSQL SQL REQUISITOS Telecomunicações FIREWALL PROCESSOS GITHUB BACKUP SEGURANÇA UMA C QUALIDADE NEGÓCIOS MONITORAMENTO ALOCADO Stale
|
## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/database-consultant-oracle-175423911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Grupo Telefônica</strong> está em busca de <strong><ins>Database Consultant (Oracle)</ins></strong> para compor seu time!</p>
<p></p>
<p>A Unidade de Negócio de Cloud da Telefónica Tech chega ao Brasil para habilitar as empresas com soluções de cloud híbrida e multi-cloud através de um portfólio de serviços ágeis e inovadores. </p>
<p></p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Atendimento de incidentes, câmbios e requisições de serviço;</li>
<li>Instalação, configuração, upgrade e migração de versões de BD;</li>
<li>Aplicação de patch em BD;</li>
<li>Será responsável pela realização de análises preventivas e corretivas de bancos de dados relacionais, especialmente em banco de dados Oracle e SQL Server;</li>
<li>Planejar e estruturar o banco de dados, garantindo a melhor arquitetura, integridade dos dados, segurança de acesso, identificando oportunidades de melhoria na construção e utilização;</li>
<li>Efetuar monitoramento de banco de dados, análise e tunning de performance;</li>
<li>Administração replicas via dataguard e golden gate;</li>
<li>Auxílio em pontos de contenção ou consumo excessivo de recursos;</li>
<li>Apoiar/melhorar rotinas de backup/restore;</li>
<li>Análise de infra-estrutura e capacity planning;</li>
<li>Habilidade com tunning de queries;</li>
<li>Atuar em ambientes de alta disponibilidade;</li>
<li>Criação e automatizar a criação de relatórios técnicos para compartilhar com nossos clientes.</li>
</ul>
## Telefônica :
<p>Somos uma empresa do Grupo Telefônica, líder em telecomunicações no Brasil. Trabalhamos com o propósito de Digitalizar para Aproximar pessoas, negócios e toda sociedade, construindo uma nação mais conectada e transformando a vida dos brasileiros. </p>
<p>Buscamos ampliar a autonomia, a personalização e as escolhas em tempo real dos nossos clientes, colocando-os no comando da sua vida digital, com segurança e confiabilidade – tudo isso com a qualidade que só a Vivo tem.</p><a href='https://coodesh.com/empresas/telefonica'>Veja mais no site</a>
## Habilidades:
- Oracle
- Microsoft SQL Server
- Banco de dados relacionais (SQL)
## Local:
Barueri, São Paulo, Brazil
## Requisitos:
- Experiência sólida com ambientes produtivos críticos e de alta disponibilidade;
- Experiência e vivência comprovada com engines de bancos de dados Oracle, SQL Server e MySQL;
- Sólidos conhecimentos em performance tunning;
- Perfil Analítico, Hands on e Proativo;
- Conhecimento e experiência com processos Ágeis;
- Conhecimento na maioria das ferramentas: Dataguard, Golden Gate, Real Application Test (RAT), RMAN, Enterprise Manager, Database Firewall, Data Masking;
- Superior Completo - Certificado Conclusão.
## Diferenciais:
- Certificações Oracle e SQL serão diferenciais.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Database Consultant (Oracle) na Telefônica ](https://coodesh.com/vagas/database-consultant-oracle-175423911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
CLT
#### Categoria
Banco de Dados
|
1.0
|
[Hibrido / Barueri, São Paulo, Brazil] Database Consultant (Oracle) na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/database-consultant-oracle-175423911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Grupo Telefônica</strong> está em busca de <strong><ins>Database Consultant (Oracle)</ins></strong> para compor seu time!</p>
<p></p>
<p>A Unidade de Negócio de Cloud da Telefónica Tech chega ao Brasil para habilitar as empresas com soluções de cloud híbrida e multi-cloud através de um portfólio de serviços ágeis e inovadores. </p>
<p></p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Atendimento de incidentes, câmbios e requisições de serviço;</li>
<li>Instalação, configuração, upgrade e migração de versões de BD;</li>
<li>Aplicação de patch em BD;</li>
<li>Será responsável pela realização de análises preventivas e corretivas de bancos de dados relacionais, especialmente em banco de dados Oracle e SQL Server;</li>
<li>Planejar e estruturar o banco de dados, garantindo a melhor arquitetura, integridade dos dados, segurança de acesso, identificando oportunidades de melhoria na construção e utilização;</li>
<li>Efetuar monitoramento de banco de dados, análise e tunning de performance;</li>
<li>Administração replicas via dataguard e golden gate;</li>
<li>Auxílio em pontos de contenção ou consumo excessivo de recursos;</li>
<li>Apoiar/melhorar rotinas de backup/restore;</li>
<li>Análise de infra-estrutura e capacity planning;</li>
<li>Habilidade com tunning de queries;</li>
<li>Atuar em ambientes de alta disponibilidade;</li>
<li>Criação e automatizar a criação de relatórios técnicos para compartilhar com nossos clientes.</li>
</ul>
## Telefônica :
<p>Somos uma empresa do Grupo Telefônica, líder em telecomunicações no Brasil. Trabalhamos com o propósito de Digitalizar para Aproximar pessoas, negócios e toda sociedade, construindo uma nação mais conectada e transformando a vida dos brasileiros. </p>
<p>Buscamos ampliar a autonomia, a personalização e as escolhas em tempo real dos nossos clientes, colocando-os no comando da sua vida digital, com segurança e confiabilidade – tudo isso com a qualidade que só a Vivo tem.</p><a href='https://coodesh.com/empresas/telefonica'>Veja mais no site</a>
## Habilidades:
- Oracle
- Microsoft SQL Server
- Banco de dados relacionais (SQL)
## Local:
Barueri, São Paulo, Brazil
## Requisitos:
- Experiência sólida com ambientes produtivos críticos e de alta disponibilidade;
- Experiência e vivência comprovada com engines de bancos de dados Oracle, SQL Server e MySQL;
- Sólidos conhecimentos em performance tunning;
- Perfil Analítico, Hands on e Proativo;
- Conhecimento e experiência com processos Ágeis;
- Conhecimento na maioria das ferramentas: Dataguard, Golden Gate, Real Application Test (RAT), RMAN, Enterprise Manager, Database Firewall, Data Masking;
- Superior Completo - Certificado Conclusão.
## Diferenciais:
- Certificações Oracle e SQL serão diferenciais.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Database Consultant (Oracle) na Telefônica ](https://coodesh.com/vagas/database-consultant-oracle-175423911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
CLT
#### Categoria
Banco de Dados
|
process
|
database consultant oracle na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a grupo telefônica está em busca de database consultant oracle para compor seu time a unidade de negócio de cloud da telefónica tech chega ao brasil para habilitar as empresas com soluções de cloud híbrida e multi cloud através de um portfólio de serviços ágeis e inovadores nbsp responsabilidades atendimento de incidentes câmbios e requisições de serviço instalação configuração upgrade e migração de versões de bd aplicação de patch em bd será responsável pela realização de análises preventivas e corretivas de bancos de dados relacionais especialmente em banco de dados oracle e sql server planejar e estruturar o banco de dados garantindo a melhor arquitetura integridade dos dados segurança de acesso identificando oportunidades de melhoria na construção e utilização efetuar monitoramento de banco de dados análise e tunning de performance administração replicas via dataguard e golden gate auxílio em pontos de contenção ou consumo excessivo de recursos apoiar melhorar rotinas de backup restore análise de infra estrutura e capacity planning habilidade com tunning de queries atuar em ambientes de alta disponibilidade criação e automatizar a criação de relatórios técnicos para compartilhar com nossos clientes telefônica somos uma empresa do grupo telefônica líder em telecomunicações no brasil trabalhamos com o propósito de digitalizar para aproximar pessoas negócios e toda sociedade construindo uma nação mais conectada e transformando a vida dos brasileiros nbsp buscamos ampliar a autonomia a personalização e as escolhas em tempo real dos nossos clientes colocando os no comando da sua vida digital com segurança e confiabilidade – tudo isso com a qualidade que só a vivo tem habilidades oracle microsoft sql server banco de dados relacionais sql local barueri são paulo brazil requisitos experiência sólida com ambientes produtivos críticos e de alta disponibilidade experiência e vivência comprovada com engines de bancos de dados oracle sql server e mysql sólidos conhecimentos em performance tunning perfil analítico hands on e proativo conhecimento e experiência com processos ágeis conhecimento na maioria das ferramentas dataguard golden gate real application test rat rman enterprise manager database firewall data masking superior completo certificado conclusão diferenciais certificações oracle e sql serão diferenciais como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação alocado regime clt categoria banco de dados
| 1
|
308,751
| 9,449,471,994
|
IssuesEvent
|
2019-04-16 02:03:53
|
PMEAL/OpenPNM
|
https://api.github.com/repos/PMEAL/OpenPNM
|
closed
|
scaling on plot_connections is messed up in 2D
|
Priority - Low bug
|
When plotting coordinates in 2D, it scale from 0,0 but when plotting connections, it zooms in to 0.5, 0.5
|
1.0
|
scaling on plot_connections is messed up in 2D - When plotting coordinates in 2D, it scale from 0,0 but when plotting connections, it zooms in to 0.5, 0.5
|
non_process
|
scaling on plot connections is messed up in when plotting coordinates in it scale from but when plotting connections it zooms in to
| 0
|
203,906
| 15,890,724,780
|
IssuesEvent
|
2021-04-10 16:28:27
|
veg-share/frontend-ui
|
https://api.github.com/repos/veg-share/frontend-ui
|
closed
|
Dev Issue : Initial SetUp
|
Developer Phase 1 documentation set-up
|
**What is the issue?**
> Set up initial build of React app
install updated dependencies
establish local/remote repository connections for all members
-[ ] https://www.npmjs.com/package/react-modal
|
1.0
|
Dev Issue : Initial SetUp - **What is the issue?**
> Set up initial build of React app
install updated dependencies
establish local/remote repository connections for all members
-[ ] https://www.npmjs.com/package/react-modal
|
non_process
|
dev issue initial setup what is the issue set up initial build of react app install updated dependencies establish local remote repository connections for all members
| 0
|
759,494
| 26,597,801,969
|
IssuesEvent
|
2023-01-23 13:46:29
|
stratosphererl/stratosphere
|
https://api.github.com/repos/stratosphererl/stratosphere
|
opened
|
[SPIKE] - Consolidating CSS and Components for a unified and responsive design system
|
type: spike priority: medium work: complicated [2] area: infra
|
# Context
To improve the maintainability and consistency of our design system, we want to consolidate our CSS and components and make them responsive. Our current system uses tailwind components, but there are issues with code smell and lack of generic components. We want to research options for unifying the design and feel, such as using a component library or creating an in-house solution.
# Timebox
3 days
# Tasks
- [ ] Research and evaluate different CSS and component consolidation techniques and tools
- [ ] Audit the current CSS and components, identifying areas for improvement and consolidation
- [ ] Research and evaluate different responsive design techniques and tools
- [ ] Research and evaluate different component libraries and in-house solutions for creating a unified design system
- [ ] Create a plan for consolidating the CSS and components and making them responsive
# Notes
- Storybook is a good component testing library that isolates components.
|
1.0
|
[SPIKE] - Consolidating CSS and Components for a unified and responsive design system - # Context
To improve the maintainability and consistency of our design system, we want to consolidate our CSS and components and make them responsive. Our current system uses tailwind components, but there are issues with code smell and lack of generic components. We want to research options for unifying the design and feel, such as using a component library or creating an in-house solution.
# Timebox
3 days
# Tasks
- [ ] Research and evaluate different CSS and component consolidation techniques and tools
- [ ] Audit the current CSS and components, identifying areas for improvement and consolidation
- [ ] Research and evaluate different responsive design techniques and tools
- [ ] Research and evaluate different component libraries and in-house solutions for creating a unified design system
- [ ] Create a plan for consolidating the CSS and components and making them responsive
# Notes
- Storybook is a good component testing library that isolates components.
|
non_process
|
consolidating css and components for a unified and responsive design system context to improve the maintainability and consistency of our design system we want to consolidate our css and components and make them responsive our current system uses tailwind components but there are issues with code smell and lack of generic components we want to research options for unifying the design and feel such as using a component library or creating an in house solution timebox days tasks research and evaluate different css and component consolidation techniques and tools audit the current css and components identifying areas for improvement and consolidation research and evaluate different responsive design techniques and tools research and evaluate different component libraries and in house solutions for creating a unified design system create a plan for consolidating the css and components and making them responsive notes storybook is a good component testing library that isolates components
| 0
|
20,725
| 27,425,740,748
|
IssuesEvent
|
2023-03-01 20:08:32
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
opened
|
Cooling Fan Power Updates
|
bug Calculator Process Cooling
|
* move Water Flow Rate and rename to "Rated Cooling Tower Water Flow Rate" (if name is too long - just go with "Rated Water Flow Rate")
* Y-axis label unit is hp not %
* Add to end of calculator help text (like the note in Process heating's "O2 Enrichment"):
``` NOTE: This calculator assumes the cooling tower has a constant water flow rate and cannot estimate the fan power for variable water flow conditions.```

|
1.0
|
Cooling Fan Power Updates - * move Water Flow Rate and rename to "Rated Cooling Tower Water Flow Rate" (if name is too long - just go with "Rated Water Flow Rate")
* Y-axis label unit is hp not %
* Add to end of calculator help text (like the note in Process heating's "O2 Enrichment"):
``` NOTE: This calculator assumes the cooling tower has a constant water flow rate and cannot estimate the fan power for variable water flow conditions.```

|
process
|
cooling fan power updates move water flow rate and rename to rated cooling tower water flow rate if name is too long just go with rated water flow rate y axis label unit is hp not add to end of calculator help text like the note in process heating s enrichment note this calculator assumes the cooling tower has a constant water flow rate and cannot estimate the fan power for variable water flow conditions
| 1
|
106,435
| 4,272,326,717
|
IssuesEvent
|
2016-07-13 14:13:11
|
chocolatey/choco
|
https://api.github.com/repos/chocolatey/choco
|
closed
|
64bit 7z.exe on 32bit system in chocolatey\tools
|
3 - Done Bug Priority_HIGH
|
I'm working on a 32bit system. If I want to install a zip package, choco install fails with the following message:
`ERROR: Exception calling "Start" with "0" argument(s): "The specified executable is not a valid application for this OS platform."`
I checked the C:\ProgramData\chocolatey\tools\7z.exe and it is really a 64bit version.
If I change it to a 32bit build, the install works well.
### Output Log
```
2016-06-28 09:07:36,817 [INFO ] - ============================================================
2016-06-28 09:07:36,818 [INFO ] - Chocolatey v0.9.10.3
2016-06-28 09:07:36,822 [DEBUG] - Chocolatey is running on Windows v 6.1.7601.65536
2016-06-28 09:07:36,824 [DEBUG] - Attempting to delete file "C:/ProgramData/chocolatey/choco.exe.old".
2016-06-28 09:07:36,827 [DEBUG] - Attempting to delete file "C:\ProgramData\chocolatey\choco.exe.old".
2016-06-28 09:07:36,836 [DEBUG] - Command line: "C:\ProgramData\chocolatey\choco.exe" install vim-tux.portable -fdv -s C:\ProgramData\chocolatey\lib-bad\vim-tux.portable
2016-06-28 09:07:36,838 [DEBUG] - Received arguments: install vim-tux.portable -fdv -s C:\ProgramData\chocolatey\lib-bad\vim-tux.portable
2016-06-28 09:07:36,875 [DEBUG] - RemovePendingPackagesTask is now ready and waiting for PreRunMessage.
2016-06-28 09:07:36,879 [DEBUG] - Sending message 'PreRunMessage' out if there are subscribers...
2016-06-28 09:07:36,883 [DEBUG] - [Pending] Removing all pending packages that should not be considered installed...
2016-06-28 09:07:36,936 [DEBUG] - The source 'C:\ProgramData\chocolatey\lib-bad\vim-tux.portable' evaluated to a 'normal' source type
2016-06-28 09:07:36,939 [DEBUG] -
NOTE: Hiding sensitive configuration data! Please double and triple
check to be sure no sensitive data is shown, especially if copying
output to a gist for review.
2016-06-28 09:07:36,945 [DEBUG] - Configuration: CommandName='install'|
CacheLocation='C:\Users\xxxxx\AppData\Local\Temp\chocolatey'|
ContainsLegacyPackageInstalls='True'|
CommandExecutionTimeoutSeconds='2700'|WebRequestTimeoutSeconds='30'|
Sources='C:\ProgramData\chocolatey\lib-bad\vim-tux.portable'|
SourceType='normal'|Debug='True'|Verbose='True'|Force='True'|
Noop='False'|HelpRequested='False'|RegularOutput='True'|
QuietOutput='False'|PromptForConfirmation='False'|
AcceptLicense='False'|
AllowUnofficialBuild='False'|Input='vim-tux.portable'|
AllVersions='False'|SkipPackageInstallProvider='False'|
PackageNames='vim-tux.portable'|Prerelease='False'|ForceX86='False'|
OverrideArguments='False'|NotSilent='False'|IgnoreDependencies='False'|
AllowMultipleVersions='False'|AllowDowngrade='False'|
ForceDependencies='False'|Information.PlatformType='Windows'|
Information.PlatformVersion='6.1.7601.65536'|
Information.PlatformName='Windows 7'|
Information.ChocolateyVersion='0.9.10.3'|
Information.ChocolateyProductVersion='0.9.10.3'|
Information.FullName='choco, Version=0.9.10.3, Culture=neutral, PublicKeyToken=79d02ea9cad655eb'|
Information.Is64Bit='False'|Information.IsInteractive='True'|
Information.IsUserAdministrator='True'|
Information.IsProcessElevated='True'|
Information.IsLicensedVersion='False'|Features.AutoUninstaller='True'|
Features.CheckSumFiles='True'|Features.FailOnAutoUninstaller='False'|
Features.FailOnStandardError='False'|Features.UsePowerShellHost='True'|
Features.LogEnvironmentValues='False'|Features.VirusCheck='False'|
Features.FailOnInvalidOrMissingLicense='False'|
Features.IgnoreInvalidOptionsSwitches='True'|
Features.UsePackageExitCodes='True'|
Features.UseFipsCompliantChecksums='False'|
ListCommand.LocalOnly='False'|
ListCommand.IncludeRegistryPrograms='False'|ListCommand.PageSize='25'|
ListCommand.Exact='False'|ListCommand.ByIdOnly='False'|
ListCommand.IdStartsWith='False'|ListCommand.OrderByPopularity='False'|
ListCommand.ApprovedOnly='False'|
ListCommand.DownloadCacheAvailable='False'|
ListCommand.NotBroken='False'|UpgradeCommand.FailOnUnfound='False'|
UpgradeCommand.FailOnNotInstalled='False'|
UpgradeCommand.NotifyOnlyAvailableUpgrades='False'|
NewCommand.AutomaticPackage='False'|
NewCommand.UseOriginalTemplate='False'|SourceCommand.Command='unknown'|
SourceCommand.Priority='0'|FeatureCommand.Command='unknown'|
ConfigCommand.Command='unknown'|PinCommand.Command='unknown'|
...
Executing command ['C:\ProgramData\chocolatey\tools\7z.exe' x -aoa -bd -bb1 -o"C:\ProgramData\chocolatey\lib\vim-tux.portable\tools\vim74" -y "C:\Users\xxxxx\AppData\Local\Temp\chocolatey\vim-tux.portable\7.4.1949\complete-x86.7z"]
ERROR: Exception calling "Start" with "0" argument(s): "The specified executable is not a valid application for this OS platform."
at Get-ChocolateyUnzip, C:\ProgramData\chocolatey\helpers\functions\Get-ChocolateyUnzip.ps1: line 159
```
|
1.0
|
64bit 7z.exe on 32bit system in chocolatey\tools - I'm working on a 32bit system. If I want to install a zip package, choco install fails with the following message:
`ERROR: Exception calling "Start" with "0" argument(s): "The specified executable is not a valid application for this OS platform."`
I checked the C:\ProgramData\chocolatey\tools\7z.exe and it is really a 64bit version.
If I change it to a 32bit build, the install works well.
### Output Log
```
2016-06-28 09:07:36,817 [INFO ] - ============================================================
2016-06-28 09:07:36,818 [INFO ] - Chocolatey v0.9.10.3
2016-06-28 09:07:36,822 [DEBUG] - Chocolatey is running on Windows v 6.1.7601.65536
2016-06-28 09:07:36,824 [DEBUG] - Attempting to delete file "C:/ProgramData/chocolatey/choco.exe.old".
2016-06-28 09:07:36,827 [DEBUG] - Attempting to delete file "C:\ProgramData\chocolatey\choco.exe.old".
2016-06-28 09:07:36,836 [DEBUG] - Command line: "C:\ProgramData\chocolatey\choco.exe" install vim-tux.portable -fdv -s C:\ProgramData\chocolatey\lib-bad\vim-tux.portable
2016-06-28 09:07:36,838 [DEBUG] - Received arguments: install vim-tux.portable -fdv -s C:\ProgramData\chocolatey\lib-bad\vim-tux.portable
2016-06-28 09:07:36,875 [DEBUG] - RemovePendingPackagesTask is now ready and waiting for PreRunMessage.
2016-06-28 09:07:36,879 [DEBUG] - Sending message 'PreRunMessage' out if there are subscribers...
2016-06-28 09:07:36,883 [DEBUG] - [Pending] Removing all pending packages that should not be considered installed...
2016-06-28 09:07:36,936 [DEBUG] - The source 'C:\ProgramData\chocolatey\lib-bad\vim-tux.portable' evaluated to a 'normal' source type
2016-06-28 09:07:36,939 [DEBUG] -
NOTE: Hiding sensitive configuration data! Please double and triple
check to be sure no sensitive data is shown, especially if copying
output to a gist for review.
2016-06-28 09:07:36,945 [DEBUG] - Configuration: CommandName='install'|
CacheLocation='C:\Users\xxxxx\AppData\Local\Temp\chocolatey'|
ContainsLegacyPackageInstalls='True'|
CommandExecutionTimeoutSeconds='2700'|WebRequestTimeoutSeconds='30'|
Sources='C:\ProgramData\chocolatey\lib-bad\vim-tux.portable'|
SourceType='normal'|Debug='True'|Verbose='True'|Force='True'|
Noop='False'|HelpRequested='False'|RegularOutput='True'|
QuietOutput='False'|PromptForConfirmation='False'|
AcceptLicense='False'|
AllowUnofficialBuild='False'|Input='vim-tux.portable'|
AllVersions='False'|SkipPackageInstallProvider='False'|
PackageNames='vim-tux.portable'|Prerelease='False'|ForceX86='False'|
OverrideArguments='False'|NotSilent='False'|IgnoreDependencies='False'|
AllowMultipleVersions='False'|AllowDowngrade='False'|
ForceDependencies='False'|Information.PlatformType='Windows'|
Information.PlatformVersion='6.1.7601.65536'|
Information.PlatformName='Windows 7'|
Information.ChocolateyVersion='0.9.10.3'|
Information.ChocolateyProductVersion='0.9.10.3'|
Information.FullName='choco, Version=0.9.10.3, Culture=neutral, PublicKeyToken=79d02ea9cad655eb'|
Information.Is64Bit='False'|Information.IsInteractive='True'|
Information.IsUserAdministrator='True'|
Information.IsProcessElevated='True'|
Information.IsLicensedVersion='False'|Features.AutoUninstaller='True'|
Features.CheckSumFiles='True'|Features.FailOnAutoUninstaller='False'|
Features.FailOnStandardError='False'|Features.UsePowerShellHost='True'|
Features.LogEnvironmentValues='False'|Features.VirusCheck='False'|
Features.FailOnInvalidOrMissingLicense='False'|
Features.IgnoreInvalidOptionsSwitches='True'|
Features.UsePackageExitCodes='True'|
Features.UseFipsCompliantChecksums='False'|
ListCommand.LocalOnly='False'|
ListCommand.IncludeRegistryPrograms='False'|ListCommand.PageSize='25'|
ListCommand.Exact='False'|ListCommand.ByIdOnly='False'|
ListCommand.IdStartsWith='False'|ListCommand.OrderByPopularity='False'|
ListCommand.ApprovedOnly='False'|
ListCommand.DownloadCacheAvailable='False'|
ListCommand.NotBroken='False'|UpgradeCommand.FailOnUnfound='False'|
UpgradeCommand.FailOnNotInstalled='False'|
UpgradeCommand.NotifyOnlyAvailableUpgrades='False'|
NewCommand.AutomaticPackage='False'|
NewCommand.UseOriginalTemplate='False'|SourceCommand.Command='unknown'|
SourceCommand.Priority='0'|FeatureCommand.Command='unknown'|
ConfigCommand.Command='unknown'|PinCommand.Command='unknown'|
...
Executing command ['C:\ProgramData\chocolatey\tools\7z.exe' x -aoa -bd -bb1 -o"C:\ProgramData\chocolatey\lib\vim-tux.portable\tools\vim74" -y "C:\Users\xxxxx\AppData\Local\Temp\chocolatey\vim-tux.portable\7.4.1949\complete-x86.7z"]
ERROR: Exception calling "Start" with "0" argument(s): "The specified executable is not a valid application for this OS platform."
at Get-ChocolateyUnzip, C:\ProgramData\chocolatey\helpers\functions\Get-ChocolateyUnzip.ps1: line 159
```
|
non_process
|
exe on system in chocolatey tools i m working on a system if i want to install a zip package choco install fails with the following message error exception calling start with argument s the specified executable is not a valid application for this os platform i checked the c programdata chocolatey tools exe and it is really a version if i change it to a build the install works well output log chocolatey chocolatey is running on windows v attempting to delete file c programdata chocolatey choco exe old attempting to delete file c programdata chocolatey choco exe old command line c programdata chocolatey choco exe install vim tux portable fdv s c programdata chocolatey lib bad vim tux portable received arguments install vim tux portable fdv s c programdata chocolatey lib bad vim tux portable removependingpackagestask is now ready and waiting for prerunmessage sending message prerunmessage out if there are subscribers removing all pending packages that should not be considered installed the source c programdata chocolatey lib bad vim tux portable evaluated to a normal source type note hiding sensitive configuration data please double and triple check to be sure no sensitive data is shown especially if copying output to a gist for review configuration commandname install cachelocation c users xxxxx appdata local temp chocolatey containslegacypackageinstalls true commandexecutiontimeoutseconds webrequesttimeoutseconds sources c programdata chocolatey lib bad vim tux portable sourcetype normal debug true verbose true force true noop false helprequested false regularoutput true quietoutput false promptforconfirmation false acceptlicense false allowunofficialbuild false input vim tux portable allversions false skippackageinstallprovider false packagenames vim tux portable prerelease false false overridearguments false notsilent false ignoredependencies false allowmultipleversions false allowdowngrade false forcedependencies false information platformtype windows information platformversion information platformname windows information chocolateyversion information chocolateyproductversion information fullname choco version culture neutral publickeytoken information false information isinteractive true information isuseradministrator true information isprocesselevated true information islicensedversion false features autouninstaller true features checksumfiles true features failonautouninstaller false features failonstandarderror false features usepowershellhost true features logenvironmentvalues false features viruscheck false features failoninvalidormissinglicense false features ignoreinvalidoptionsswitches true features usepackageexitcodes true features usefipscompliantchecksums false listcommand localonly false listcommand includeregistryprograms false listcommand pagesize listcommand exact false listcommand byidonly false listcommand idstartswith false listcommand orderbypopularity false listcommand approvedonly false listcommand downloadcacheavailable false listcommand notbroken false upgradecommand failonunfound false upgradecommand failonnotinstalled false upgradecommand notifyonlyavailableupgrades false newcommand automaticpackage false newcommand useoriginaltemplate false sourcecommand command unknown sourcecommand priority featurecommand command unknown configcommand command unknown pincommand command unknown executing command error exception calling start with argument s the specified executable is not a valid application for this os platform at get chocolateyunzip c programdata chocolatey helpers functions get chocolateyunzip line
| 0
|
4,115
| 7,058,976,002
|
IssuesEvent
|
2018-01-04 22:44:39
|
Southclaws/pawn
|
https://api.github.com/repos/Southclaws/pawn
|
closed
|
Recursive relative include doesn't change CWD when using forward slash
|
state: stale type: pre-processor
|
Assume a matryoshka of kinds:
```
- test.pwn
- level_1
-- test.inc
-- level_2
--- test.inc
--- level_3
---- test.inc
```
When using backslash to specify path to included file, everything is nice. However, only that setup will work:
_test.pwn_
```
#include "level_1/test"
```
_level_1/test.inc_
```
#include "level_1\level_2/test"
```
_level_2/test.inc_
```
#include "level_1\level_2\level_3/test"
```
Is this intended behaviour?
|
1.0
|
Recursive relative include doesn't change CWD when using forward slash - Assume a matryoshka of kinds:
```
- test.pwn
- level_1
-- test.inc
-- level_2
--- test.inc
--- level_3
---- test.inc
```
When using backslash to specify path to included file, everything is nice. However, only that setup will work:
_test.pwn_
```
#include "level_1/test"
```
_level_1/test.inc_
```
#include "level_1\level_2/test"
```
_level_2/test.inc_
```
#include "level_1\level_2\level_3/test"
```
Is this intended behaviour?
|
process
|
recursive relative include doesn t change cwd when using forward slash assume a matryoshka of kinds test pwn level test inc level test inc level test inc when using backslash to specify path to included file everything is nice however only that setup will work test pwn include level test level test inc include level level test level test inc include level level level test is this intended behaviour
| 1
|
13,438
| 15,882,013,135
|
IssuesEvent
|
2021-04-09 15:29:29
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Auth][UI] Sign in screen > Error messages are overlapping with the screen elements
|
Auth server Bug P2 Process: Fixed Process: Tested QA Process: Tested dev
|
Sign in screen > Error messages are overlapping with the screen elements
[Note: Issue should be fixed for all the error messages]


|
3.0
|
[Auth][UI] Sign in screen > Error messages are overlapping with the screen elements - Sign in screen > Error messages are overlapping with the screen elements
[Note: Issue should be fixed for all the error messages]


|
process
|
sign in screen error messages are overlapping with the screen elements sign in screen error messages are overlapping with the screen elements
| 1
|
81,449
| 15,729,646,611
|
IssuesEvent
|
2021-03-29 15:04:18
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
"The scope of the variable XXX can be limited" not detected when variable is initilialized during declaration (Trac #272)
|
Improve check Incomplete Migration Migrated from Trac enhancement php-coderrr
|
Migrated from https://trac.cppcheck.net/ticket/272
```json
{
"status": "closed",
"changetime": "2009-08-16T19:17:41",
"description": "cppcheck is raising a warning style with the code above.\n\ncppcheck -q -a -s .\n[./main.c:8]: (style) The scope of the variable var can be limited\n\nBut when the variable \"var\" is initialized during declaration the warning is no more raised\n\n{{{\n#include <stdio.h>\n#include <stdlib.h>\n\nint main(void)\n{\n int j =0;\n int i;\n int var;\n int condition = 0;\n puts(\"!!! Hello World in C !!!\");\n if (condition) {\n puts(\"!!!Not possible!!!\");\n } else {\n do\n {\n for (var = 0; var < 10; ++var) {\n i++;\n }\n } while (0);\n }\n printf(\"\\n ===> i = %d\\n\", i);\n return EXIT_SUCCESS;\n}\n\n}}}\n",
"reporter": "paskalad",
"cc": "",
"resolution": "fixed",
"_ts": "1250450261000000",
"component": "Improve check",
"summary": "\"The scope of the variable XXX can be limited\" not detected when variable is initilialized during declaration",
"priority": "",
"keywords": "",
"time": "2009-05-01T08:31:52",
"milestone": "1.36",
"owner": "php-coderrr",
"type": "enhancement"
}
```
|
1.0
|
"The scope of the variable XXX can be limited" not detected when variable is initilialized during declaration (Trac #272) - Migrated from https://trac.cppcheck.net/ticket/272
```json
{
"status": "closed",
"changetime": "2009-08-16T19:17:41",
"description": "cppcheck is raising a warning style with the code above.\n\ncppcheck -q -a -s .\n[./main.c:8]: (style) The scope of the variable var can be limited\n\nBut when the variable \"var\" is initialized during declaration the warning is no more raised\n\n{{{\n#include <stdio.h>\n#include <stdlib.h>\n\nint main(void)\n{\n int j =0;\n int i;\n int var;\n int condition = 0;\n puts(\"!!! Hello World in C !!!\");\n if (condition) {\n puts(\"!!!Not possible!!!\");\n } else {\n do\n {\n for (var = 0; var < 10; ++var) {\n i++;\n }\n } while (0);\n }\n printf(\"\\n ===> i = %d\\n\", i);\n return EXIT_SUCCESS;\n}\n\n}}}\n",
"reporter": "paskalad",
"cc": "",
"resolution": "fixed",
"_ts": "1250450261000000",
"component": "Improve check",
"summary": "\"The scope of the variable XXX can be limited\" not detected when variable is initilialized during declaration",
"priority": "",
"keywords": "",
"time": "2009-05-01T08:31:52",
"milestone": "1.36",
"owner": "php-coderrr",
"type": "enhancement"
}
```
|
non_process
|
the scope of the variable xxx can be limited not detected when variable is initilialized during declaration trac migrated from json status closed changetime description cppcheck is raising a warning style with the code above n ncppcheck q a s n style the scope of the variable var can be limited n nbut when the variable var is initialized during declaration the warning is no more raised n n n include n include n nint main void n n int j n int i n int var n int condition n puts hello world in c n if condition n puts not possible n else n do n n for var var i d n i n return exit success n n n n reporter paskalad cc resolution fixed ts component improve check summary the scope of the variable xxx can be limited not detected when variable is initilialized during declaration priority keywords time milestone owner php coderrr type enhancement
| 0
|
43,179
| 12,970,495,281
|
IssuesEvent
|
2020-07-21 09:25:18
|
logzio/apollo
|
https://api.github.com/repos/logzio/apollo
|
closed
|
WS-2019-0331 (Medium) detected in handlebars-2.0.0.tgz
|
security vulnerability
|
## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-2.0.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-2.0.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-2.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/apollo/ui/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/apollo/ui/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- grunt-google-cdn-0.4.3.tgz (Root Library)
- google-cdn-0.7.0.tgz
- bower-1.3.12.tgz
- :x: **handlebars-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/logzio/apollo/commit/656f895f3b072d3da92ac4b9ec9c5c938f1e1c71">656f895f3b072d3da92ac4b9ec9c5c938f1e1c71</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
|
True
|
WS-2019-0331 (Medium) detected in handlebars-2.0.0.tgz - ## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-2.0.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-2.0.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-2.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/apollo/ui/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/apollo/ui/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- grunt-google-cdn-0.4.3.tgz (Root Library)
- google-cdn-0.7.0.tgz
- bower-1.3.12.tgz
- :x: **handlebars-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/logzio/apollo/commit/656f895f3b072d3da92ac4b9ec9c5c938f1e1c71">656f895f3b072d3da92ac4b9ec9c5c938f1e1c71</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
|
non_process
|
ws medium detected in handlebars tgz ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm apollo ui package json path to vulnerable library tmp ws scm apollo ui node modules handlebars package json dependency hierarchy grunt google cdn tgz root library google cdn tgz bower tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details arbitrary code execution vulnerability found in handlebars before lookup helper fails to validate templates attack may submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars
| 0
|
79,721
| 15,256,698,238
|
IssuesEvent
|
2021-02-20 21:22:47
|
cornell-dti/campus-density-android
|
https://api.github.com/repos/cornell-dti/campus-density-android
|
closed
|
Consider app architecture
|
big code in progress
|
We should try to refactor our code base so that it is more consistent with industry standards in terms of app architecture: https://developer.android.com/jetpack/docs/guide
### Overall Goals
The ultimate goal of this issue is to make our code easier to reason about, which will allow us to scale more effectively.
Implement:
- MVVM architecture
- reactive programming principles
- single source of truth for data
|
1.0
|
Consider app architecture - We should try to refactor our code base so that it is more consistent with industry standards in terms of app architecture: https://developer.android.com/jetpack/docs/guide
### Overall Goals
The ultimate goal of this issue is to make our code easier to reason about, which will allow us to scale more effectively.
Implement:
- MVVM architecture
- reactive programming principles
- single source of truth for data
|
non_process
|
consider app architecture we should try to refactor our code base so that it is more consistent with industry standards in terms of app architecture overall goals the ultimate goal of this issue is to make our code easier to reason about which will allow us to scale more effectively implement mvvm architecture reactive programming principles single source of truth for data
| 0
|
296,045
| 22,286,896,001
|
IssuesEvent
|
2022-06-11 19:28:48
|
thegooddocsproject/chronologue
|
https://api.github.com/repos/thegooddocsproject/chronologue
|
opened
|
Chronologue docs: Troubleshooting Framework
|
documentation
|
## Summary
"Troubleshooting Framework" is a *Troubleshooting that provides *technicians* with the general steps to prepare for a troubleshooting task.
### Research
- [ ] Determine user goals.
- [ ] Determine prerequisites
- [ ] Determine what steps the user has to take.
### Writing/Testing
- [ ] Create a new branch.
- [ ] Create a draft.
- [ ] Write a commentary in the document to highlight best practices.
- [ ] Take feedback notes for the template group.
- [ ] Create a PR.
### Review
- [ ] Grammatical review.
- [ ] Technical review.
### Publication
- [ ] Merge PR to `docs`.
### Resources
**Source file**: INSERT LINK IF SOURCE FILE ALREADY EXISTS
**Template**: INSERT LINK FOR THE TEMPLATE. All templates are located in: https://github.com/thegooddocsproject/templates/
**Feedback form**: INSERT LINK
|
1.0
|
Chronologue docs: Troubleshooting Framework - ## Summary
"Troubleshooting Framework" is a *Troubleshooting that provides *technicians* with the general steps to prepare for a troubleshooting task.
### Research
- [ ] Determine user goals.
- [ ] Determine prerequisites
- [ ] Determine what steps the user has to take.
### Writing/Testing
- [ ] Create a new branch.
- [ ] Create a draft.
- [ ] Write a commentary in the document to highlight best practices.
- [ ] Take feedback notes for the template group.
- [ ] Create a PR.
### Review
- [ ] Grammatical review.
- [ ] Technical review.
### Publication
- [ ] Merge PR to `docs`.
### Resources
**Source file**: INSERT LINK IF SOURCE FILE ALREADY EXISTS
**Template**: INSERT LINK FOR THE TEMPLATE. All templates are located in: https://github.com/thegooddocsproject/templates/
**Feedback form**: INSERT LINK
|
non_process
|
chronologue docs troubleshooting framework summary troubleshooting framework is a troubleshooting that provides technicians with the general steps to prepare for a troubleshooting task research determine user goals determine prerequisites determine what steps the user has to take writing testing create a new branch create a draft write a commentary in the document to highlight best practices take feedback notes for the template group create a pr review grammatical review technical review publication merge pr to docs resources source file insert link if source file already exists template insert link for the template all templates are located in feedback form insert link
| 0
|
170,816
| 14,271,007,191
|
IssuesEvent
|
2020-11-21 10:16:26
|
livelyapps/pluploader
|
https://api.github.com/repos/livelyapps/pluploader
|
closed
|
Add .gifs to README
|
documentation todo
|
As we want to show the capabilities of the pluploader, we should include gifs into the README.
Minimum:
- [ ] Add gif in the header section to show main features of pluploader (upload, list, info, etc...)
Optional:
- [ ] Add gif to every heading to show full capabilities
|
1.0
|
Add .gifs to README - As we want to show the capabilities of the pluploader, we should include gifs into the README.
Minimum:
- [ ] Add gif in the header section to show main features of pluploader (upload, list, info, etc...)
Optional:
- [ ] Add gif to every heading to show full capabilities
|
non_process
|
add gifs to readme as we want to show the capabilities of the pluploader we should include gifs into the readme minimum add gif in the header section to show main features of pluploader upload list info etc optional add gif to every heading to show full capabilities
| 0
|
23,094
| 3,992,588,022
|
IssuesEvent
|
2016-05-10 02:46:48
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
X509 ExportMultiplePrivateKeys test failed in CI
|
2 - In Progress System.Security test bug Windows
|
http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_debug_prtest/3248/console
```
Discovering: System.Security.Cryptography.X509Certificates.Tests
Discovered: System.Security.Cryptography.X509Certificates.Tests
Starting: System.Security.Cryptography.X509Certificates.Tests
System.Security.Cryptography.X509Certificates.Tests.CollectionTests.ExportMultiplePrivateKeys [FAIL]
System.Security.Cryptography.CryptographicException : Error occurred during a cryptographic operation.
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\StorePal.Export.cs(82,0): at Internal.Cryptography.Pal.StorePal.Export(X509ContentType contentType, String password)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate2Collection.cs(123,0): at System.Security.Cryptography.X509Certificates.X509Certificate2Collection.Export(X509ContentType contentType, String password)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate2Collection.cs(116,0): at System.Security.Cryptography.X509Certificates.X509Certificate2Collection.Export(X509ContentType contentType)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\tests\CollectionTests.cs(482,0): at System.Security.Cryptography.X509Certificates.Tests.CollectionTests.ExportMultiplePrivateKeys()
Finished: System.Security.Cryptography.X509Certificates.Tests
```
|
1.0
|
X509 ExportMultiplePrivateKeys test failed in CI - http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_debug_prtest/3248/console
```
Discovering: System.Security.Cryptography.X509Certificates.Tests
Discovered: System.Security.Cryptography.X509Certificates.Tests
Starting: System.Security.Cryptography.X509Certificates.Tests
System.Security.Cryptography.X509Certificates.Tests.CollectionTests.ExportMultiplePrivateKeys [FAIL]
System.Security.Cryptography.CryptographicException : Error occurred during a cryptographic operation.
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\StorePal.Export.cs(82,0): at Internal.Cryptography.Pal.StorePal.Export(X509ContentType contentType, String password)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate2Collection.cs(123,0): at System.Security.Cryptography.X509Certificates.X509Certificate2Collection.Export(X509ContentType contentType, String password)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate2Collection.cs(116,0): at System.Security.Cryptography.X509Certificates.X509Certificate2Collection.Export(X509ContentType contentType)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Security.Cryptography.X509Certificates\tests\CollectionTests.cs(482,0): at System.Security.Cryptography.X509Certificates.Tests.CollectionTests.ExportMultiplePrivateKeys()
Finished: System.Security.Cryptography.X509Certificates.Tests
```
|
non_process
|
exportmultipleprivatekeys test failed in ci discovering system security cryptography tests discovered system security cryptography tests starting system security cryptography tests system security cryptography tests collectiontests exportmultipleprivatekeys system security cryptography cryptographicexception error occurred during a cryptographic operation stack trace d j workspace dotnet corefx windows debug prtest src system security cryptography src internal cryptography pal windows storepal export cs at internal cryptography pal storepal export contenttype string password d j workspace dotnet corefx windows debug prtest src system security cryptography src system security cryptography cs at system security cryptography export contenttype string password d j workspace dotnet corefx windows debug prtest src system security cryptography src system security cryptography cs at system security cryptography export contenttype d j workspace dotnet corefx windows debug prtest src system security cryptography tests collectiontests cs at system security cryptography tests collectiontests exportmultipleprivatekeys finished system security cryptography tests
| 0
|
631,545
| 20,153,649,164
|
IssuesEvent
|
2022-02-09 14:40:30
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Apostrophe not being unescaped in YouTube channel name
|
bug feature/rewards priority/P3 QA/Yes release-notes/exclude greaselion OS/Desktop
|
## Description
Apostrophe is appearing as `#39;` instead of an apostrophe `'` in YouTube channel name.
Example: https://www.youtube.com/c/LolasLifeLessons

|
1.0
|
Apostrophe not being unescaped in YouTube channel name - ## Description
Apostrophe is appearing as `#39;` instead of an apostrophe `'` in YouTube channel name.
Example: https://www.youtube.com/c/LolasLifeLessons

|
non_process
|
apostrophe not being unescaped in youtube channel name description apostrophe is appearing as instead of an apostrophe in youtube channel name example
| 0
|
111,600
| 24,157,714,423
|
IssuesEvent
|
2022-09-22 09:04:00
|
alibaba/nacos
|
https://api.github.com/repos/alibaba/nacos
|
closed
|
IV initial vector size is 16 bytes, take the first 16 bytes of secret Key
|
kind/code quality
|
<!-- Here is for bug reports and feature requests ONLY!
If you're looking for help, please check our mail list、WeChat group and the Gitter room.
Please try to use English to describe your issue, or at least provide a snippet of English translation.
我们鼓励使用英文,如果不能直接使用,可以使用翻译软件,您仍旧可以保留中文原文。
-->
## Issue Description
Type: *bug report* or *feature request*
### Describe what happened (or what feature you want)
### Describe what you expected to happen
### How to reproduce it (as minimally and precisely as possible)
1.
2.
3.
### Tell us your environment
### Anything else we need to know?
|
1.0
|
IV initial vector size is 16 bytes, take the first 16 bytes of secret Key -
<!-- Here is for bug reports and feature requests ONLY!
If you're looking for help, please check our mail list、WeChat group and the Gitter room.
Please try to use English to describe your issue, or at least provide a snippet of English translation.
我们鼓励使用英文,如果不能直接使用,可以使用翻译软件,您仍旧可以保留中文原文。
-->
## Issue Description
Type: *bug report* or *feature request*
### Describe what happened (or what feature you want)
### Describe what you expected to happen
### How to reproduce it (as minimally and precisely as possible)
1.
2.
3.
### Tell us your environment
### Anything else we need to know?
|
non_process
|
iv initial vector size is bytes take the first bytes of secret key here is for bug reports and feature requests only if you re looking for help please check our mail list、wechat group and the gitter room please try to use english to describe your issue or at least provide a snippet of english translation 我们鼓励使用英文,如果不能直接使用,可以使用翻译软件,您仍旧可以保留中文原文。 issue description type bug report or feature request describe what happened or what feature you want describe what you expected to happen how to reproduce it as minimally and precisely as possible tell us your environment anything else we need to know
| 0
|
17,140
| 22,678,312,039
|
IssuesEvent
|
2022-07-04 07:35:20
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Record log does not list correct starting before elements
|
kind/bug severity/low team/process-automation
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
In the `PROC_INST_CREATION` record we will log the elements which the process started at. If a process starts at 2 different element this record will log the same id twice, instead of logging both ids.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
Create a process instance. Start it at 2 different elements. When you look at the creation record it will say it started at the same element twice.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The records logs shows the correct elements that were started at.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
```
C PROC_INST_CREATION CREATE - #006-> -1 -1 - new <process "process_id_27"> (starting before elements: fork_id_18, fork_id_18) with variables: {boundary_timer_id_25=PT8760H, boundary_timer_id_23=PT8760H, correlationKey=default_correlation_key}
...
C PROC_INST ACTIVATE - #012->#006 K008 - RECEIVE_TASK "id_16" in <process "process_id_27"[K004]>
C PROC_INST ACTIVATE - #013->#006 K009 - PARALLEL_GATEWAY "fork_id_18" in <process "process_id_27"[K004]>
E PROC_INST_CREATION CREATED - #014->#006 K010 - new <process "process_id_27"> (starting before elements: fork_id_18, fork_id_18) with variables: {boundary_timer_id_25=PT8760H, boundary_timer_id_23=PT8760H, correlationKey=default_correlation_key}
```
This shows we started at `fork_id_18` twice, but when looking at the activate commands we can see that we actually started at `fork_id_18` and `id_16`
**Environment:**
- OS: <!-- [e.g. Linux] -->
- Zeebe Version: <!-- [e.g. 0.20.0] -->
- Configuration: <!-- [e.g. exporters etc.] -->
\cc @korthout
|
1.0
|
Record log does not list correct starting before elements - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
In the `PROC_INST_CREATION` record we will log the elements which the process started at. If a process starts at 2 different element this record will log the same id twice, instead of logging both ids.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
Create a process instance. Start it at 2 different elements. When you look at the creation record it will say it started at the same element twice.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The records logs shows the correct elements that were started at.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
```
C PROC_INST_CREATION CREATE - #006-> -1 -1 - new <process "process_id_27"> (starting before elements: fork_id_18, fork_id_18) with variables: {boundary_timer_id_25=PT8760H, boundary_timer_id_23=PT8760H, correlationKey=default_correlation_key}
...
C PROC_INST ACTIVATE - #012->#006 K008 - RECEIVE_TASK "id_16" in <process "process_id_27"[K004]>
C PROC_INST ACTIVATE - #013->#006 K009 - PARALLEL_GATEWAY "fork_id_18" in <process "process_id_27"[K004]>
E PROC_INST_CREATION CREATED - #014->#006 K010 - new <process "process_id_27"> (starting before elements: fork_id_18, fork_id_18) with variables: {boundary_timer_id_25=PT8760H, boundary_timer_id_23=PT8760H, correlationKey=default_correlation_key}
```
This shows we started at `fork_id_18` twice, but when looking at the activate commands we can see that we actually started at `fork_id_18` and `id_16`
**Environment:**
- OS: <!-- [e.g. Linux] -->
- Zeebe Version: <!-- [e.g. 0.20.0] -->
- Configuration: <!-- [e.g. exporters etc.] -->
\cc @korthout
|
process
|
record log does not list correct starting before elements describe the bug in the proc inst creation record we will log the elements which the process started at if a process starts at different element this record will log the same id twice instead of logging both ids to reproduce steps to reproduce the behavior if possible add a minimal reproducer code sample when using the java client create a process instance start it at different elements when you look at the creation record it will say it started at the same element twice expected behavior the records logs shows the correct elements that were started at log stacktrace c proc inst creation create new starting before elements fork id fork id with variables boundary timer id boundary timer id correlationkey default correlation key c proc inst activate receive task id in c proc inst activate parallel gateway fork id in e proc inst creation created new starting before elements fork id fork id with variables boundary timer id boundary timer id correlationkey default correlation key this shows we started at fork id twice but when looking at the activate commands we can see that we actually started at fork id and id environment os zeebe version configuration cc korthout
| 1
|
443,064
| 30,872,686,753
|
IssuesEvent
|
2023-08-03 12:27:42
|
cbyrohl/scida
|
https://api.github.com/repos/cbyrohl/scida
|
closed
|
info() does not exist for series
|
documentation enhancement
|
```
>>> from scida import load
>>> ds = load('/virgotng/universe/IllustrisTNG/TNG100-3/', units=True)
>>> ds.info()
```
as in the tutorial, gives an error:
```AttributeError: 'ArepoSimulation' object has no attribute 'info'```
|
1.0
|
info() does not exist for series - ```
>>> from scida import load
>>> ds = load('/virgotng/universe/IllustrisTNG/TNG100-3/', units=True)
>>> ds.info()
```
as in the tutorial, gives an error:
```AttributeError: 'ArepoSimulation' object has no attribute 'info'```
|
non_process
|
info does not exist for series from scida import load ds load virgotng universe illustristng units true ds info as in the tutorial gives an error attributeerror areposimulation object has no attribute info
| 0
|
3,194
| 6,261,026,027
|
IssuesEvent
|
2017-07-14 22:24:03
|
saguaroib/saguaro
|
https://api.github.com/repos/saguaroib/saguaro
|
closed
|
Chmod images after creation
|
Bug: Minor Image processing
|
First of all, when i run test.php it seems to work, but it remain a blank page.
Second:
when i try to make a post a bunch of errors appear.
Any solution?

|
1.0
|
Chmod images after creation - First of all, when i run test.php it seems to work, but it remain a blank page.
Second:
when i try to make a post a bunch of errors appear.
Any solution?

|
process
|
chmod images after creation first of all when i run test php it seems to work but it remain a blank page second when i try to make a post a bunch of errors appear any solution
| 1
|
85,496
| 16,670,103,669
|
IssuesEvent
|
2021-06-07 09:46:26
|
pytorch/vision
|
https://api.github.com/repos/pytorch/vision
|
closed
|
Port test/test_models.py to pytest
|
code quality good first issue module: models module: tests
|
Currently, most tests in [test/test_models.py](https://github.com/pytorch/vision/blob/master/test/test_models.py) rely on `unittest.TestCase`. Now that we support `pytest`, we want to remove the use of the `unittest` module.
For a similar issue: see https://github.com/pytorch/vision/issues/3945 and #3956
### Instructions
There are many tests in this file, but it should be possible to port all of them in one PR as they're all pretty straightforward. If you're interested in this issue, please comment below to indicate that you started working on a PR. Look below for some porting tips, and please don't hesitate to ask for help. Thanks!
In this file there are already some test functions like:
```py
def test_classification_model(model_name, dev):
ModelTester()._test_classification_model(model_name, dev)
```
For those, we should just copy the body of `_test_classification_model` into `test_classification_model` so that we can get rid of the `ModelTester` class altogether.
`@pytest.mark.parametrize('dev', _devs)` should be changed into `@pytest.mark.parametrize('dev', cpu_and_gpu())` where `cpu_and_gpu()` is in `common_utils`.
The tests that need cuda (e.g. `test_fasterrcnn_switch_devices`) should use the `@needs_cuda` decorator, also from `common_utils`. The test that *don't* need cuda should use the `@cpu_only` decorator.
### How to port a test to pytest
Porting a test from `unittest` to pytest is usually fairly straightforward. For a typical example, see https://github.com/pytorch/vision/pull/3907/files:
- take the test method out of the `Tester(unittest.TestCase)` class and just declare it as a function
- Replace `@unittest.skipIf` with `pytest.mark.skipif(cond, reason=...)`
- remove any use of `self.assertXYZ`.
- Typically `assertEqual(a, b)` can be replaced by `assert a == b` when a and b are pure python objects (scalars, tuples, lists), and otherwise we can rely on `assert_equal` which is already used in the file.
- `self.assertRaises` should be replaced with the `pytest.raises(Exp, match=...):` context manager, as done in https://github.com/pytorch/vision/pull/3907/files. Same for warnings with `pytest.warns`
- `self.assertTrue` should be replaced with a plain `assert`
- When a function uses for loops to tests multiple parameter values, one should use`pytest.mark.parametrize` instead, as done e.g. in https://github.com/pytorch/vision/pull/3907/files.
cc @pmeier
|
1.0
|
Port test/test_models.py to pytest - Currently, most tests in [test/test_models.py](https://github.com/pytorch/vision/blob/master/test/test_models.py) rely on `unittest.TestCase`. Now that we support `pytest`, we want to remove the use of the `unittest` module.
For a similar issue: see https://github.com/pytorch/vision/issues/3945 and #3956
### Instructions
There are many tests in this file, but it should be possible to port all of them in one PR as they're all pretty straightforward. If you're interested in this issue, please comment below to indicate that you started working on a PR. Look below for some porting tips, and please don't hesitate to ask for help. Thanks!
In this file there are already some test functions like:
```py
def test_classification_model(model_name, dev):
ModelTester()._test_classification_model(model_name, dev)
```
For those, we should just copy the body of `_test_classification_model` into `test_classification_model` so that we can get rid of the `ModelTester` class altogether.
`@pytest.mark.parametrize('dev', _devs)` should be changed into `@pytest.mark.parametrize('dev', cpu_and_gpu())` where `cpu_and_gpu()` is in `common_utils`.
The tests that need cuda (e.g. `test_fasterrcnn_switch_devices`) should use the `@needs_cuda` decorator, also from `common_utils`. The test that *don't* need cuda should use the `@cpu_only` decorator.
### How to port a test to pytest
Porting a test from `unittest` to pytest is usually fairly straightforward. For a typical example, see https://github.com/pytorch/vision/pull/3907/files:
- take the test method out of the `Tester(unittest.TestCase)` class and just declare it as a function
- Replace `@unittest.skipIf` with `pytest.mark.skipif(cond, reason=...)`
- remove any use of `self.assertXYZ`.
- Typically `assertEqual(a, b)` can be replaced by `assert a == b` when a and b are pure python objects (scalars, tuples, lists), and otherwise we can rely on `assert_equal` which is already used in the file.
- `self.assertRaises` should be replaced with the `pytest.raises(Exp, match=...):` context manager, as done in https://github.com/pytorch/vision/pull/3907/files. Same for warnings with `pytest.warns`
- `self.assertTrue` should be replaced with a plain `assert`
- When a function uses for loops to tests multiple parameter values, one should use`pytest.mark.parametrize` instead, as done e.g. in https://github.com/pytorch/vision/pull/3907/files.
cc @pmeier
|
non_process
|
port test test models py to pytest currently most tests in rely on unittest testcase now that we support pytest we want to remove the use of the unittest module for a similar issue see and instructions there are many tests in this file but it should be possible to port all of them in one pr as they re all pretty straightforward if you re interested in this issue please comment below to indicate that you started working on a pr look below for some porting tips and please don t hesitate to ask for help thanks in this file there are already some test functions like py def test classification model model name dev modeltester test classification model model name dev for those we should just copy the body of test classification model into test classification model so that we can get rid of the modeltester class altogether pytest mark parametrize dev devs should be changed into pytest mark parametrize dev cpu and gpu where cpu and gpu is in common utils the tests that need cuda e g test fasterrcnn switch devices should use the needs cuda decorator also from common utils the test that don t need cuda should use the cpu only decorator how to port a test to pytest porting a test from unittest to pytest is usually fairly straightforward for a typical example see take the test method out of the tester unittest testcase class and just declare it as a function replace unittest skipif with pytest mark skipif cond reason remove any use of self assertxyz typically assertequal a b can be replaced by assert a b when a and b are pure python objects scalars tuples lists and otherwise we can rely on assert equal which is already used in the file self assertraises should be replaced with the pytest raises exp match context manager as done in same for warnings with pytest warns self asserttrue should be replaced with a plain assert when a function uses for loops to tests multiple parameter values one should use pytest mark parametrize instead as done e g in cc pmeier
| 0
|
18,285
| 24,375,438,071
|
IssuesEvent
|
2022-10-04 00:11:03
|
OctopusDeploy/Issues
|
https://api.github.com/repos/OctopusDeploy/Issues
|
closed
|
All the queued server tasks are displayed when deploying for a project with a pending manual intervention task.
|
kind/enhancement size/medium feature/ops-processes area/core team/fire-and-motion
|
Related to: https://github.com/OctopusDeploy/OctopusDeploy/pull/4038
When there are pending tasks, it is not very clear to figure out which tasks to address in order to proceed with the current deployment. When there are any queued deployments for a project, it displays a list of all the queued tasks on the server rather than pointing pending tasks for that project.
Clicking on the task links do not take to a page where we can cancel/proceed to pending tasks.
There is information about existing pending tasks while changing the values of the `Blocks deployment` checkbox.
## Steps to reproduce
- Have a few projects with pending manual intervention steps.
- Create a new project with a manual intervention step and select `Prevent other deployments while awaiting intervention`
- Deploy a release. Let it wait for manual intervention.
- Now edit the step and set *Block Deployments* to `Allow another deployment to begin while awaiting intervention`.
- Create another release & deploy.
- At this point, all the queued tasks from all the projects are displayed.

# Issue 1:
1. When we deploy 0.0.2, it is queued behind 0.0.1 ONLY, but we are still showing the previous information - all tasks waiting in the queue.

# Issue 2:
On clicking the waiting tasks, it navigates to task summary page. It just says the task has not started yet. There is nothing that can be done to trigger this task.

There is nothing that can be done here to trigger these tasks again.
## Suggestion:
Instead of navigating to the task summary page, maybe navigate to its actionable page like

# Issue 3:
There is no information while changing the toggle value that there are any pending tasks for this (project-environment-tenant)

It will be helpful to have a link/dialog box that will navigate to pending tasks before changing the value for *block deployments*.
|
1.0
|
All the queued server tasks are displayed when deploying for a project with a pending manual intervention task. - Related to: https://github.com/OctopusDeploy/OctopusDeploy/pull/4038
When there are pending tasks, it is not very clear to figure out which tasks to address in order to proceed with the current deployment. When there are any queued deployments for a project, it displays a list of all the queued tasks on the server rather than pointing pending tasks for that project.
Clicking on the task links do not take to a page where we can cancel/proceed to pending tasks.
There is information about existing pending tasks while changing the values of the `Blocks deployment` checkbox.
## Steps to reproduce
- Have a few projects with pending manual intervention steps.
- Create a new project with a manual intervention step and select `Prevent other deployments while awaiting intervention`
- Deploy a release. Let it wait for manual intervention.
- Now edit the step and set *Block Deployments* to `Allow another deployment to begin while awaiting intervention`.
- Create another release & deploy.
- At this point, all the queued tasks from all the projects are displayed.

# Issue 1:
1. When we deploy 0.0.2, it is queued behind 0.0.1 ONLY, but we are still showing the previous information - all tasks waiting in the queue.

# Issue 2:
On clicking the waiting tasks, it navigates to task summary page. It just says the task has not started yet. There is nothing that can be done to trigger this task.

There is nothing that can be done here to trigger these tasks again.
## Suggestion:
Instead of navigating to the task summary page, maybe navigate to its actionable page like

# Issue 3:
There is no information while changing the toggle value that there are any pending tasks for this (project-environment-tenant)

It will be helpful to have a link/dialog box that will navigate to pending tasks before changing the value for *block deployments*.
|
process
|
all the queued server tasks are displayed when deploying for a project with a pending manual intervention task related to when there are pending tasks it is not very clear to figure out which tasks to address in order to proceed with the current deployment when there are any queued deployments for a project it displays a list of all the queued tasks on the server rather than pointing pending tasks for that project clicking on the task links do not take to a page where we can cancel proceed to pending tasks there is information about existing pending tasks while changing the values of the blocks deployment checkbox steps to reproduce have a few projects with pending manual intervention steps create a new project with a manual intervention step and select prevent other deployments while awaiting intervention deploy a release let it wait for manual intervention now edit the step and set block deployments to allow another deployment to begin while awaiting intervention create another release deploy at this point all the queued tasks from all the projects are displayed issue when we deploy it is queued behind only but we are still showing the previous information all tasks waiting in the queue issue on clicking the waiting tasks it navigates to task summary page it just says the task has not started yet there is nothing that can be done to trigger this task there is nothing that can be done here to trigger these tasks again suggestion instead of navigating to the task summary page maybe navigate to its actionable page like issue there is no information while changing the toggle value that there are any pending tasks for this project environment tenant it will be helpful to have a link dialog box that will navigate to pending tasks before changing the value for block deployments
| 1
|
105,220
| 16,624,731,884
|
IssuesEvent
|
2021-06-03 08:08:36
|
opfab/operatorfabric-core
|
https://api.github.com/repos/opfab/operatorfabric-core
|
closed
|
CVE-2021-22112 (High) detected in spring-security-web-5.4.2.jar - autoclosed
|
security vulnerability
|
## CVE-2021-22112 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-5.4.2.jar</b></p></summary>
<p>spring-security-web</p>
<p>Library home page: <a href="https://spring.io/spring-security">https://spring.io/spring-security</a></p>
<p>Path to dependency file: operatorfabric-core/tools/spring/spring-oauth2-utilities/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210602073831_GWAQKZ/downloadResource_AICIKJ/20210602074208/spring-security-web-5.4.2.jar</p>
<p>
Dependency Hierarchy:
- spring-security-oauth2-resource-server-5.4.2.jar (Root Library)
- :x: **spring-security-web-5.4.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opfab/operatorfabric-core/commit/02e564fdff7e533435a5a00f051178a638cdb3d7">02e564fdff7e533435a5a00f051178a638cdb3d7</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security 5.4.x prior to 5.4.4, 5.3.x prior to 5.3.8.RELEASE, 5.2.x prior to 5.2.9.RELEASE, and older unsupported versions can fail to save the SecurityContext if it is changed more than once in a single request.A malicious user cannot cause the bug to happen (it must be programmed in). However, if the application's intent is to only allow the user to run with elevated privileges in a small portion of the application, the bug can be leveraged to extend those privileges to the rest of the application.
<p>Publish Date: 2021-02-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22112>CVE-2021-22112</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22112">https://tanzu.vmware.com/security/cve-2021-22112</a></p>
<p>Release Date: 2021-02-23</p>
<p>Fix Resolution: org.springframework.security:spring-security-web:5.2.9,5.3.8,5.4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-22112 (High) detected in spring-security-web-5.4.2.jar - autoclosed - ## CVE-2021-22112 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-5.4.2.jar</b></p></summary>
<p>spring-security-web</p>
<p>Library home page: <a href="https://spring.io/spring-security">https://spring.io/spring-security</a></p>
<p>Path to dependency file: operatorfabric-core/tools/spring/spring-oauth2-utilities/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210602073831_GWAQKZ/downloadResource_AICIKJ/20210602074208/spring-security-web-5.4.2.jar</p>
<p>
Dependency Hierarchy:
- spring-security-oauth2-resource-server-5.4.2.jar (Root Library)
- :x: **spring-security-web-5.4.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opfab/operatorfabric-core/commit/02e564fdff7e533435a5a00f051178a638cdb3d7">02e564fdff7e533435a5a00f051178a638cdb3d7</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security 5.4.x prior to 5.4.4, 5.3.x prior to 5.3.8.RELEASE, 5.2.x prior to 5.2.9.RELEASE, and older unsupported versions can fail to save the SecurityContext if it is changed more than once in a single request.A malicious user cannot cause the bug to happen (it must be programmed in). However, if the application's intent is to only allow the user to run with elevated privileges in a small portion of the application, the bug can be leveraged to extend those privileges to the rest of the application.
<p>Publish Date: 2021-02-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22112>CVE-2021-22112</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22112">https://tanzu.vmware.com/security/cve-2021-22112</a></p>
<p>Release Date: 2021-02-23</p>
<p>Fix Resolution: org.springframework.security:spring-security-web:5.2.9,5.3.8,5.4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in spring security web jar autoclosed cve high severity vulnerability vulnerable library spring security web jar spring security web library home page a href path to dependency file operatorfabric core tools spring spring utilities build gradle path to vulnerable library tmp ws ua gwaqkz downloadresource aicikj spring security web jar dependency hierarchy spring security resource server jar root library x spring security web jar vulnerable library found in head commit a href found in base branch develop vulnerability details spring security x prior to x prior to release x prior to release and older unsupported versions can fail to save the securitycontext if it is changed more than once in a single request a malicious user cannot cause the bug to happen it must be programmed in however if the application s intent is to only allow the user to run with elevated privileges in a small portion of the application the bug can be leveraged to extend those privileges to the rest of the application publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security web step up your open source security game with whitesource
| 0
|
14,654
| 17,776,625,883
|
IssuesEvent
|
2021-08-30 20:07:42
|
OHS-Hosting-Infrastructure/infrastructure
|
https://api.github.com/repos/OHS-Hosting-Infrastructure/infrastructure
|
closed
|
[Technical Research] Evaluate and select task management system
|
state:question epic:process for:team
|
As the Hosting team, I need to use a task management system so that the team can properly plan their work.
Needs:
* Able to group tasks so that team can organize tasks in different ways
* Able to incorporate hierarchical structure of tasks so that team can understand which tasks need to be completed to complete an overall objective
* Able to incorporate timelines for tasks, both expected and actual so that team can keep any deadlines in mind
* Able to differentiate defects/bugs from new development to operational tasks so that prioritization can be easier
* Able to link to tasks within same or different system so that team understands dependencies between teams and/or work
* Able to incorporate task estimation so that team can assess what can be included in plan
* Able to incorporate attributes that are used in prioritization calculation so that team can systematically prioritize
* Able to quickly identify type of task, where it is in the workflow, group of tasks it belongs to, etc. so that they can be managed without going into individual items
* Able to differentiate access so that some users can only view while others can create, edit, and delete.
* Able to construct reports of execution so that team can analyze for retrospectives
* Able to integrate with source control systems to manage development workflow
* Able to export list and/or reports so that those without access to the system can get visibility
* Able to configure workflows of tasks so that team can work the way it needs
* Able to support sprints and kanban so that team can choose which way to work
Constraints:
* Cost is consideration
* System is already FedRAMP'd or has an ATO
(only Trello from Atlassian)
https://marketplace.fedramp.gov/
Outcome:
* Selection of a task management system that meets all the teams' and stakeholders' needs in the form of an ADR
|
1.0
|
[Technical Research] Evaluate and select task management system - As the Hosting team, I need to use a task management system so that the team can properly plan their work.
Needs:
* Able to group tasks so that team can organize tasks in different ways
* Able to incorporate hierarchical structure of tasks so that team can understand which tasks need to be completed to complete an overall objective
* Able to incorporate timelines for tasks, both expected and actual so that team can keep any deadlines in mind
* Able to differentiate defects/bugs from new development to operational tasks so that prioritization can be easier
* Able to link to tasks within same or different system so that team understands dependencies between teams and/or work
* Able to incorporate task estimation so that team can assess what can be included in plan
* Able to incorporate attributes that are used in prioritization calculation so that team can systematically prioritize
* Able to quickly identify type of task, where it is in the workflow, group of tasks it belongs to, etc. so that they can be managed without going into individual items
* Able to differentiate access so that some users can only view while others can create, edit, and delete.
* Able to construct reports of execution so that team can analyze for retrospectives
* Able to integrate with source control systems to manage development workflow
* Able to export list and/or reports so that those without access to the system can get visibility
* Able to configure workflows of tasks so that team can work the way it needs
* Able to support sprints and kanban so that team can choose which way to work
Constraints:
* Cost is consideration
* System is already FedRAMP'd or has an ATO
(only Trello from Atlassian)
https://marketplace.fedramp.gov/
Outcome:
* Selection of a task management system that meets all the teams' and stakeholders' needs in the form of an ADR
|
process
|
evaluate and select task management system as the hosting team i need to use a task management system so that the team can properly plan their work needs able to group tasks so that team can organize tasks in different ways able to incorporate hierarchical structure of tasks so that team can understand which tasks need to be completed to complete an overall objective able to incorporate timelines for tasks both expected and actual so that team can keep any deadlines in mind able to differentiate defects bugs from new development to operational tasks so that prioritization can be easier able to link to tasks within same or different system so that team understands dependencies between teams and or work able to incorporate task estimation so that team can assess what can be included in plan able to incorporate attributes that are used in prioritization calculation so that team can systematically prioritize able to quickly identify type of task where it is in the workflow group of tasks it belongs to etc so that they can be managed without going into individual items able to differentiate access so that some users can only view while others can create edit and delete able to construct reports of execution so that team can analyze for retrospectives able to integrate with source control systems to manage development workflow able to export list and or reports so that those without access to the system can get visibility able to configure workflows of tasks so that team can work the way it needs able to support sprints and kanban so that team can choose which way to work constraints cost is consideration system is already fedramp d or has an ato only trello from atlassian outcome selection of a task management system that meets all the teams and stakeholders needs in the form of an adr
| 1
|
4,262
| 7,189,088,033
|
IssuesEvent
|
2018-02-02 12:43:29
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Fail on purpose if cache has changed since the last time the scraper was run
|
apps-blockScrape status-inprocess type-enhancement
|
After every run of the blockScrape, blockAcct, miniBlock and all the monitors store the most recent value of the binary cache folder. If, the next time any of these is run, if it's different, do not proceed. The file that stores the latestCache value cannot be in the ~/.quickBlocks folder because it might be deleted.
**Question:** Where can one store persistent application data on Linux?
|
1.0
|
Fail on purpose if cache has changed since the last time the scraper was run - After every run of the blockScrape, blockAcct, miniBlock and all the monitors store the most recent value of the binary cache folder. If, the next time any of these is run, if it's different, do not proceed. The file that stores the latestCache value cannot be in the ~/.quickBlocks folder because it might be deleted.
**Question:** Where can one store persistent application data on Linux?
|
process
|
fail on purpose if cache has changed since the last time the scraper was run after every run of the blockscrape blockacct miniblock and all the monitors store the most recent value of the binary cache folder if the next time any of these is run if it s different do not proceed the file that stores the latestcache value cannot be in the quickblocks folder because it might be deleted question where can one store persistent application data on linux
| 1
|
10,150
| 13,044,162,567
|
IssuesEvent
|
2020-07-29 03:47:33
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `LastInsertIDWithID` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `LastInsertIDWithID` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `LastInsertIDWithID` from TiDB -
## Description
Port the scalar function `LastInsertIDWithID` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function lastinsertidwithid from tidb description port the scalar function lastinsertidwithid from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
21,554
| 29,868,310,605
|
IssuesEvent
|
2023-06-20 06:37:35
|
threefoldtech/zos
|
https://api.github.com/repos/threefoldtech/zos
|
closed
|
Dual configuration lost after node crashed
|
type_bug process_wontfix
|
Farmer noted that their node 5469 came back up in single interface mode after a crash, when it previously had a DUL interface set via RMB. This should be persistent, correct?
|
1.0
|
Dual configuration lost after node crashed - Farmer noted that their node 5469 came back up in single interface mode after a crash, when it previously had a DUL interface set via RMB. This should be persistent, correct?
|
process
|
dual configuration lost after node crashed farmer noted that their node came back up in single interface mode after a crash when it previously had a dul interface set via rmb this should be persistent correct
| 1
|
52,414
| 6,622,669,920
|
IssuesEvent
|
2017-09-22 01:33:03
|
lesswrong-ru/lesswrong-ru
|
https://api.github.com/repos/lesswrong-ru/lesswrong-ru
|
closed
|
Новая тема: поправить отступы для основного текста
|
design
|
Увеличить левый и уменьшить правый, как на картинке: https://lesswrongru.slack.com/files/greenochre/F3YKDSM9D/pasted_image_at_2017_01_29_04_13_pm.png
|
1.0
|
Новая тема: поправить отступы для основного текста - Увеличить левый и уменьшить правый, как на картинке: https://lesswrongru.slack.com/files/greenochre/F3YKDSM9D/pasted_image_at_2017_01_29_04_13_pm.png
|
non_process
|
новая тема поправить отступы для основного текста увеличить левый и уменьшить правый как на картинке
| 0
|
9,818
| 12,826,982,710
|
IssuesEvent
|
2020-07-06 17:34:54
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
How do I share environments across projects?
|
Pri1 devops-cicd-process/tech devops/prod product-feedback
|
**How do I share environments across projects?**
We have organized our software in many different projects. However, we deploy most of our software to the same physical machines: Development, Staging, and Production.
From the looks of it, do I need to run the PowerShell registration script for each project environment? Why can't I share environments across projects?
Thanks for clarifying!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
How do I share environments across projects? - **How do I share environments across projects?**
We have organized our software in many different projects. However, we deploy most of our software to the same physical machines: Development, Staging, and Production.
From the looks of it, do I need to run the PowerShell registration script for each project environment? Why can't I share environments across projects?
Thanks for clarifying!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
how do i share environments across projects how do i share environments across projects we have organized our software in many different projects however we deploy most of our software to the same physical machines development staging and production from the looks of it do i need to run the powershell registration script for each project environment why can t i share environments across projects thanks for clarifying document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
11,186
| 13,957,697,520
|
IssuesEvent
|
2020-10-24 08:12:09
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
BE - AGIV: Discovery Service de la Région de Bruxelles-Capitale is unavailable
|
BE - Belgium Geoportal Harvesting process
|
Dear Nathalie, since a few days the Discovery Service de la Région de Bruxelles-Capitale is unavailable
http://geobru.irisnet.be/geonetwork/srv/fr/csw?request=GetCapabilities&service=CSW&version=2.0.2
Note that http://geobru.irisnet.be/geonetwork/srv/fr/csw?request=GetCapabilities&service=CSW&version=2.0.2
returns a capabilities document describing a service on a different endpoint ("http://www.geo.irisnet.be/geonetwork/srv/fre/csw")
<ows:OperationsMetadata>
<ows:Operation name="GetCapabilities">
<ows:DCP>
<ows:HTTP>
<ows:Get xlink:href="http://www.geo.irisnet.be/geonetwork/srv/fre/csw" />
<ows:Post xlink:href="http://www.geo.irisnet.be/geonetwork/srv/fre/csw" />
</ows:HTTP>
The INSPIRE Geoportal gets back this error message:
eu.europa.ec.inspire.geoportal.exception.GeoportalException: http://geobru.irisnet.be/geonetwork/srv/fr/csw?request=GetCapabilities&service=CSW&version=2.0.2: Connection reset
And from the browser I get:
Network Error (tcp_error)
A communication error occurred: ""
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.
For assistance, contact your network support team.
|
1.0
|
BE - AGIV: Discovery Service de la Région de Bruxelles-Capitale is unavailable - Dear Nathalie, since a few days the Discovery Service de la Région de Bruxelles-Capitale is unavailable
http://geobru.irisnet.be/geonetwork/srv/fr/csw?request=GetCapabilities&service=CSW&version=2.0.2
Note that http://geobru.irisnet.be/geonetwork/srv/fr/csw?request=GetCapabilities&service=CSW&version=2.0.2
returns a capabilities document describing a service on a different endpoint ("http://www.geo.irisnet.be/geonetwork/srv/fre/csw")
<ows:OperationsMetadata>
<ows:Operation name="GetCapabilities">
<ows:DCP>
<ows:HTTP>
<ows:Get xlink:href="http://www.geo.irisnet.be/geonetwork/srv/fre/csw" />
<ows:Post xlink:href="http://www.geo.irisnet.be/geonetwork/srv/fre/csw" />
</ows:HTTP>
The INSPIRE Geoportal gets back this error message:
eu.europa.ec.inspire.geoportal.exception.GeoportalException: http://geobru.irisnet.be/geonetwork/srv/fr/csw?request=GetCapabilities&service=CSW&version=2.0.2: Connection reset
And from the browser I get:
Network Error (tcp_error)
A communication error occurred: ""
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.
For assistance, contact your network support team.
|
process
|
be agiv discovery service de la région de bruxelles capitale is unavailable dear nathalie since a few days the discovery service de la r eacute gion de bruxelles capitale is unavailable note that returns a capabilities document describing a service on a different endpoint quot lt ows operationsmetadata gt lt ows operation name quot getcapabilities quot gt lt ows dcp gt lt ows http gt lt ows get xlink href quot gt lt ows post xlink href quot gt lt ows http gt the inspire geoportal gets back this error message eu europa ec inspire geoportal exception geoportalexception connection reset and from the browser i get network error tcp error a communication error occurred quot quot the web server may be down too busy or experiencing other problems preventing it from responding to requests you may wish to try again at a later time for assistance contact your network support team
| 1
|
15,906
| 20,111,845,165
|
IssuesEvent
|
2022-02-07 15:44:35
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
[Internal Error] The Hybrid Worker Extension failed to execute: {"Message":"Authentication failed for private links"}
|
automation/svc triaged cxp duplicate product-question process-automation/subsvc Pri2
|
I am using Azure Automation set as private per the recommendation on screen when creating an Azure Automation account. The goal is to run T-SQL against Azure SQL but since Azure Automation does not support a Private Link to Azure SQL yet, I set up a Hybrid Worker group and an Hybrid Worker. It added the Hybrid Worker and seems to be trying to install the extension but upon inspecting the VM (since it is failing to run a simple runbook) I see the following error:
[Internal Error] The Hybrid Worker Extension failed to execute: {"Message":"Authentication failed for private links"}
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
1.0
|
[Internal Error] The Hybrid Worker Extension failed to execute: {"Message":"Authentication failed for private links"} - I am using Azure Automation set as private per the recommendation on screen when creating an Azure Automation account. The goal is to run T-SQL against Azure SQL but since Azure Automation does not support a Private Link to Azure SQL yet, I set up a Hybrid Worker group and an Hybrid Worker. It added the Hybrid Worker and seems to be trying to install the extension but upon inspecting the VM (since it is failing to run a simple runbook) I see the following error:
[Internal Error] The Hybrid Worker Extension failed to execute: {"Message":"Authentication failed for private links"}
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
process
|
the hybrid worker extension failed to execute message authentication failed for private links i am using azure automation set as private per the recommendation on screen when creating an azure automation account the goal is to run t sql against azure sql but since azure automation does not support a private link to azure sql yet i set up a hybrid worker group and an hybrid worker it added the hybrid worker and seems to be trying to install the extension but upon inspecting the vm since it is failing to run a simple runbook i see the following error the hybrid worker extension failed to execute message authentication failed for private links document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias v ssudhir
| 1
|
6,440
| 9,545,071,552
|
IssuesEvent
|
2019-05-01 15:58:36
|
mick-warehime/sixth_corp
|
https://api.github.com/repos/mick-warehime/sixth_corp
|
closed
|
Fix circular imports
|
ai character development process
|
AI and Character have circular imports which we fix by not specifying return types. Perhaps we can use Stateful instead?
|
1.0
|
Fix circular imports - AI and Character have circular imports which we fix by not specifying return types. Perhaps we can use Stateful instead?
|
process
|
fix circular imports ai and character have circular imports which we fix by not specifying return types perhaps we can use stateful instead
| 1
|
9,793
| 12,806,551,255
|
IssuesEvent
|
2020-07-03 09:39:42
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
closed
|
Provide capability to support multiple message handlers based on message context
|
area:message-processing feature specs-required
|
Provide capability to support multiple message handlers based on message context. This would allow users to map inbound messages to different message handlers in the same compute unit.
This is important if different message types are shared on the same queue, migrating to a new message contract, etc.
It's up to the users to map the message to the message handler that should be called, this can be as simple as the following:
```csharp
.AddServiceBusPump()
.WithMessagHandler<OrderV1MessageHandler>(messageContext => messageContext.MessageType == MessageTypes.OrderV1);
.WithMessagHandler<OrderV2MessageHandler>(messageContext => messageContext.MessageType == MessageTypes.OrderV2);
```
But this is just pseudo code to prove the point, can also be an action that is being passed when creating the pump.
|
1.0
|
Provide capability to support multiple message handlers based on message context - Provide capability to support multiple message handlers based on message context. This would allow users to map inbound messages to different message handlers in the same compute unit.
This is important if different message types are shared on the same queue, migrating to a new message contract, etc.
It's up to the users to map the message to the message handler that should be called, this can be as simple as the following:
```csharp
.AddServiceBusPump()
.WithMessagHandler<OrderV1MessageHandler>(messageContext => messageContext.MessageType == MessageTypes.OrderV1);
.WithMessagHandler<OrderV2MessageHandler>(messageContext => messageContext.MessageType == MessageTypes.OrderV2);
```
But this is just pseudo code to prove the point, can also be an action that is being passed when creating the pump.
|
process
|
provide capability to support multiple message handlers based on message context provide capability to support multiple message handlers based on message context this would allow users to map inbound messages to different message handlers in the same compute unit this is important if different message types are shared on the same queue migrating to a new message contract etc it s up to the users to map the message to the message handler that should be called this can be as simple as the following csharp addservicebuspump withmessaghandler messagecontext messagecontext messagetype messagetypes withmessaghandler messagecontext messagecontext messagetype messagetypes but this is just pseudo code to prove the point can also be an action that is being passed when creating the pump
| 1
|
17,466
| 23,290,938,352
|
IssuesEvent
|
2022-08-05 22:48:01
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Clarify desired platform attributes on Process types
|
area-System.Diagnostics.Process in-pr os-maccatalyst
|
In https://github.com/dotnet/runtime/pull/68432#discussion_r857929495 @eerhardt raised an issue about an inconsistency in the process platform attributes. Specifically we're missing `SupportedOSPlatformAttribute("maccatalyst")` on `Process.PrivilegedProcessorTime`
https://github.com/dotnet/runtime/pull/61507 added most of these @akoeplinger @simonrozsival -- perhaps you can take a look?
cc @buyaa-n
|
1.0
|
Clarify desired platform attributes on Process types - In https://github.com/dotnet/runtime/pull/68432#discussion_r857929495 @eerhardt raised an issue about an inconsistency in the process platform attributes. Specifically we're missing `SupportedOSPlatformAttribute("maccatalyst")` on `Process.PrivilegedProcessorTime`
https://github.com/dotnet/runtime/pull/61507 added most of these @akoeplinger @simonrozsival -- perhaps you can take a look?
cc @buyaa-n
|
process
|
clarify desired platform attributes on process types in eerhardt raised an issue about an inconsistency in the process platform attributes specifically we re missing supportedosplatformattribute maccatalyst on process privilegedprocessortime added most of these akoeplinger simonrozsival perhaps you can take a look cc buyaa n
| 1
|
21,812
| 30,316,512,493
|
IssuesEvent
|
2023-07-10 15:55:42
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Add comments about dwciri equivalents
|
Term - change Docs - Quick Reference Guide Docs - List of Terms non-normative Process - complete
|
For each term that has a dwciri equivalent, include in the Comments for that term an explanation that the dwciri version exists what it is for. The comments should be consistent in wording. There should be one comment pattern for a dwciri term that is expected to have an IRI that points to a value in a controlled vocabularies, and another pattern for a dwciri term that is expected to have an IRI that points to other than a controlled vocabulary. Here are to draft options two elicit comments and suggestions.
"This term accepts a string as a value and has a counterpart with the same name in the dwciri namespace that is intended to be used only with a global unique identifier (specifically, an IRI) that points to a specific value in a controlled vocabulary. For this latter purpose, the dwciri version of the term is recommended to be populated."
"This term accepts a string as a value and has a counterpart with the same name in the dwciri namespace that is intended to be used only with a global unique identifier (specifically, an IRI). For this latter purpose, the dwciri version of the term is recommended to be populated."
|
1.0
|
Add comments about dwciri equivalents - For each term that has a dwciri equivalent, include in the Comments for that term an explanation that the dwciri version exists what it is for. The comments should be consistent in wording. There should be one comment pattern for a dwciri term that is expected to have an IRI that points to a value in a controlled vocabularies, and another pattern for a dwciri term that is expected to have an IRI that points to other than a controlled vocabulary. Here are to draft options two elicit comments and suggestions.
"This term accepts a string as a value and has a counterpart with the same name in the dwciri namespace that is intended to be used only with a global unique identifier (specifically, an IRI) that points to a specific value in a controlled vocabulary. For this latter purpose, the dwciri version of the term is recommended to be populated."
"This term accepts a string as a value and has a counterpart with the same name in the dwciri namespace that is intended to be used only with a global unique identifier (specifically, an IRI). For this latter purpose, the dwciri version of the term is recommended to be populated."
|
process
|
add comments about dwciri equivalents for each term that has a dwciri equivalent include in the comments for that term an explanation that the dwciri version exists what it is for the comments should be consistent in wording there should be one comment pattern for a dwciri term that is expected to have an iri that points to a value in a controlled vocabularies and another pattern for a dwciri term that is expected to have an iri that points to other than a controlled vocabulary here are to draft options two elicit comments and suggestions this term accepts a string as a value and has a counterpart with the same name in the dwciri namespace that is intended to be used only with a global unique identifier specifically an iri that points to a specific value in a controlled vocabulary for this latter purpose the dwciri version of the term is recommended to be populated this term accepts a string as a value and has a counterpart with the same name in the dwciri namespace that is intended to be used only with a global unique identifier specifically an iri for this latter purpose the dwciri version of the term is recommended to be populated
| 1
|
18,984
| 24,975,797,550
|
IssuesEvent
|
2022-11-02 07:39:59
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
closed
|
Set requestId on acceptance test calls
|
enhancement limechain P3 process
|
### Problem
Currenlty the aceptance tests can be run against remote envs, however, when doing so it's difficult to determine which calls it's making.
### Solution
The relay supports using the provided requestid on a call when provided.
Using this we should create a requestId, log it and use that value during the call.
This will allow easier tracing of acceptance test calls
### Alternatives
_No response_
|
1.0
|
Set requestId on acceptance test calls - ### Problem
Currenlty the aceptance tests can be run against remote envs, however, when doing so it's difficult to determine which calls it's making.
### Solution
The relay supports using the provided requestid on a call when provided.
Using this we should create a requestId, log it and use that value during the call.
This will allow easier tracing of acceptance test calls
### Alternatives
_No response_
|
process
|
set requestid on acceptance test calls problem currenlty the aceptance tests can be run against remote envs however when doing so it s difficult to determine which calls it s making solution the relay supports using the provided requestid on a call when provided using this we should create a requestid log it and use that value during the call this will allow easier tracing of acceptance test calls alternatives no response
| 1
|
10,054
| 13,044,161,691
|
IssuesEvent
|
2020-07-29 03:47:25
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddDateDatetimeInt` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddDateDatetimeInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `AddDateDatetimeInt` from TiDB -
## Description
Port the scalar function `AddDateDatetimeInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function adddatedatetimeint from tidb description port the scalar function adddatedatetimeint from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
4,155
| 7,103,714,756
|
IssuesEvent
|
2018-01-16 06:53:17
|
log2timeline/plaso
|
https://api.github.com/repos/log2timeline/plaso
|
opened
|
Support per source path spec storage of system configuration
|
enhancement preprocessing
|
* [ ] support per source path spec storage of system configuration
* [ ] support per source path spec storage of GuessOS
|
1.0
|
Support per source path spec storage of system configuration - * [ ] support per source path spec storage of system configuration
* [ ] support per source path spec storage of GuessOS
|
process
|
support per source path spec storage of system configuration support per source path spec storage of system configuration support per source path spec storage of guessos
| 1
|
15,448
| 19,662,827,367
|
IssuesEvent
|
2022-01-10 18:52:33
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
The project relies on colors.js with malicious code
|
type: bug process: dependencies
|
### Current behavior
- Bugs when run cypress cli
It's a known bug of [`colors`](https://github.com/Marak/colors.js/issues/290)


[bugs here](https://github.com/DouyinFE/semi-design/runs/4756809788?check_suite_focus=true)
- cypress dependency

### Desired behavior
everything is ok when run cypress
### Test code to reproduce
run it **_MAY BE DANGER_** on local machine: `npx cypress run`
### Cypress Version
9.2.0
### Other
_No response_
|
1.0
|
The project relies on colors.js with malicious code - ### Current behavior
- Bugs when run cypress cli
It's a known bug of [`colors`](https://github.com/Marak/colors.js/issues/290)


[bugs here](https://github.com/DouyinFE/semi-design/runs/4756809788?check_suite_focus=true)
- cypress dependency

### Desired behavior
everything is ok when run cypress
### Test code to reproduce
run it **_MAY BE DANGER_** on local machine: `npx cypress run`
### Cypress Version
9.2.0
### Other
_No response_
|
process
|
the project relies on colors js with malicious code current behavior bugs when run cypress cli it s a known bug of cypress dependency desired behavior everything is ok when run cypress test code to reproduce run it may be danger on local machine npx cypress run cypress version other no response
| 1
|
6,947
| 10,113,089,583
|
IssuesEvent
|
2019-07-30 15:56:31
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
Update Material.io API docs more frequently
|
Website type:Process
|
https://material.io/develop/ios/components/bottomnavigation/api-docs/Enums/MDCBottomNavigationBarAlignment.html is a 404, for example.
### Reproduction Steps
1. Navigate to https://material.io/develop/ios/components/bottomnavigation/
2. Click on the link, "Enumeration: MDCBottomNavigationBarAlignment"
<img width="542" alt="screen shot 2018-08-16 at 12 15 23 pm" src="https://user-images.githubusercontent.com/1753199/44220971-17959480-a14e-11e8-8a59-fc39bfc716e3.png">
### Expected Result
Links to a non-404 page
### Actual Result
Links to a 404 page. (https://material.io/develop/ios/components/bottomnavigation/api-docs/Enums/MDCBottomNavigationBarAlignment.html)
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117177380](http://b/117177380)
|
1.0
|
Update Material.io API docs more frequently - https://material.io/develop/ios/components/bottomnavigation/api-docs/Enums/MDCBottomNavigationBarAlignment.html is a 404, for example.
### Reproduction Steps
1. Navigate to https://material.io/develop/ios/components/bottomnavigation/
2. Click on the link, "Enumeration: MDCBottomNavigationBarAlignment"
<img width="542" alt="screen shot 2018-08-16 at 12 15 23 pm" src="https://user-images.githubusercontent.com/1753199/44220971-17959480-a14e-11e8-8a59-fc39bfc716e3.png">
### Expected Result
Links to a non-404 page
### Actual Result
Links to a 404 page. (https://material.io/develop/ios/components/bottomnavigation/api-docs/Enums/MDCBottomNavigationBarAlignment.html)
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117177380](http://b/117177380)
|
process
|
update material io api docs more frequently is a for example reproduction steps navigate to click on the link enumeration mdcbottomnavigationbaralignment img width alt screen shot at pm src expected result links to a non page actual result links to a page internal data associated internal bug
| 1
|
207,930
| 23,521,133,821
|
IssuesEvent
|
2022-08-19 06:03:47
|
Vonage/vonage-cli
|
https://api.github.com/repos/Vonage/vonage-cli
|
opened
|
cli-1.1.3.tgz: 1 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cli-1.1.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@oclif/color/node_modules/ansi-regex/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-cli/commit/92c72bfb87f884e0ce9078409e78e803f9610f84">92c72bfb87f884e0ce9078409e78e803f9610f84</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-3807](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ansi-regex-4.1.0.tgz | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3807</summary>
### Vulnerable Library - <b>ansi-regex-4.1.0.tgz</b></p>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@oclif/color/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- cli-1.1.3.tgz (Root Library)
- plugin-plugins-1.10.1.tgz
- color-0.1.2.tgz
- strip-ansi-5.2.0.tgz
- :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-cli/commit/92c72bfb87f884e0ce9078409e78e803f9610f84">92c72bfb87f884e0ce9078409e78e803f9610f84</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
<p></p>
</details>
|
True
|
cli-1.1.3.tgz: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cli-1.1.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@oclif/color/node_modules/ansi-regex/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-cli/commit/92c72bfb87f884e0ce9078409e78e803f9610f84">92c72bfb87f884e0ce9078409e78e803f9610f84</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-3807](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ansi-regex-4.1.0.tgz | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3807</summary>
### Vulnerable Library - <b>ansi-regex-4.1.0.tgz</b></p>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@oclif/color/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- cli-1.1.3.tgz (Root Library)
- plugin-plugins-1.10.1.tgz
- color-0.1.2.tgz
- strip-ansi-5.2.0.tgz
- :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-cli/commit/92c72bfb87f884e0ce9078409e78e803f9610f84">92c72bfb87f884e0ce9078409e78e803f9610f84</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
<p></p>
</details>
|
non_process
|
cli tgz vulnerabilities highest severity is vulnerable library cli tgz path to dependency file package json path to vulnerable library node modules oclif color node modules ansi regex package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high ansi regex tgz transitive n a details cve vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file package json path to vulnerable library node modules oclif color node modules ansi regex package json dependency hierarchy cli tgz root library plugin plugins tgz color tgz strip ansi tgz x ansi regex tgz vulnerable library found in head commit a href found in base branch main vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex
| 0
|
4,533
| 4,423,356,088
|
IssuesEvent
|
2016-08-16 08:16:19
|
polylang/polylang
|
https://api.github.com/repos/polylang/polylang
|
opened
|
Performance issue with lot of sticky posts
|
Performance
|
Polylang slows down the site if there are a lot of sticky posts (tests made with 2000 sticky posts).
|
True
|
Performance issue with lot of sticky posts - Polylang slows down the site if there are a lot of sticky posts (tests made with 2000 sticky posts).
|
non_process
|
performance issue with lot of sticky posts polylang slows down the site if there are a lot of sticky posts tests made with sticky posts
| 0
|
629,405
| 20,031,916,858
|
IssuesEvent
|
2022-02-02 07:29:29
|
bryntum/support
|
https://api.github.com/repos/bryntum/support
|
closed
|
RowCopyPaste broken
|
bug resolved high-priority
|
Reproducible on the basic demo:
Cut a row with CTRL+X, select another row, paste with CTRL+V.
It does not paste in the correct place most of the time...
|
1.0
|
RowCopyPaste broken - Reproducible on the basic demo:
Cut a row with CTRL+X, select another row, paste with CTRL+V.
It does not paste in the correct place most of the time...
|
non_process
|
rowcopypaste broken reproducible on the basic demo cut a row with ctrl x select another row paste with ctrl v it does not paste in the correct place most of the time
| 0
|
20,652
| 27,328,100,453
|
IssuesEvent
|
2023-02-25 09:05:05
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Potential security vulnerabilities in object_store dependencies
|
development-process
|
Tokio and hyper (dependencies of object_store) have potential security vulnerabilities.
https://deps.rs/crate/object_store/0.5.4
|
1.0
|
Potential security vulnerabilities in object_store dependencies - Tokio and hyper (dependencies of object_store) have potential security vulnerabilities.
https://deps.rs/crate/object_store/0.5.4
|
process
|
potential security vulnerabilities in object store dependencies tokio and hyper dependencies of object store have potential security vulnerabilities
| 1
|
9,051
| 12,130,108,069
|
IssuesEvent
|
2020-04-23 00:30:41
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from appengine/standard/urlfetch/requests/requirements-test.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from appengine/standard/urlfetch/requests/requirements-test.txt
|
1.0
|
remove gcp-devrel-py-tools from appengine/standard/urlfetch/requests/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/urlfetch/requests/requirements-test.txt
|
process
|
remove gcp devrel py tools from appengine standard urlfetch requests requirements test txt remove gcp devrel py tools from appengine standard urlfetch requests requirements test txt
| 1
|
22,630
| 7,195,417,884
|
IssuesEvent
|
2018-02-04 16:57:43
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
closed
|
Github->Jenkins PR Trigger Comments need to support jdk8
|
comp:build jdk8
|
related #660
We currently support the following PR builds via Github PR comments.
- OpenJ9-JDK9 zLinux (linux_390-64_cmprssptrs)
- compile only
- sanity
- extended
- OpenJ9-JDK9 pLinux (linux_ppc-64_cmprssptrs_le)
- compile only
- sanity
- extended
You can trigger these builds via PR commit comment
`Jenkins compile`
`Jenkins compile zLinux`
`Jenkins compile pLinux`
`Jenkins test sanity`
`Jenkins test sanity zlinux`
`Jenkins test sanity plinux`
`Jenkins test extended`
`Jenkins test extended zlinux`
`Jenkins test extended plinux`
Here is the regex from the zLinux Sanity build:
`.*jenkins test sanity(.*zlinux.*)?(?! xlinux)(?! plinux).*`
Each job has a slightly different regex which will allow that job to run if the comment matches the regex
We need to update the regex's to support more JDK versions (i.e. 8,10 for the immediate future).
The current regex 'model' is harder to maintain because each job has to maintain the list of platforms it is not interested in. The alternate would be to have a regex that looks for `all` or `<my_platform>`. This same problem will apply to JDK versions once we add 8,10.
#### Final regex example
`.*\bjenkins\s+test\s+sanity\b\s*($|\n|depends\s+.*|(all|([a-z]+,)*zlinux(,[a-z]+)*)\s*($|\n|depends\s+.*|all|(jdk[0-9]+,)*jdk8(,jdk[0-9]+)*)(\s+depends.*)?)`
|
1.0
|
Github->Jenkins PR Trigger Comments need to support jdk8 - related #660
We currently support the following PR builds via Github PR comments.
- OpenJ9-JDK9 zLinux (linux_390-64_cmprssptrs)
- compile only
- sanity
- extended
- OpenJ9-JDK9 pLinux (linux_ppc-64_cmprssptrs_le)
- compile only
- sanity
- extended
You can trigger these builds via PR commit comment
`Jenkins compile`
`Jenkins compile zLinux`
`Jenkins compile pLinux`
`Jenkins test sanity`
`Jenkins test sanity zlinux`
`Jenkins test sanity plinux`
`Jenkins test extended`
`Jenkins test extended zlinux`
`Jenkins test extended plinux`
Here is the regex from the zLinux Sanity build:
`.*jenkins test sanity(.*zlinux.*)?(?! xlinux)(?! plinux).*`
Each job has a slightly different regex which will allow that job to run if the comment matches the regex
We need to update the regex's to support more JDK versions (i.e. 8,10 for the immediate future).
The current regex 'model' is harder to maintain because each job has to maintain the list of platforms it is not interested in. The alternate would be to have a regex that looks for `all` or `<my_platform>`. This same problem will apply to JDK versions once we add 8,10.
#### Final regex example
`.*\bjenkins\s+test\s+sanity\b\s*($|\n|depends\s+.*|(all|([a-z]+,)*zlinux(,[a-z]+)*)\s*($|\n|depends\s+.*|all|(jdk[0-9]+,)*jdk8(,jdk[0-9]+)*)(\s+depends.*)?)`
|
non_process
|
github jenkins pr trigger comments need to support related we currently support the following pr builds via github pr comments zlinux linux cmprssptrs compile only sanity extended plinux linux ppc cmprssptrs le compile only sanity extended you can trigger these builds via pr commit comment jenkins compile jenkins compile zlinux jenkins compile plinux jenkins test sanity jenkins test sanity zlinux jenkins test sanity plinux jenkins test extended jenkins test extended zlinux jenkins test extended plinux here is the regex from the zlinux sanity build jenkins test sanity zlinux xlinux plinux each job has a slightly different regex which will allow that job to run if the comment matches the regex we need to update the regex s to support more jdk versions i e for the immediate future the current regex model is harder to maintain because each job has to maintain the list of platforms it is not interested in the alternate would be to have a regex that looks for all or this same problem will apply to jdk versions once we add final regex example bjenkins s test s sanity b s n depends s all zlinux s n depends s all jdk jdk s depends
| 0
|
8,845
| 11,949,445,901
|
IssuesEvent
|
2020-04-03 13:40:08
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
[BUG] Artificial Fish Water requires Advanced Chem Plant
|
Angels Bio Processing Wont Fix
|
Artificial fish water has 2 fluids in (water + saline water), and 1 fluid out (fish water). Recipe requires an Advanced Chem plant, but given inputs/outputs should only require a standard chem plant.
|
1.0
|
[BUG] Artificial Fish Water requires Advanced Chem Plant - Artificial fish water has 2 fluids in (water + saline water), and 1 fluid out (fish water). Recipe requires an Advanced Chem plant, but given inputs/outputs should only require a standard chem plant.
|
process
|
artificial fish water requires advanced chem plant artificial fish water has fluids in water saline water and fluid out fish water recipe requires an advanced chem plant but given inputs outputs should only require a standard chem plant
| 1
|
20,071
| 26,563,787,558
|
IssuesEvent
|
2023-01-20 18:07:35
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
closed
|
Filtrar tipos de arquivos em coletas de páginas dinâmicas.
|
[2] Baixa Prioridade [1] Requisito [0] Desenvolvimento [3] Processamento Dinâmico
|
## Comportamento esperado
Desejamos realizar coletas de apenas arquivos que atendem os formatos especificados na configuração do coletor, igual para o caso de coletas estáticas.
## Comportamento atual
O recurso não é suportado em coletas de arquivos dinâmicos.
## Passos para reproduzir o erro
- Crie um coletor dinâmico que deve restringir o tipo dos arquivos baixados e o execute.
- Note que arquivos de qualquer tipo são baixados pelo coletor, ignorando complemente o filtro de tipos de arquivos de sua configuração.
## Sistema
Todos.
|
1.0
|
Filtrar tipos de arquivos em coletas de páginas dinâmicas. - ## Comportamento esperado
Desejamos realizar coletas de apenas arquivos que atendem os formatos especificados na configuração do coletor, igual para o caso de coletas estáticas.
## Comportamento atual
O recurso não é suportado em coletas de arquivos dinâmicos.
## Passos para reproduzir o erro
- Crie um coletor dinâmico que deve restringir o tipo dos arquivos baixados e o execute.
- Note que arquivos de qualquer tipo são baixados pelo coletor, ignorando complemente o filtro de tipos de arquivos de sua configuração.
## Sistema
Todos.
|
process
|
filtrar tipos de arquivos em coletas de páginas dinâmicas comportamento esperado desejamos realizar coletas de apenas arquivos que atendem os formatos especificados na configuração do coletor igual para o caso de coletas estáticas comportamento atual o recurso não é suportado em coletas de arquivos dinâmicos passos para reproduzir o erro crie um coletor dinâmico que deve restringir o tipo dos arquivos baixados e o execute note que arquivos de qualquer tipo são baixados pelo coletor ignorando complemente o filtro de tipos de arquivos de sua configuração sistema todos
| 1
|
21,228
| 28,320,234,339
|
IssuesEvent
|
2023-04-11 00:02:35
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Error 1053: The service did not respond to the start or control request in a timely fashion.
|
area-System.ServiceProcess no-recent-activity needs-author-action
|
_This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/t/-Error-1053:-The-service-did-not-respond/10039971)._
---
[severity:It's more difficult to complete my work]
We have created a windows service using .Net C# language.
We are able to install service on the UAT server but could not start the service.
Service created by referring below link
https://www.c-sharpcorner.com/article/create-windows-services-in-c-sharp/
Getting following errors

Below is the error from EventLog

Please suggest solution.
---
### Original Comments
#### Feedback Bot on 5/12/2022, 04:01 AM:
(private comment, text removed)
#### Feedback Bot on 8/31/2022, 08:39 PM:
(private comment, text removed)
---
### Original Solutions
(no solutions)
|
1.0
|
Error 1053: The service did not respond to the start or control request in a timely fashion. - _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/t/-Error-1053:-The-service-did-not-respond/10039971)._
---
[severity:It's more difficult to complete my work]
We have created a windows service using .Net C# language.
We are able to install service on the UAT server but could not start the service.
Service created by referring below link
https://www.c-sharpcorner.com/article/create-windows-services-in-c-sharp/
Getting following errors

Below is the error from EventLog

Please suggest solution.
---
### Original Comments
#### Feedback Bot on 5/12/2022, 04:01 AM:
(private comment, text removed)
#### Feedback Bot on 8/31/2022, 08:39 PM:
(private comment, text removed)
---
### Original Solutions
(no solutions)
|
process
|
error the service did not respond to the start or control request in a timely fashion this issue has been moved from we have created a windows service using net c language we are able to install service on the uat server but could not start the service service created by referring below link getting following errors below is the error from eventlog please suggest solution original comments feedback bot on am private comment text removed feedback bot on pm private comment text removed original solutions no solutions
| 1
|
5,157
| 7,933,327,534
|
IssuesEvent
|
2018-07-08 04:09:38
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Data queries by hash may return incorrect 'empty' block
|
status-inprocess tools-all type-enhancement
|
If a block/trace/transaction/receipt/log/bloom is asked for by block number, the tool correctly reports that the block does not exist and fails to report the data item. If, instead, we ask for the item using a block/transaction hash, the system doesn't know how to distinguish between a hash that doesn't yet exist or never existed. In the case of an ask by hash, check that the resulting item has the same hash. This will indicate a bad return and not report on empty data as it now does. Test this by forcing latest block to 3,000,000 or forcing getBlockBinaryFile to return empty over 3,000,000 block. This happens mostly when user has not completed sync, but can happen also when the hash is incorrect.
|
1.0
|
Data queries by hash may return incorrect 'empty' block - If a block/trace/transaction/receipt/log/bloom is asked for by block number, the tool correctly reports that the block does not exist and fails to report the data item. If, instead, we ask for the item using a block/transaction hash, the system doesn't know how to distinguish between a hash that doesn't yet exist or never existed. In the case of an ask by hash, check that the resulting item has the same hash. This will indicate a bad return and not report on empty data as it now does. Test this by forcing latest block to 3,000,000 or forcing getBlockBinaryFile to return empty over 3,000,000 block. This happens mostly when user has not completed sync, but can happen also when the hash is incorrect.
|
process
|
data queries by hash may return incorrect empty block if a block trace transaction receipt log bloom is asked for by block number the tool correctly reports that the block does not exist and fails to report the data item if instead we ask for the item using a block transaction hash the system doesn t know how to distinguish between a hash that doesn t yet exist or never existed in the case of an ask by hash check that the resulting item has the same hash this will indicate a bad return and not report on empty data as it now does test this by forcing latest block to or forcing getblockbinaryfile to return empty over block this happens mostly when user has not completed sync but can happen also when the hash is incorrect
| 1
|
287,794
| 24,861,900,970
|
IssuesEvent
|
2022-10-27 08:55:44
|
NexusMutual/smart-contracts
|
https://api.github.com/repos/NexusMutual/smart-contracts
|
closed
|
Assessment unit test: startAssessment & castVotes
|
test bootnode
|
**startAssessment**
- [ ] Should allow only internal contracts to call it
- [x] Should store the total rewards and assessment deposit
- [x] Should set the correct end date for the poll
- [x] Should return the assessment id
**castVotes**
- [ ] Should revert if system is paused
- [ ] Should revert if caller is not a member
- [ ] Should revert if array length of assessments id and votes does not match
- [x] Should revert if member already voted
- [x] Should revert if member has no stake
- [x] Should revert if voting is closed
- [x] Should revert if there is no accepting vote when voting deny
- [ ] Should allow to cast votes on multiple assessments
- [x] Should allow to increase stake and vote
- [ ] Should allow to stake for first time and vote
- [x] Should work correctly if stakeIncrease is 0
- [x] Should reset the poll end date on the first accept vote
- [x] Should correctly extend the end date proportionally to the user stake if polls ends in less than 24 hours
- [ ] Should correctly extend the end date up to 1 day maximum if polls ends in less than 24 hours
- [x] Should correctly update the poll accepted and denied stake amount
- [x] Should store the caller vote
- [ ] Should emit an event on each vote
|
1.0
|
Assessment unit test: startAssessment & castVotes - **startAssessment**
- [ ] Should allow only internal contracts to call it
- [x] Should store the total rewards and assessment deposit
- [x] Should set the correct end date for the poll
- [x] Should return the assessment id
**castVotes**
- [ ] Should revert if system is paused
- [ ] Should revert if caller is not a member
- [ ] Should revert if array length of assessments id and votes does not match
- [x] Should revert if member already voted
- [x] Should revert if member has no stake
- [x] Should revert if voting is closed
- [x] Should revert if there is no accepting vote when voting deny
- [ ] Should allow to cast votes on multiple assessments
- [x] Should allow to increase stake and vote
- [ ] Should allow to stake for first time and vote
- [x] Should work correctly if stakeIncrease is 0
- [x] Should reset the poll end date on the first accept vote
- [x] Should correctly extend the end date proportionally to the user stake if polls ends in less than 24 hours
- [ ] Should correctly extend the end date up to 1 day maximum if polls ends in less than 24 hours
- [x] Should correctly update the poll accepted and denied stake amount
- [x] Should store the caller vote
- [ ] Should emit an event on each vote
|
non_process
|
assessment unit test startassessment castvotes startassessment should allow only internal contracts to call it should store the total rewards and assessment deposit should set the correct end date for the poll should return the assessment id castvotes should revert if system is paused should revert if caller is not a member should revert if array length of assessments id and votes does not match should revert if member already voted should revert if member has no stake should revert if voting is closed should revert if there is no accepting vote when voting deny should allow to cast votes on multiple assessments should allow to increase stake and vote should allow to stake for first time and vote should work correctly if stakeincrease is should reset the poll end date on the first accept vote should correctly extend the end date proportionally to the user stake if polls ends in less than hours should correctly extend the end date up to day maximum if polls ends in less than hours should correctly update the poll accepted and denied stake amount should store the caller vote should emit an event on each vote
| 0
|
525,265
| 15,242,161,816
|
IssuesEvent
|
2021-02-19 09:29:30
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
docs.python.org - design is broken
|
browser-fenix engine-gecko ml-needsdiagnosis-false priority-normal
|
<!-- @browser: Firefox Mobile 86.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:86.0) Gecko/86.0 Firefox/86.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/67299 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://docs.python.org/3.10/whatsnew/3.10.html
**Browser / Version**: Firefox Mobile 86.0
**Operating System**: Android 9
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
Website is cropped to the right
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/2/e3d7f902-97ad-4fea-9005-ddfb1dcaf90d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210212222924</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/2/0e00a654-bda3-41b4-b97e-0f6a14b6505b)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
docs.python.org - design is broken - <!-- @browser: Firefox Mobile 86.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:86.0) Gecko/86.0 Firefox/86.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/67299 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://docs.python.org/3.10/whatsnew/3.10.html
**Browser / Version**: Firefox Mobile 86.0
**Operating System**: Android 9
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
Website is cropped to the right
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/2/e3d7f902-97ad-4fea-9005-ddfb1dcaf90d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210212222924</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/2/0e00a654-bda3-41b4-b97e-0f6a14b6505b)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
docs python org design is broken url browser version firefox mobile operating system android tested another browser no problem type design is broken description items not fully visible steps to reproduce website is cropped to the right view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
7,171
| 10,313,888,381
|
IssuesEvent
|
2019-08-30 00:56:35
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Commands in the tutorial not recognized
|
Pri2 automation/svc cxp process-automation/subsvc product-question triaged
|
Hi, I'm getting the following error:
Disable-AzureRmContextAutosave : The term 'Disable-AzureRmContextAutosave' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the
path is correct and try again.
At line:2 char:1
+ Disable-AzureRmContextAutosave –Scope Process
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Disable-AzureRmContextAutosave:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Connect-AzureRmAccount : The term 'Connect-AzureRmAccount' is not recognized as the name of a cmdlet, function, script
file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct
and try again.
At line:5 char:1
+ Connect-AzureRmAccount -ServicePrincipal -Tenant $connection.TenantID ...
+ ~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Connect-AzureRmAccount:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 038d927f-2bcc-c62d-b3c3-f194513bced6
* Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1
* Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell)
* Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
1.0
|
Commands in the tutorial not recognized -
Hi, I'm getting the following error:
Disable-AzureRmContextAutosave : The term 'Disable-AzureRmContextAutosave' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the
path is correct and try again.
At line:2 char:1
+ Disable-AzureRmContextAutosave –Scope Process
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Disable-AzureRmContextAutosave:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Connect-AzureRmAccount : The term 'Connect-AzureRmAccount' is not recognized as the name of a cmdlet, function, script
file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct
and try again.
At line:5 char:1
+ Connect-AzureRmAccount -ServicePrincipal -Tenant $connection.TenantID ...
+ ~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Connect-AzureRmAccount:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 038d927f-2bcc-c62d-b3c3-f194513bced6
* Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1
* Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell)
* Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
process
|
commands in the tutorial not recognized hi i m getting the following error disable azurermcontextautosave the term disable azurermcontextautosave is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again at line char disable azurermcontextautosave –scope process categoryinfo objectnotfound disable azurermcontextautosave string commandnotfoundexception fullyqualifiederrorid commandnotfoundexception connect azurermaccount the term connect azurermaccount is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again at line char connect azurermaccount serviceprincipal tenant connection tenantid categoryinfo objectnotfound connect azurermaccount string commandnotfoundexception fullyqualifiederrorid commandnotfoundexception document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
| 1
|
161,729
| 20,155,315,286
|
IssuesEvent
|
2022-02-09 15:57:41
|
kapseliboi/Node-Data
|
https://api.github.com/repos/kapseliboi/Node-Data
|
opened
|
CVE-2017-1000228 (High) detected in ejs-2.3.3.tgz
|
security vulnerability
|
## CVE-2017-1000228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ejs-2.3.3.tgz</b></p></summary>
<p>Embedded JavaScript templates</p>
<p>Library home page: <a href="https://registry.npmjs.org/ejs/-/ejs-2.3.3.tgz">https://registry.npmjs.org/ejs/-/ejs-2.3.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ejs/package.json</p>
<p>
Dependency Hierarchy:
- :x: **ejs-2.3.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Node-Data/commit/289c77565fc637d4c0e4bf4a9a1e81df96cd190a">289c77565fc637d4c0e4bf4a9a1e81df96cd190a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nodejs ejs versions older than 2.5.3 is vulnerable to remote code execution due to weak input validation in ejs.renderFile() function
<p>Publish Date: 2017-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-1000228>CVE-2017-1000228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-1000228">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-1000228</a></p>
<p>Release Date: 2017-11-17</p>
<p>Fix Resolution: 2.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-1000228 (High) detected in ejs-2.3.3.tgz - ## CVE-2017-1000228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ejs-2.3.3.tgz</b></p></summary>
<p>Embedded JavaScript templates</p>
<p>Library home page: <a href="https://registry.npmjs.org/ejs/-/ejs-2.3.3.tgz">https://registry.npmjs.org/ejs/-/ejs-2.3.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ejs/package.json</p>
<p>
Dependency Hierarchy:
- :x: **ejs-2.3.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Node-Data/commit/289c77565fc637d4c0e4bf4a9a1e81df96cd190a">289c77565fc637d4c0e4bf4a9a1e81df96cd190a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nodejs ejs versions older than 2.5.3 is vulnerable to remote code execution due to weak input validation in ejs.renderFile() function
<p>Publish Date: 2017-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-1000228>CVE-2017-1000228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-1000228">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-1000228</a></p>
<p>Release Date: 2017-11-17</p>
<p>Fix Resolution: 2.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in ejs tgz cve high severity vulnerability vulnerable library ejs tgz embedded javascript templates library home page a href path to dependency file package json path to vulnerable library node modules ejs package json dependency hierarchy x ejs tgz vulnerable library found in head commit a href found in base branch master vulnerability details nodejs ejs versions older than is vulnerable to remote code execution due to weak input validation in ejs renderfile function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
7,016
| 10,166,810,409
|
IssuesEvent
|
2019-08-07 16:39:56
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Bigtable: 'test_bigtable_create_table' snippet flakes with '504 Deadline Exceeded'.
|
api: bigtable flaky testing type: process
|
From [this Kokoro failure](https://source.cloud.google.com/results/invocations/2322354e-a3c8-4e30-8cac-2dbf3a814cad/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fbigtable/log):
```python
__________________________ test_bigtable_create_table __________________________
args = (parent: "projects/precise-truck-742/instances/snippet-tests-1561572579521"
table_id: "table_my"
table {
column_families {
key: "cf1"
value {
gc_rule {
max_num_versions: 2
}
}
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/snippet-tests-1561572579521'), ('x-goog-api-client', 'gl-python/3.7.0b3 grpc/1.21.1 gax/1.13.0 gapic/0.33.0 gccl/0.33.0')], 'timeout': 20.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:57:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f9df35a5828>
request = parent: "projects/precise-truck-742/instances/snippet-tests-1561572579521"
table_id: "table_my"
table {
column_families {
key: "cf1"
value {
gc_rule {
max_num_versions: 2
}
}
}
}
timeout = 20.0
metadata = [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/snippet-tests-1561572579521'), ('x-goog-api-client', 'gl-python/3.7.0b3 grpc/1.21.1 gax/1.13.0 gapic/0.33.0 gccl/0.33.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/snippets-3-7/lib/python3.7/site-packages/grpc/_channel.py:565:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f9df35c54e0>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f9df35b1e88>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _Rendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _Rendezvous(state, None, None, deadline)
E grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
E status = StatusCode.DEADLINE_EXCEEDED
E details = "Deadline Exceeded"
E debug_error_string = "{"created":"@1561572698.043685871","description":"Error received from peer ipv4:74.125.142.95:443","file":"src/core/lib/surface/call.cc","file_line":1046,"grpc_message":"Deadline Exceeded","grpc_status":4}"
E >
.nox/snippets-3-7/lib/python3.7/site-packages/grpc/_channel.py:467: _Rendezvous
The above exception was the direct cause of the following exception:
def test_bigtable_create_table():
# [START bigtable_create_table]
from google.cloud.bigtable import Client
from google.cloud.bigtable import column_family
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table("table_my")
# Define the GC policy to retain only the most recent 2 versions.
max_versions_rule = column_family.MaxVersionsGCRule(2)
> table.create(column_families={"cf1": max_versions_rule})
docs/snippets.py:341:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigtable/table.py:251: in create
initial_splits=splits,
google/cloud/bigtable_admin_v2/gapic/bigtable_table_admin_client.py:340: in create_table
request, retry=retry, timeout=timeout, metadata=metadata
../api_core/google/api_core/gapic_v1/method.py:143: in __call__
return wrapped_func(*args, **kwargs)
../api_core/google/api_core/retry.py:273: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:182: in retry_target
return target()
../api_core/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug...2.95:443","file":"src/core/lib/surface/call.cc","file_line":1046,"grpc_message":"Deadline Exceeded","grpc_status":4}"
>
> ???
E google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded
``
|
1.0
|
Bigtable: 'test_bigtable_create_table' snippet flakes with '504 Deadline Exceeded'. - From [this Kokoro failure](https://source.cloud.google.com/results/invocations/2322354e-a3c8-4e30-8cac-2dbf3a814cad/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fbigtable/log):
```python
__________________________ test_bigtable_create_table __________________________
args = (parent: "projects/precise-truck-742/instances/snippet-tests-1561572579521"
table_id: "table_my"
table {
column_families {
key: "cf1"
value {
gc_rule {
max_num_versions: 2
}
}
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/snippet-tests-1561572579521'), ('x-goog-api-client', 'gl-python/3.7.0b3 grpc/1.21.1 gax/1.13.0 gapic/0.33.0 gccl/0.33.0')], 'timeout': 20.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:57:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f9df35a5828>
request = parent: "projects/precise-truck-742/instances/snippet-tests-1561572579521"
table_id: "table_my"
table {
column_families {
key: "cf1"
value {
gc_rule {
max_num_versions: 2
}
}
}
}
timeout = 20.0
metadata = [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/snippet-tests-1561572579521'), ('x-goog-api-client', 'gl-python/3.7.0b3 grpc/1.21.1 gax/1.13.0 gapic/0.33.0 gccl/0.33.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/snippets-3-7/lib/python3.7/site-packages/grpc/_channel.py:565:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f9df35c54e0>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f9df35b1e88>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _Rendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _Rendezvous(state, None, None, deadline)
E grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
E status = StatusCode.DEADLINE_EXCEEDED
E details = "Deadline Exceeded"
E debug_error_string = "{"created":"@1561572698.043685871","description":"Error received from peer ipv4:74.125.142.95:443","file":"src/core/lib/surface/call.cc","file_line":1046,"grpc_message":"Deadline Exceeded","grpc_status":4}"
E >
.nox/snippets-3-7/lib/python3.7/site-packages/grpc/_channel.py:467: _Rendezvous
The above exception was the direct cause of the following exception:
def test_bigtable_create_table():
# [START bigtable_create_table]
from google.cloud.bigtable import Client
from google.cloud.bigtable import column_family
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table("table_my")
# Define the GC policy to retain only the most recent 2 versions.
max_versions_rule = column_family.MaxVersionsGCRule(2)
> table.create(column_families={"cf1": max_versions_rule})
docs/snippets.py:341:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigtable/table.py:251: in create
initial_splits=splits,
google/cloud/bigtable_admin_v2/gapic/bigtable_table_admin_client.py:340: in create_table
request, retry=retry, timeout=timeout, metadata=metadata
../api_core/google/api_core/gapic_v1/method.py:143: in __call__
return wrapped_func(*args, **kwargs)
../api_core/google/api_core/retry.py:273: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:182: in retry_target
return target()
../api_core/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug...2.95:443","file":"src/core/lib/surface/call.cc","file_line":1046,"grpc_message":"Deadline Exceeded","grpc_status":4}"
>
> ???
E google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded
``
|
process
|
bigtable test bigtable create table snippet flakes with deadline exceeded from python test bigtable create table args parent projects precise truck instances snippet tests table id table my table column families key value gc rule max num versions kwargs metadata timeout six wraps callable def error remapped callable args kwargs try return callable args kwargs api core google api core grpc helpers py self request parent projects precise truck instances snippet tests table id table my table column families key value gc rule max num versions timeout metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox snippets lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous rendezvous state call none deadline return state response rendezvous else return state response else raise rendezvous state none none deadline e grpc channel rendezvous rendezvous of rpc that terminated with e status statuscode deadline exceeded e details deadline exceeded e debug error string created description error received from peer file src core lib surface call cc file line grpc message deadline exceeded grpc status e nox snippets lib site packages grpc channel py rendezvous the above exception was the direct cause of the following exception def test bigtable create table from google cloud bigtable import client from google cloud bigtable import column family client client admin true instance client instance instance id table instance table table my define the gc policy to retain only the most recent versions max versions rule column family maxversionsgcrule table create column families max versions rule docs snippets py google cloud bigtable table py in create initial splits splits google cloud bigtable admin gapic bigtable table admin client py in create table request retry retry timeout timeout metadata metadata api core google api core gapic method py in call return wrapped func args kwargs api core google api core retry py in retry wrapped func on error on error api core google api core retry py in retry target return target api core google api core timeout py in func with timeout return func args kwargs api core google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value none from value rendezvous of rpc that terminated with status statuscode deadline exceeded details deadline exceeded debug file src core lib surface call cc file line grpc message deadline exceeded grpc status e google api core exceptions deadlineexceeded deadline exceeded
| 1
|
14,110
| 17,011,382,689
|
IssuesEvent
|
2021-07-02 05:26:52
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion: GO:0036523 induction by symbiont of host cytokine production
|
multi-species process
|
This has no (0) annotations and is not a well thought-out term. What kind of cytokines? Proinflammatory or anti-inflammatory? Is the symbiont manipulating the host immune signaling pathways to "induce" this cytokine production? Or is the host reacting via pattern-recognition receptors (PRRs/PAMPs/MAMPs) to the presence of the parasite? If so, the symbiont is doing nothing and the host is doing its "job" so this doesn't involve the symbiont/parasite at all.
|
1.0
|
Obsoletion: GO:0036523 induction by symbiont of host cytokine production - This has no (0) annotations and is not a well thought-out term. What kind of cytokines? Proinflammatory or anti-inflammatory? Is the symbiont manipulating the host immune signaling pathways to "induce" this cytokine production? Or is the host reacting via pattern-recognition receptors (PRRs/PAMPs/MAMPs) to the presence of the parasite? If so, the symbiont is doing nothing and the host is doing its "job" so this doesn't involve the symbiont/parasite at all.
|
process
|
obsoletion go induction by symbiont of host cytokine production this has no annotations and is not a well thought out term what kind of cytokines proinflammatory or anti inflammatory is the symbiont manipulating the host immune signaling pathways to induce this cytokine production or is the host reacting via pattern recognition receptors prrs pamps mamps to the presence of the parasite if so the symbiont is doing nothing and the host is doing its job so this doesn t involve the symbiont parasite at all
| 1
|
17,667
| 23,491,318,895
|
IssuesEvent
|
2022-08-17 19:02:42
|
openxla/stablehlo
|
https://api.github.com/repos/openxla/stablehlo
|
opened
|
Set up regular integrates from StableHLO into MLIR-HLO
|
Process
|
The idea is to vendor StableHLO into the MLIR-HLO repository, so that existing MLIR-HLO users can experiment with StableHLO in a low-friction manner. StableHLO is GitHub-first, and MLIR-HLO is Google3-first, and that's the gap that we'll be bridging during this integrates.
|
1.0
|
Set up regular integrates from StableHLO into MLIR-HLO - The idea is to vendor StableHLO into the MLIR-HLO repository, so that existing MLIR-HLO users can experiment with StableHLO in a low-friction manner. StableHLO is GitHub-first, and MLIR-HLO is Google3-first, and that's the gap that we'll be bridging during this integrates.
|
process
|
set up regular integrates from stablehlo into mlir hlo the idea is to vendor stablehlo into the mlir hlo repository so that existing mlir hlo users can experiment with stablehlo in a low friction manner stablehlo is github first and mlir hlo is first and that s the gap that we ll be bridging during this integrates
| 1
|
21,048
| 27,993,010,043
|
IssuesEvent
|
2023-03-27 06:13:23
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
regressions on the testsuite
|
question priority: high scope: image processing
|
After merging some changes today the testsuite has many failures.
The culprit is.
```
commit 6c4020b9b5be44d91e03ec812f2fadfe7a26bb26
Author: hanno@schwalm-bremen.de <hanno@schwalm-bremen.de>
Date: Sat Mar 25 17:31:13 2023 +0100
Avoid possible Nans with ordinary clip highlights
data/kernels/basic.cl | 3 ++-
src/iop/highlights.c | 29 ++++++-----------------------
2 files changed, 8 insertions(+), 24 deletions(-)
```
Many tests are failing, all 013* for example and:
- 0080-toneequal-eigf
- 0081-mask-groups
- 0084-cacorrect
- 0087-blendif-and-or
- 0088-blendif-diff-excl
The affected areas are mostly (always?) blacks.
The diff for 0130-toneeq-HSL-lightness is:

|
1.0
|
regressions on the testsuite - After merging some changes today the testsuite has many failures.
The culprit is.
```
commit 6c4020b9b5be44d91e03ec812f2fadfe7a26bb26
Author: hanno@schwalm-bremen.de <hanno@schwalm-bremen.de>
Date: Sat Mar 25 17:31:13 2023 +0100
Avoid possible Nans with ordinary clip highlights
data/kernels/basic.cl | 3 ++-
src/iop/highlights.c | 29 ++++++-----------------------
2 files changed, 8 insertions(+), 24 deletions(-)
```
Many tests are failing, all 013* for example and:
- 0080-toneequal-eigf
- 0081-mask-groups
- 0084-cacorrect
- 0087-blendif-and-or
- 0088-blendif-diff-excl
The affected areas are mostly (always?) blacks.
The diff for 0130-toneeq-HSL-lightness is:

|
process
|
regressions on the testsuite after merging some changes today the testsuite has many failures the culprit is commit author hanno schwalm bremen de date sat mar avoid possible nans with ordinary clip highlights data kernels basic cl src iop highlights c files changed insertions deletions many tests are failing all for example and toneequal eigf mask groups cacorrect blendif and or blendif diff excl the affected areas are mostly always blacks the diff for toneeq hsl lightness is
| 1
|
679,788
| 23,245,478,575
|
IssuesEvent
|
2022-08-03 19:41:34
|
wp-media/wp-rocket
|
https://api.github.com/repos/wp-media/wp-rocket
|
closed
|
PHP deprecated notice when passing null to json_decode() for 3.9 and 3.10 versions
|
type: enhancement priority: low effort: [XS] module: remove unused css
|
**Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version - this is affecting versions `3.9.5.x` and `3.10.10.x`
- Used the search feature to ensure that the bug hasn’t been reported before ✅
**Describe the bug**
When using either `3.9.5.1` or `3.10.10.1` with PHP `8.1.x` the following notice is displayed in the `debug.log`:
```PHP
PHP Deprecated: json_decode(): Passing null to parameter #1 ($json) of type string is deprecated in /var/www/example.com/htdocs/wp-content/plugins/wp-rocket/inc/Engine/Optimization/RUCSS/Database/Row/UsedCSS.php on line 23
```
https://github.com/wp-media/wp-rocket/blob/8ef7ca349532f454dae3c63000db56b898f042af/inc/Engine/Optimization/RUCSS/Database/Row/UsedCSS.php#L23
https://github.com/wp-media/wp-rocket/blob/229f43c7bfe6218356277925822ed17d475ac068/inc/Engine/Optimization/RUCSS/Database/Row/UsedCSS.php#L23
**To Reproduce**
Steps to reproduce the behavior:
1. Enable WordPress's debugging.
2. Install either WP Rocket `3.9.5.1` or `3.10.10.1`.
3. Enable Remove Unused CSS.
4. Check the `debug.log`.
**Expected behavior**
No notices should be displayed with these versions.
**Additional context**
I'm guessing these wouldn't be fixed in other cases, but since `3.9.5.1` and `3.10.10.1` will be rolled out soon maybe we can work on this. Confirmed this with @piotrbak.
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
1.0
|
PHP deprecated notice when passing null to json_decode() for 3.9 and 3.10 versions - **Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version - this is affecting versions `3.9.5.x` and `3.10.10.x`
- Used the search feature to ensure that the bug hasn’t been reported before ✅
**Describe the bug**
When using either `3.9.5.1` or `3.10.10.1` with PHP `8.1.x` the following notice is displayed in the `debug.log`:
```PHP
PHP Deprecated: json_decode(): Passing null to parameter #1 ($json) of type string is deprecated in /var/www/example.com/htdocs/wp-content/plugins/wp-rocket/inc/Engine/Optimization/RUCSS/Database/Row/UsedCSS.php on line 23
```
https://github.com/wp-media/wp-rocket/blob/8ef7ca349532f454dae3c63000db56b898f042af/inc/Engine/Optimization/RUCSS/Database/Row/UsedCSS.php#L23
https://github.com/wp-media/wp-rocket/blob/229f43c7bfe6218356277925822ed17d475ac068/inc/Engine/Optimization/RUCSS/Database/Row/UsedCSS.php#L23
**To Reproduce**
Steps to reproduce the behavior:
1. Enable WordPress's debugging.
2. Install either WP Rocket `3.9.5.1` or `3.10.10.1`.
3. Enable Remove Unused CSS.
4. Check the `debug.log`.
**Expected behavior**
No notices should be displayed with these versions.
**Additional context**
I'm guessing these wouldn't be fixed in other cases, but since `3.9.5.1` and `3.10.10.1` will be rolled out soon maybe we can work on this. Confirmed this with @piotrbak.
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
non_process
|
php deprecated notice when passing null to json decode for and versions before submitting an issue please check that you’ve completed the following steps made sure you’re on the latest version this is affecting versions x and x used the search feature to ensure that the bug hasn’t been reported before ✅ describe the bug when using either or with php x the following notice is displayed in the debug log php php deprecated json decode passing null to parameter json of type string is deprecated in var www example com htdocs wp content plugins wp rocket inc engine optimization rucss database row usedcss php on line to reproduce steps to reproduce the behavior enable wordpress s debugging install either wp rocket or enable remove unused css check the debug log expected behavior no notices should be displayed with these versions additional context i m guessing these wouldn t be fixed in other cases but since and will be rolled out soon maybe we can work on this confirmed this with piotrbak backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort
| 0
|
19,862
| 6,779,650,742
|
IssuesEvent
|
2017-10-29 02:28:20
|
apache/bookkeeper
|
https://api.github.com/repos/apache/bookkeeper
|
closed
|
Enable checkstyle in a few packages
|
area/build release/4.6.0 type/task
|
This is part of #230
- feature
- processor
- shims
- stats
- streaming
- versioning
- zookeeper
|
1.0
|
Enable checkstyle in a few packages - This is part of #230
- feature
- processor
- shims
- stats
- streaming
- versioning
- zookeeper
|
non_process
|
enable checkstyle in a few packages this is part of feature processor shims stats streaming versioning zookeeper
| 0
|
76,080
| 14,567,500,247
|
IssuesEvent
|
2020-12-17 10:20:14
|
creativecommons/chooser
|
https://api.github.com/repos/creativecommons/chooser
|
closed
|
Add unit and e2e tests for the LicenseUseCard component
|
Hacktoberfest good first issue help wanted 🏷 status: label work required 💻 aspect: code 🤖 aspect: dx
|
Unit and e2e tests need to be written for the LicenseUseCard component. Unit tests are done with [Jest](https://jestjs.io/), and e2e tests are done with [nightwatch](https://nightwatchjs.org/).
Please remember to test the following things:
- That individual parts of the component are present when appropriate. (unit and e2e)
- That any computed props and methods work properly, if there are any. (unit)
- Any common interactions between the user and component, if there are any. (e2e)
- Any other functionality unique to the component being tested!
### Additional Context
- [./src/components/LicenseUseCard.vue](https://github.com/creativecommons/cc-chooser/blob/master/src/components/LicenseUseCard.vue)
- [This repo's testing README](https://github.com/creativecommons/cc-chooser/blob/master/tests/README.md)
- [Vue's guide on unit testing](https://vuejs.org/v2/guide/unit-testing.html)
- [Vue's guide on unit testing with VueX](https://vue-test-utils.vuejs.org/guides/using-with-vuex.html)
|
1.0
|
Add unit and e2e tests for the LicenseUseCard component - Unit and e2e tests need to be written for the LicenseUseCard component. Unit tests are done with [Jest](https://jestjs.io/), and e2e tests are done with [nightwatch](https://nightwatchjs.org/).
Please remember to test the following things:
- That individual parts of the component are present when appropriate. (unit and e2e)
- That any computed props and methods work properly, if there are any. (unit)
- Any common interactions between the user and component, if there are any. (e2e)
- Any other functionality unique to the component being tested!
### Additional Context
- [./src/components/LicenseUseCard.vue](https://github.com/creativecommons/cc-chooser/blob/master/src/components/LicenseUseCard.vue)
- [This repo's testing README](https://github.com/creativecommons/cc-chooser/blob/master/tests/README.md)
- [Vue's guide on unit testing](https://vuejs.org/v2/guide/unit-testing.html)
- [Vue's guide on unit testing with VueX](https://vue-test-utils.vuejs.org/guides/using-with-vuex.html)
|
non_process
|
add unit and tests for the licenseusecard component unit and tests need to be written for the licenseusecard component unit tests are done with and tests are done with please remember to test the following things that individual parts of the component are present when appropriate unit and that any computed props and methods work properly if there are any unit any common interactions between the user and component if there are any any other functionality unique to the component being tested additional context
| 0
|
356,465
| 25,176,194,185
|
IssuesEvent
|
2022-11-11 09:28:21
|
JasonCP14/pe
|
https://api.github.com/repos/JasonCP14/pe
|
opened
|
Add tag use case is not trivial
|
severity.Low type.DocumentationBug
|
Use case is not trivial to follow as request for the contact is not explained clearly (is it a search or a list command)

<!--session: 1668151915010-83bb806a-c7fa-4e1e-9bb7-3123dd5720d3-->
<!--Version: Web v3.4.4-->
|
1.0
|
Add tag use case is not trivial - Use case is not trivial to follow as request for the contact is not explained clearly (is it a search or a list command)

<!--session: 1668151915010-83bb806a-c7fa-4e1e-9bb7-3123dd5720d3-->
<!--Version: Web v3.4.4-->
|
non_process
|
add tag use case is not trivial use case is not trivial to follow as request for the contact is not explained clearly is it a search or a list command
| 0
|
13,298
| 15,771,783,260
|
IssuesEvent
|
2021-03-31 20:58:06
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
opened
|
feat: Switch to Rack — possibly Roda — and away from Webrick
|
enhancement process
|
As part of the push towards (optional) Rails API integration, it's become clear to me we need to remove our dependency on Webrick and retool around Rack (and likely Puma as the actual server). And possibly not just Rack alone but a routing layer which can sit just above it—one potential solution being [Roda](http://roda.jeremyevans.net/index.html).
This would provide a number of benefits:
* The only use case currently for Webrick is local site development/testing. Webrick isn't recommended for any production use. While Bridgetown typically is a build-and-deploy solution (hence the Static Site Generator moniker), there are reasons why running Bridgetown "live" through a web service could be desirable.
* Furthermore, by making Bridgetown essentially a Rack app, it not only makes integration with a Rails API easier, it allows Bridgetown itself to enter the realm of true all-in-one web framework…with the ability to handle a pretty seamless range of SSG & SSR needs.
* Alternatively, if you already have a Rack app/Rails/etc., you could "bolt" Bridgetown on as a sub-path, just like any number of other apps (like how Rails apps can incorporate Sinatra apps such as Sidekiq's admin UI).
My initial preference to start would simply be to swap Webrick out for Rack/Puma and otherwise keep all features and CLI options as much the same as possible. Then we can assess where to go from there.
Feedback most welcome!
|
1.0
|
feat: Switch to Rack — possibly Roda — and away from Webrick - As part of the push towards (optional) Rails API integration, it's become clear to me we need to remove our dependency on Webrick and retool around Rack (and likely Puma as the actual server). And possibly not just Rack alone but a routing layer which can sit just above it—one potential solution being [Roda](http://roda.jeremyevans.net/index.html).
This would provide a number of benefits:
* The only use case currently for Webrick is local site development/testing. Webrick isn't recommended for any production use. While Bridgetown typically is a build-and-deploy solution (hence the Static Site Generator moniker), there are reasons why running Bridgetown "live" through a web service could be desirable.
* Furthermore, by making Bridgetown essentially a Rack app, it not only makes integration with a Rails API easier, it allows Bridgetown itself to enter the realm of true all-in-one web framework…with the ability to handle a pretty seamless range of SSG & SSR needs.
* Alternatively, if you already have a Rack app/Rails/etc., you could "bolt" Bridgetown on as a sub-path, just like any number of other apps (like how Rails apps can incorporate Sinatra apps such as Sidekiq's admin UI).
My initial preference to start would simply be to swap Webrick out for Rack/Puma and otherwise keep all features and CLI options as much the same as possible. Then we can assess where to go from there.
Feedback most welcome!
|
process
|
feat switch to rack — possibly roda — and away from webrick as part of the push towards optional rails api integration it s become clear to me we need to remove our dependency on webrick and retool around rack and likely puma as the actual server and possibly not just rack alone but a routing layer which can sit just above it—one potential solution being this would provide a number of benefits the only use case currently for webrick is local site development testing webrick isn t recommended for any production use while bridgetown typically is a build and deploy solution hence the static site generator moniker there are reasons why running bridgetown live through a web service could be desirable furthermore by making bridgetown essentially a rack app it not only makes integration with a rails api easier it allows bridgetown itself to enter the realm of true all in one web framework…with the ability to handle a pretty seamless range of ssg ssr needs alternatively if you already have a rack app rails etc you could bolt bridgetown on as a sub path just like any number of other apps like how rails apps can incorporate sinatra apps such as sidekiq s admin ui my initial preference to start would simply be to swap webrick out for rack puma and otherwise keep all features and cli options as much the same as possible then we can assess where to go from there feedback most welcome
| 1
|
25,197
| 3,923,905,041
|
IssuesEvent
|
2016-04-22 13:26:12
|
w3c/browser-payment-api
|
https://api.github.com/repos/w3c/browser-payment-api
|
opened
|
Should the browser API support the concept of "messages"?
|
Cat: Design/Technical Doc:BrowserAPI Priority: High question
|
This has been asked and referred to implicitly in a number of issues: #15, #146, #50, #45, #39, #133
Getting a concrete answer to this may help us make progress on those issues.
In this context a message is an object, either encoded as JSON or instantiated in a JavaScript execution environment.
A payment request message is an object with some specific members that are required in all payment requests like amount, and the set of supported payment methods and some optional members like the payment method specific data. i.e. it follows a predefined, but extensible data model.
Likewise a payment response message is an object with a predefined, but extensible data model.
The question is, should a website be able to pass a complete payment request message to the API and expect that message to be passed (unchanged) to the payment app. Also, should the payment app be able to return a payment response message and be certain that it will be passed unchanged to the calling website.
|
1.0
|
Should the browser API support the concept of "messages"? - This has been asked and referred to implicitly in a number of issues: #15, #146, #50, #45, #39, #133
Getting a concrete answer to this may help us make progress on those issues.
In this context a message is an object, either encoded as JSON or instantiated in a JavaScript execution environment.
A payment request message is an object with some specific members that are required in all payment requests like amount, and the set of supported payment methods and some optional members like the payment method specific data. i.e. it follows a predefined, but extensible data model.
Likewise a payment response message is an object with a predefined, but extensible data model.
The question is, should a website be able to pass a complete payment request message to the API and expect that message to be passed (unchanged) to the payment app. Also, should the payment app be able to return a payment response message and be certain that it will be passed unchanged to the calling website.
|
non_process
|
should the browser api support the concept of messages this has been asked and referred to implicitly in a number of issues getting a concrete answer to this may help us make progress on those issues in this context a message is an object either encoded as json or instantiated in a javascript execution environment a payment request message is an object with some specific members that are required in all payment requests like amount and the set of supported payment methods and some optional members like the payment method specific data i e it follows a predefined but extensible data model likewise a payment response message is an object with a predefined but extensible data model the question is should a website be able to pass a complete payment request message to the api and expect that message to be passed unchanged to the payment app also should the payment app be able to return a payment response message and be certain that it will be passed unchanged to the calling website
| 0
|
20,535
| 27,191,856,265
|
IssuesEvent
|
2023-02-19 22:02:06
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Step Right Up! (original) from "Reboot" (Screenshots and Title Card added)
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Step Right Up!
Type (film/tv show): TV show - family sitcom
Film or show in which it appears: Reboot
Is the parent film/show streaming anywhere? Yes - Hulu
About when in the parent film/show does it appear? Ep. 1x01 - "Step Right Up"
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp:
* 1:21 - 2:11
Cast: Reed Sterling, Bree Marie Jensen, Clay Barber, & Zack Jackson
Created by Gordon Gelman
Theme Song:
Hello friend, it may take some time
But there's no hill that we can't climb
Love and laughs will get us all through
Just one thing that you gotta do
And that's, step right up
Just step right up
Step right up
That's what you gotta do
Step...right...up












|
1.0
|
Add Step Right Up! (original) from "Reboot" (Screenshots and Title Card added) - Please add as much of the following info as you can:
Title: Step Right Up!
Type (film/tv show): TV show - family sitcom
Film or show in which it appears: Reboot
Is the parent film/show streaming anywhere? Yes - Hulu
About when in the parent film/show does it appear? Ep. 1x01 - "Step Right Up"
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp:
* 1:21 - 2:11
Cast: Reed Sterling, Bree Marie Jensen, Clay Barber, & Zack Jackson
Created by Gordon Gelman
Theme Song:
Hello friend, it may take some time
But there's no hill that we can't climb
Love and laughs will get us all through
Just one thing that you gotta do
And that's, step right up
Just step right up
Step right up
That's what you gotta do
Step...right...up












|
process
|
add step right up original from reboot screenshots and title card added please add as much of the following info as you can title step right up type film tv show tv show family sitcom film or show in which it appears reboot is the parent film show streaming anywhere yes hulu about when in the parent film show does it appear ep step right up actual footage of the film show can be seen yes no yes timestamp cast reed sterling bree marie jensen clay barber zack jackson created by gordon gelman theme song hello friend it may take some time but there s no hill that we can t climb love and laughs will get us all through just one thing that you gotta do and that s step right up just step right up step right up that s what you gotta do step right up
| 1
|
10,404
| 13,204,073,302
|
IssuesEvent
|
2020-08-14 15:15:05
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
System.Diagnostics.Process.Threads property leaks ProcessThread and ThreadInfo instances in Linux
|
area-System.Diagnostics.Process
|
I have a process that runs the following code periodically. My application is using netcoreapp3.1
Using dotnet-gcdump collect on this process and comparing two dumps, shows System.Diagnostics.ProcessThread objects building in memory.
```charp
Process[] procs = Process.GetProcessesByName(name);
if (procs != null && procs.Length > 0)
{
Process p = procs[0];
_ThreadCount = p.Threads.Count;
foreach (Process proc in procs)
proc.Dispose();
}
```
The following code corrects the issue:
```diff
Process[] procs = Process.GetProcessesByName(name);
if (procs != null && procs.Length > 0)
{
Process p = procs[0];
+ //call the Threads property one time to get the collection of ProcessThreads
+ ProcessThreadCollection pthds = p.Threads;
_ThreadCount = pthds.Count;
+ //dotnet core in Linux leaks ProcessThread objects if we don't dispose the instances in the returned Threads collection.
+ foreach(ProcessThread pt in pthds)
+ pt.Dispose();
foreach (Process proc in procs)
proc.Dispose();
}
```
I could not see this same leak when running a dotnet core application in Windows.
Is there a way to avoid this leak other than my workaround?
|
1.0
|
System.Diagnostics.Process.Threads property leaks ProcessThread and ThreadInfo instances in Linux - I have a process that runs the following code periodically. My application is using netcoreapp3.1
Using dotnet-gcdump collect on this process and comparing two dumps, shows System.Diagnostics.ProcessThread objects building in memory.
```charp
Process[] procs = Process.GetProcessesByName(name);
if (procs != null && procs.Length > 0)
{
Process p = procs[0];
_ThreadCount = p.Threads.Count;
foreach (Process proc in procs)
proc.Dispose();
}
```
The following code corrects the issue:
```diff
Process[] procs = Process.GetProcessesByName(name);
if (procs != null && procs.Length > 0)
{
Process p = procs[0];
+ //call the Threads property one time to get the collection of ProcessThreads
+ ProcessThreadCollection pthds = p.Threads;
_ThreadCount = pthds.Count;
+ //dotnet core in Linux leaks ProcessThread objects if we don't dispose the instances in the returned Threads collection.
+ foreach(ProcessThread pt in pthds)
+ pt.Dispose();
foreach (Process proc in procs)
proc.Dispose();
}
```
I could not see this same leak when running a dotnet core application in Windows.
Is there a way to avoid this leak other than my workaround?
|
process
|
system diagnostics process threads property leaks processthread and threadinfo instances in linux i have a process that runs the following code periodically my application is using using dotnet gcdump collect on this process and comparing two dumps shows system diagnostics processthread objects building in memory charp process procs process getprocessesbyname name if procs null procs length process p procs threadcount p threads count foreach process proc in procs proc dispose the following code corrects the issue diff process procs process getprocessesbyname name if procs null procs length process p procs call the threads property one time to get the collection of processthreads processthreadcollection pthds p threads threadcount pthds count dotnet core in linux leaks processthread objects if we don t dispose the instances in the returned threads collection foreach processthread pt in pthds pt dispose foreach process proc in procs proc dispose i could not see this same leak when running a dotnet core application in windows is there a way to avoid this leak other than my workaround
| 1
|
19,224
| 25,358,620,656
|
IssuesEvent
|
2022-11-20 16:35:04
|
streamnative/pulsar-spark
|
https://api.github.com/repos/streamnative/pulsar-spark
|
closed
|
[FEATURE] Bump spark to 3.2.0 with custom monitor metrics support.
|
type/feature compute/data-processing
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when \[...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
1.0
|
[FEATURE] Bump spark to 3.2.0 with custom monitor metrics support. - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when \[...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
process
|
bump spark to with custom monitor metrics support is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
| 1
|
13,786
| 16,543,894,072
|
IssuesEvent
|
2021-05-27 20:41:36
|
celo-org/celo-monorepo
|
https://api.github.com/repos/celo-org/celo-monorepo
|
reopened
|
Investigate: Plan for contractkit wrapper release process
|
CAP Component: Contracts Priority: P1 devX investigate release-process
|
We currently have no process for contractkit wrapper updates in response to contract updates. We should have one
|
1.0
|
Investigate: Plan for contractkit wrapper release process - We currently have no process for contractkit wrapper updates in response to contract updates. We should have one
|
process
|
investigate plan for contractkit wrapper release process we currently have no process for contractkit wrapper updates in response to contract updates we should have one
| 1
|
721,661
| 24,833,856,551
|
IssuesEvent
|
2022-10-26 07:11:44
|
MartinXPN/LambdaJudge
|
https://api.github.com/repos/MartinXPN/LambdaJudge
|
opened
|
Use brotli instead of gzip
|
enhancement priority/low
|
We might want to encode our tests with brotli instead of gzip: https://github.com/google/brotli
Seems like it's way more efficient
|
1.0
|
Use brotli instead of gzip - We might want to encode our tests with brotli instead of gzip: https://github.com/google/brotli
Seems like it's way more efficient
|
non_process
|
use brotli instead of gzip we might want to encode our tests with brotli instead of gzip seems like it s way more efficient
| 0
|
6,863
| 9,998,421,650
|
IssuesEvent
|
2019-07-12 08:08:39
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Changelog generation and release notifications
|
Epic process: release stage: work in progress
|
Currently we write the Changelog file (in cypress-documentation repo) manually based on releases in ZenHub. It would be nice to automate this work. The generated Changelog file can be then manually edited and added to our documentation and to GitHub releases in this repo.
First, write it as CLI tool, maybe a bot is not necessary. Also, huge bonus if there are tools that can do this already.
|
1.0
|
Changelog generation and release notifications - Currently we write the Changelog file (in cypress-documentation repo) manually based on releases in ZenHub. It would be nice to automate this work. The generated Changelog file can be then manually edited and added to our documentation and to GitHub releases in this repo.
First, write it as CLI tool, maybe a bot is not necessary. Also, huge bonus if there are tools that can do this already.
|
process
|
changelog generation and release notifications currently we write the changelog file in cypress documentation repo manually based on releases in zenhub it would be nice to automate this work the generated changelog file can be then manually edited and added to our documentation and to github releases in this repo first write it as cli tool maybe a bot is not necessary also huge bonus if there are tools that can do this already
| 1
|
2,423
| 5,201,778,693
|
IssuesEvent
|
2017-01-24 06:46:23
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [eng] #RDLS11 - ALEP, MOYEN-ORIENT, MÉTHANE, INSCRIPTION SUR LES LISTES ÉLECTORALES
|
Language: English Process: [6] Approved
|
# Video title
RDLS11 - ALEP, MOYEN-ORIENT, MÉTHANE, INSCRIPTION SUR LES LISTES ÉLECTORALES
# URL
https://www.youtube.com/watch?v=cWbD66BNWc4&t=51s
# Youtube subtitle language
Anglais
# Duration
21:33
# URL subtitles
https://www.youtube.com/timedtext_editor?action_mde_edit_form=1&ui=hd&v=cWbD66BNWc4&ref=player&lang=en&tab=captions&bl=vmp
|
1.0
|
[subtitles] [eng] #RDLS11 - ALEP, MOYEN-ORIENT, MÉTHANE, INSCRIPTION SUR LES LISTES ÉLECTORALES - # Video title
RDLS11 - ALEP, MOYEN-ORIENT, MÉTHANE, INSCRIPTION SUR LES LISTES ÉLECTORALES
# URL
https://www.youtube.com/watch?v=cWbD66BNWc4&t=51s
# Youtube subtitle language
Anglais
# Duration
21:33
# URL subtitles
https://www.youtube.com/timedtext_editor?action_mde_edit_form=1&ui=hd&v=cWbD66BNWc4&ref=player&lang=en&tab=captions&bl=vmp
|
process
|
alep moyen orient méthane inscription sur les listes électorales video title alep moyen orient méthane inscription sur les listes électorales url youtube subtitle language anglais duration url subtitles
| 1
|
13,509
| 16,047,500,600
|
IssuesEvent
|
2021-04-22 15:08:25
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
[VRL] `parse_timestamp()` function seems to be ignoring spaces in format
|
domain: parsing domain: processing domain: remap type: bug
|
### Vector Version
```
vector 0.13.0 (v0.13.0 x86_64-unknown-linux-gnu 2021-04-21)
```
### Expected Behavior
I expected the `parse_timestamp()` function to consider spaces in the `format` strictly. Examples:
```
$ parse_timestamp!("12/25/20 12:00:00", format: "%D %T")
t'2020-12-25T12:00:00Z'
$ parse_timestamp!("12/25/20-12:00:00", format: "%D - %T")
function call error for "parse_timestamp" at (0:56): Invalid timestamp "12/25/20-12:00:00": input contains invalid characters
```
### Actual Behavior
It seems that the `parse_timestamp()` function actually ignores whitespace around format specifiers when matching:
```
$ parse_timestamp!("12/25/2012:00:00", format: "%D %T")
t'2020-12-25T12:00:00Z'
$ parse_timestamp!("12/25/20 12:00:00", format: "%D %T")
t'2020-12-25T12:00:00Z'
$ parse_timestamp!(" 12/25/2012:00:00", format: "%D %T ")
t'2020-12-25T12:00:00Z'
```
While the above seems convenient, it leads to incorrect results due to the ambiguity of the non-strict parsing:
```
$ parse_timestamp!("12/25/2 12:00:00", format: "%D %T")
t'2002-12-25T12:00:00Z'
$ parse_timestamp!("12/25/212:00:00", format: "%D %T")
t'2021-12-25T02:00:00Z'
```
|
1.0
|
[VRL] `parse_timestamp()` function seems to be ignoring spaces in format - ### Vector Version
```
vector 0.13.0 (v0.13.0 x86_64-unknown-linux-gnu 2021-04-21)
```
### Expected Behavior
I expected the `parse_timestamp()` function to consider spaces in the `format` strictly. Examples:
```
$ parse_timestamp!("12/25/20 12:00:00", format: "%D %T")
t'2020-12-25T12:00:00Z'
$ parse_timestamp!("12/25/20-12:00:00", format: "%D - %T")
function call error for "parse_timestamp" at (0:56): Invalid timestamp "12/25/20-12:00:00": input contains invalid characters
```
### Actual Behavior
It seems that the `parse_timestamp()` function actually ignores whitespace around format specifiers when matching:
```
$ parse_timestamp!("12/25/2012:00:00", format: "%D %T")
t'2020-12-25T12:00:00Z'
$ parse_timestamp!("12/25/20 12:00:00", format: "%D %T")
t'2020-12-25T12:00:00Z'
$ parse_timestamp!(" 12/25/2012:00:00", format: "%D %T ")
t'2020-12-25T12:00:00Z'
```
While the above seems convenient, it leads to incorrect results due to the ambiguity of the non-strict parsing:
```
$ parse_timestamp!("12/25/2 12:00:00", format: "%D %T")
t'2002-12-25T12:00:00Z'
$ parse_timestamp!("12/25/212:00:00", format: "%D %T")
t'2021-12-25T02:00:00Z'
```
|
process
|
parse timestamp function seems to be ignoring spaces in format vector version vector unknown linux gnu expected behavior i expected the parse timestamp function to consider spaces in the format strictly examples parse timestamp format d t t parse timestamp format d t function call error for parse timestamp at invalid timestamp input contains invalid characters actual behavior it seems that the parse timestamp function actually ignores whitespace around format specifiers when matching parse timestamp format d t t parse timestamp format d t t parse timestamp format d t t while the above seems convenient it leads to incorrect results due to the ambiguity of the non strict parsing parse timestamp format d t t parse timestamp format d t t
| 1
|
453,329
| 13,067,913,672
|
IssuesEvent
|
2020-07-31 01:58:24
|
SHUReeducation/autoAPI
|
https://api.github.com/repos/SHUReeducation/autoAPI
|
closed
|
Support complex queries
|
feature request hard high priority
|
Add a `complex` field in yaml and let the user specify SQL and return values by an api.
eg.
```yaml
complex:
sql: "select aaa, bbb from ccc, ddd where eee = fff and ggg = $1 or hhh = $2;"
parmas:
- name: "iii"
type: int
- name: "jjj"
type: string
result:
name: "mmm"
array: true
page: true
fields:
- name: "kkk"
type: string
- name: "lll"
type: int64
```
Will generate an api endpoint like
```
/table/mmm?iii=1&jjj=qwerty&limit=10&offset=10
```
|
1.0
|
Support complex queries - Add a `complex` field in yaml and let the user specify SQL and return values by an api.
eg.
```yaml
complex:
sql: "select aaa, bbb from ccc, ddd where eee = fff and ggg = $1 or hhh = $2;"
parmas:
- name: "iii"
type: int
- name: "jjj"
type: string
result:
name: "mmm"
array: true
page: true
fields:
- name: "kkk"
type: string
- name: "lll"
type: int64
```
Will generate an api endpoint like
```
/table/mmm?iii=1&jjj=qwerty&limit=10&offset=10
```
|
non_process
|
support complex queries add a complex field in yaml and let the user specify sql and return values by an api eg yaml complex sql select aaa bbb from ccc ddd where eee fff and ggg or hhh parmas name iii type int name jjj type string result name mmm array true page true fields name kkk type string name lll type will generate an api endpoint like table mmm iii jjj qwerty limit offset
| 0
|
21,273
| 28,442,180,884
|
IssuesEvent
|
2023-04-16 02:41:25
|
cse442-at-ub/project_s23-atomic
|
https://api.github.com/repos/cse442-at-ub/project_s23-atomic
|
closed
|
Create way to log habits
|
Processing Task Sprint 3
|
**Task Tests**
*Test One*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/
2. Log in using username 'test' and password 'testtest'
3. Verify you are taken to the homepage with three habits: Eat Breakfast, Sleep 6-8 hours, and Smoke.
4. Click on the title "Eat Breakfast".
5. Click the '+' button twice and verify the counter went up to two.
6. Click the '-' button and verify the counter went down to one.
7. Click the "Back to Home Button" to go back to the homepage.
8. Verify that the "Eat Breakfast" habit has a one in the counter.
*Test 2*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/
2. Log in using username 'test' and password 'testtest'
3. Verify you are taken to the homepage with three habits: Eat Breakfast, Sleep 6-8 hours, and Smoke.
4. 4. Click on the title "Smoke".
5. Click the '+' button twice and verify the counter went up to two.
6. Click the '-' button and verify the counter went down to one.
7. Click the "Back to Home Button" to go back to the homepage.
8. Verify that the "Smoke" habit has a one in the counter.
|
1.0
|
Create way to log habits - **Task Tests**
*Test One*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/
2. Log in using username 'test' and password 'testtest'
3. Verify you are taken to the homepage with three habits: Eat Breakfast, Sleep 6-8 hours, and Smoke.
4. Click on the title "Eat Breakfast".
5. Click the '+' button twice and verify the counter went up to two.
6. Click the '-' button and verify the counter went down to one.
7. Click the "Back to Home Button" to go back to the homepage.
8. Verify that the "Eat Breakfast" habit has a one in the counter.
*Test 2*
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/
2. Log in using username 'test' and password 'testtest'
3. Verify you are taken to the homepage with three habits: Eat Breakfast, Sleep 6-8 hours, and Smoke.
4. 4. Click on the title "Smoke".
5. Click the '+' button twice and verify the counter went up to two.
6. Click the '-' button and verify the counter went down to one.
7. Click the "Back to Home Button" to go back to the homepage.
8. Verify that the "Smoke" habit has a one in the counter.
|
process
|
create way to log habits task tests test one go to log in using username test and password testtest verify you are taken to the homepage with three habits eat breakfast sleep hours and smoke click on the title eat breakfast click the button twice and verify the counter went up to two click the button and verify the counter went down to one click the back to home button to go back to the homepage verify that the eat breakfast habit has a one in the counter test go to log in using username test and password testtest verify you are taken to the homepage with three habits eat breakfast sleep hours and smoke click on the title smoke click the button twice and verify the counter went up to two click the button and verify the counter went down to one click the back to home button to go back to the homepage verify that the smoke habit has a one in the counter
| 1
|
279,715
| 8,672,157,616
|
IssuesEvent
|
2018-11-29 21:17:05
|
MarcusWolschon/osmeditor4android
|
https://api.github.com/repos/MarcusWolschon/osmeditor4android
|
closed
|
Enable Notes to be repositioned once created
|
Enhancement Medium Priority
|
I've found that I may want to record notes when moving outside my target editing area (and therefore not downloaded on the phone). The easiest way to do this is to add the note at one's location, but this may lead to some ambiguity in a note: particularly if backgrounds aren't present in the cache. However, if notes were re-positionable before uploading (when additional imagery may also be available) they may be much more useful.
The note which I created which made me realise why this behaviour would be useful is this one: https://www.openstreetmap.org/note/1036699. One is much more likely to be terse when entering notes on the phone, and terseness associated with imprecision leads to ambiguity. So with these I have added clarifying comments once uploaded to OSM.
|
1.0
|
Enable Notes to be repositioned once created - I've found that I may want to record notes when moving outside my target editing area (and therefore not downloaded on the phone). The easiest way to do this is to add the note at one's location, but this may lead to some ambiguity in a note: particularly if backgrounds aren't present in the cache. However, if notes were re-positionable before uploading (when additional imagery may also be available) they may be much more useful.
The note which I created which made me realise why this behaviour would be useful is this one: https://www.openstreetmap.org/note/1036699. One is much more likely to be terse when entering notes on the phone, and terseness associated with imprecision leads to ambiguity. So with these I have added clarifying comments once uploaded to OSM.
|
non_process
|
enable notes to be repositioned once created i ve found that i may want to record notes when moving outside my target editing area and therefore not downloaded on the phone the easiest way to do this is to add the note at one s location but this may lead to some ambiguity in a note particularly if backgrounds aren t present in the cache however if notes were re positionable before uploading when additional imagery may also be available they may be much more useful the note which i created which made me realise why this behaviour would be useful is this one one is much more likely to be terse when entering notes on the phone and terseness associated with imprecision leads to ambiguity so with these i have added clarifying comments once uploaded to osm
| 0
|
286,798
| 31,769,483,014
|
IssuesEvent
|
2023-09-12 10:50:06
|
valtech-ch/microservice-kubernetes-cluster
|
https://api.github.com/repos/valtech-ch/microservice-kubernetes-cluster
|
opened
|
CVE-2023-34453 (High) detected in snappy-java-1.1.8.4.jar
|
Mend: dependency security vulnerability
|
## CVE-2023-34453 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p></summary>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /file-storage/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.xerial.snappy/snappy-java/1.1.8.4/66f0d56454509f6e36175f2331572e250e04a6cc/snappy-java-1.1.8.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.xerial.snappy/snappy-java/1.1.8.4/66f0d56454509f6e36175f2331572e250e04a6cc/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- kafka-clients-3.4.1.jar (Root Library)
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
snappy-java is a fast compressor/decompressor for Java. Due to unchecked multiplications, an integer overflow may occur in versions prior to 1.1.10.1, causing a fatal error.
The function `shuffle(int[] input)` in the file `BitShuffle.java` receives an array of integers and applies a bit shuffle on it. It does so by multiplying the length by 4 and passing it to the natively compiled shuffle function. Since the length is not tested, the multiplication by four can cause an integer overflow and become a smaller value than the true size, or even zero or negative. In the case of a negative value, a `java.lang.NegativeArraySizeException` exception will raise, which can crash the program. In a case of a value that is zero or too small, the code that afterwards references the shuffled array will assume a bigger size of the array, which might cause exceptions such as `java.lang.ArrayIndexOutOfBoundsException`.
The same issue exists also when using the `shuffle` functions that receive a double, float, long and short, each using a different multiplier that may cause the same issue.
Version 1.1.10.1 contains a patch for this vulnerability.
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34453>CVE-2023-34453</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf">https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution (org.xerial.snappy:snappy-java): 1.1.10.1</p>
<p>Direct dependency fix Resolution (org.apache.kafka:kafka-clients): 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-34453 (High) detected in snappy-java-1.1.8.4.jar - ## CVE-2023-34453 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p></summary>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /file-storage/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.xerial.snappy/snappy-java/1.1.8.4/66f0d56454509f6e36175f2331572e250e04a6cc/snappy-java-1.1.8.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.xerial.snappy/snappy-java/1.1.8.4/66f0d56454509f6e36175f2331572e250e04a6cc/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- kafka-clients-3.4.1.jar (Root Library)
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
snappy-java is a fast compressor/decompressor for Java. Due to unchecked multiplications, an integer overflow may occur in versions prior to 1.1.10.1, causing a fatal error.
The function `shuffle(int[] input)` in the file `BitShuffle.java` receives an array of integers and applies a bit shuffle on it. It does so by multiplying the length by 4 and passing it to the natively compiled shuffle function. Since the length is not tested, the multiplication by four can cause an integer overflow and become a smaller value than the true size, or even zero or negative. In the case of a negative value, a `java.lang.NegativeArraySizeException` exception will raise, which can crash the program. In a case of a value that is zero or too small, the code that afterwards references the shuffled array will assume a bigger size of the array, which might cause exceptions such as `java.lang.ArrayIndexOutOfBoundsException`.
The same issue exists also when using the `shuffle` functions that receive a double, float, long and short, each using a different multiplier that may cause the same issue.
Version 1.1.10.1 contains a patch for this vulnerability.
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34453>CVE-2023-34453</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf">https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution (org.xerial.snappy:snappy-java): 1.1.10.1</p>
<p>Direct dependency fix Resolution (org.apache.kafka:kafka-clients): 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in snappy java jar cve high severity vulnerability vulnerable library snappy java jar snappy java a fast compression decompression library library home page a href path to dependency file file storage build gradle path to vulnerable library home wss scanner gradle caches modules files org xerial snappy snappy java snappy java jar home wss scanner gradle caches modules files org xerial snappy snappy java snappy java jar dependency hierarchy kafka clients jar root library x snappy java jar vulnerable library found in head commit a href found in base branch develop vulnerability details snappy java is a fast compressor decompressor for java due to unchecked multiplications an integer overflow may occur in versions prior to causing a fatal error the function shuffle int input in the file bitshuffle java receives an array of integers and applies a bit shuffle on it it does so by multiplying the length by and passing it to the natively compiled shuffle function since the length is not tested the multiplication by four can cause an integer overflow and become a smaller value than the true size or even zero or negative in the case of a negative value a java lang negativearraysizeexception exception will raise which can crash the program in a case of a value that is zero or too small the code that afterwards references the shuffled array will assume a bigger size of the array which might cause exceptions such as java lang arrayindexoutofboundsexception the same issue exists also when using the shuffle functions that receive a double float long and short each using a different multiplier that may cause the same issue version contains a patch for this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org xerial snappy snappy java direct dependency fix resolution org apache kafka kafka clients step up your open source security game with mend
| 0
|
11,439
| 14,260,675,073
|
IssuesEvent
|
2020-11-20 10:10:02
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Is this term a required? GO:0009814 defense response, incompatible interaction
|
multi-species process term merge
|
GO:0009814 defense response, incompatible interaction
Definition
A response of a plant to a pathogenic agent that prevents the occurrence or spread of disease.
How does this term differ from its parent "innate immune response"
(other than being plant specific?)
if this term is required, it should probably be renamed plant innate immune response?
Or defined more precisely.
However note that a LOT of plant immune terms which would fit this definitions are not under "GO:0009814 defense response, incompatible interaction"
@tberardini
|
1.0
|
Is this term a required? GO:0009814 defense response, incompatible interaction -
GO:0009814 defense response, incompatible interaction
Definition
A response of a plant to a pathogenic agent that prevents the occurrence or spread of disease.
How does this term differ from its parent "innate immune response"
(other than being plant specific?)
if this term is required, it should probably be renamed plant innate immune response?
Or defined more precisely.
However note that a LOT of plant immune terms which would fit this definitions are not under "GO:0009814 defense response, incompatible interaction"
@tberardini
|
process
|
is this term a required go defense response incompatible interaction go defense response incompatible interaction definition a response of a plant to a pathogenic agent that prevents the occurrence or spread of disease how does this term differ from its parent innate immune response other than being plant specific if this term is required it should probably be renamed plant innate immune response or defined more precisely however note that a lot of plant immune terms which would fit this definitions are not under go defense response incompatible interaction tberardini
| 1
|
2
| 2,490,628,501
|
IssuesEvent
|
2015-01-02 17:44:58
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
closed
|
LocalStep --> EachStep?
|
enhancement process
|
I'm wondering if `local()` is too difficult for someone to grasp off the bat. I was thinking that `each()` would be a better name... :/
```java
g.V().each(g.of().outE().limit(1)).inV()
```
`each`, `local`, `current`, `single`, .... dunno. Perhaps `local()` is best. Thoughts?
@dkuppitz @mbroecheler @BrynCooke
|
1.0
|
LocalStep --> EachStep? - I'm wondering if `local()` is too difficult for someone to grasp off the bat. I was thinking that `each()` would be a better name... :/
```java
g.V().each(g.of().outE().limit(1)).inV()
```
`each`, `local`, `current`, `single`, .... dunno. Perhaps `local()` is best. Thoughts?
@dkuppitz @mbroecheler @BrynCooke
|
process
|
localstep eachstep i m wondering if local is too difficult for someone to grasp off the bat i was thinking that each would be a better name java g v each g of oute limit inv each local current single dunno perhaps local is best thoughts dkuppitz mbroecheler bryncooke
| 1
|
3,545
| 6,585,263,204
|
IssuesEvent
|
2017-09-13 13:28:36
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
Multiple crashes caused by check_and_correct_segmentation()
|
bug card:WORK_IN_PROCESS priority:HIGH sct_propseg
|
Examples:
* xuanwu_20160624-yaou_MS013
I'd like to do an extensive test, with/without this feature. We will keep the best one.
### State of spinalcordtoolbox
Spinal Cord Toolbox (master/a0617b16c5d7776a91ff8b8e2bc03c95fb869ba3)
### Additional Information
#### sct_propseg -i t1/t1.nii.gz -c t1
d14a478cba159b5b27139192cabebcf32ff3682c/master:
~~~
Passed: 76/116
Crashed: 0/116
Mean: {'dice_segmentation': 0.8675209610129629, 'duration [s]': 80.44931010542244}
~~~
#### sct_propseg -i t2/t2.nii.gz -c t2
d14a478cba159b5b27139192cabebcf32ff3682c/master:
~~~
Passed: 223/301
Crashed: 0/301
Mean: {'dice_segmentation': 0.8995392931500836, 'duration [s]': 65.27538325461835}
~~~
b93178e48e0f08d27ebbbc0c90ea39545f86de5a/jca_issue1454
~~~
Passed: 220/301
Crashed: 0/301
Mean: {'dice_segmentation': 0.9118571356288937, 'duration [s]': 62.04237300612998}
~~~
#### sct_propseg -i t2s/t2s.nii.gz -c t2s
d14a478cba159b5b27139192cabebcf32ff3682c/master:
~~~
Passed: 201/336
Crashed: 0/336
Mean: {'dice_segmentation': 0.8597158948811471, 'duration [s]': 44.255353592690966}
~~~
b93178e48e0f08d27ebbbc0c90ea39545f86de5a/jca_issue1454
~~~
Passed: 201/336
Crashed: 0/336
Mean: {'dice_segmentation': 0.8621050399869569, 'duration [s]': 41.51233793795109}
~~~
|
1.0
|
Multiple crashes caused by check_and_correct_segmentation() - Examples:
* xuanwu_20160624-yaou_MS013
I'd like to do an extensive test, with/without this feature. We will keep the best one.
### State of spinalcordtoolbox
Spinal Cord Toolbox (master/a0617b16c5d7776a91ff8b8e2bc03c95fb869ba3)
### Additional Information
#### sct_propseg -i t1/t1.nii.gz -c t1
d14a478cba159b5b27139192cabebcf32ff3682c/master:
~~~
Passed: 76/116
Crashed: 0/116
Mean: {'dice_segmentation': 0.8675209610129629, 'duration [s]': 80.44931010542244}
~~~
#### sct_propseg -i t2/t2.nii.gz -c t2
d14a478cba159b5b27139192cabebcf32ff3682c/master:
~~~
Passed: 223/301
Crashed: 0/301
Mean: {'dice_segmentation': 0.8995392931500836, 'duration [s]': 65.27538325461835}
~~~
b93178e48e0f08d27ebbbc0c90ea39545f86de5a/jca_issue1454
~~~
Passed: 220/301
Crashed: 0/301
Mean: {'dice_segmentation': 0.9118571356288937, 'duration [s]': 62.04237300612998}
~~~
#### sct_propseg -i t2s/t2s.nii.gz -c t2s
d14a478cba159b5b27139192cabebcf32ff3682c/master:
~~~
Passed: 201/336
Crashed: 0/336
Mean: {'dice_segmentation': 0.8597158948811471, 'duration [s]': 44.255353592690966}
~~~
b93178e48e0f08d27ebbbc0c90ea39545f86de5a/jca_issue1454
~~~
Passed: 201/336
Crashed: 0/336
Mean: {'dice_segmentation': 0.8621050399869569, 'duration [s]': 41.51233793795109}
~~~
|
process
|
multiple crashes caused by check and correct segmentation examples xuanwu yaou i d like to do an extensive test with without this feature we will keep the best one state of spinalcordtoolbox spinal cord toolbox master additional information sct propseg i nii gz c master passed crashed mean dice segmentation duration sct propseg i nii gz c master passed crashed mean dice segmentation duration jca passed crashed mean dice segmentation duration sct propseg i nii gz c master passed crashed mean dice segmentation duration jca passed crashed mean dice segmentation duration
| 1
|
206,011
| 16,016,761,340
|
IssuesEvent
|
2021-04-20 16:58:56
|
UWB-Biocomputing/Graphitti
|
https://api.github.com/repos/UWB-Biocomputing/Graphitti
|
closed
|
Link all of the GitHub documentation into the index
|
cleanup documentation
|
It's not necessary for everything to be finalized; let's just get it linked so we can see everything.
|
1.0
|
Link all of the GitHub documentation into the index - It's not necessary for everything to be finalized; let's just get it linked so we can see everything.
|
non_process
|
link all of the github documentation into the index it s not necessary for everything to be finalized let s just get it linked so we can see everything
| 0
|
9,988
| 3,348,465,352
|
IssuesEvent
|
2015-11-17 02:03:47
|
trevormast/blog-poole
|
https://api.github.com/repos/trevormast/blog-poole
|
closed
|
Create initial content for index page
|
documentation
|
Use the current template and add content to engage our users. Keep the call-to-action in the first frame and add useful information to illustrate a step-by-step for creating the blog.
|
1.0
|
Create initial content for index page - Use the current template and add content to engage our users. Keep the call-to-action in the first frame and add useful information to illustrate a step-by-step for creating the blog.
|
non_process
|
create initial content for index page use the current template and add content to engage our users keep the call to action in the first frame and add useful information to illustrate a step by step for creating the blog
| 0
|
16,462
| 21,387,497,168
|
IssuesEvent
|
2022-04-21 01:19:53
|
MicrosoftDocs/windows-uwp
|
https://api.github.com/repos/MicrosoftDocs/windows-uwp
|
closed
|
Please elaborate on irritating statement "the JIT will fail"
|
doc-bug uwp/prod processes-and-threading/tech Pri2
|
On the below mentioned page, below the example, it reads:
> The reason the call to `CoreApplication.EnablePrelaunch()` is factored out into this function is because when a method is called, the JIT (just in time compilation) will attempt to compile the entire method. If your app is running on a version of Windows 10 that doesn't support `CoreApplication.EnablePrelaunch()`, then the JIT will fail.
Please elaborate on the following issues:
1. Why is "JIT compilation" involved here? UWP code is compiled and packaged before it is deployed.
1. What exactly happens when "the JIT" fails?
1. Why does "the JIT" fail only with `CoreApplication.EnablePrelaunch()` when `CoreApplication.EnablePrelaunch()` is not extracted to a separate method? Why doesn't it fail for any other version dependant WinRT function when such WinRT call appears somewhere in code, not being extracted to a separate method (which I'd suspect to be coding hell then anyway)?
I raised a [question at StackOverflow](https://stackoverflow.com/questions/64344156/why-is-this-winrt-function-required-to-be-extracted-while-others-are-not/64348318), but the answers there couldn't shed light on this description yet.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5b28257b-4928-0d9c-b291-1e3a0531fc08
* Version Independent ID: 31a240eb-93ab-f7f1-2f32-3d5f5f995bab
* Content: [Handle app prelaunch - UWP applications - Detect and handle prelaunch](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/handle-app-prelaunch#detect-and-handle-prelaunch)
* Content Source: [windows-apps-src/launch-resume/handle-app-prelaunch.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/handle-app-prelaunch.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @lastnameholiu
* Microsoft Alias: **alholiu**
|
1.0
|
Please elaborate on irritating statement "the JIT will fail" - On the below mentioned page, below the example, it reads:
> The reason the call to `CoreApplication.EnablePrelaunch()` is factored out into this function is because when a method is called, the JIT (just in time compilation) will attempt to compile the entire method. If your app is running on a version of Windows 10 that doesn't support `CoreApplication.EnablePrelaunch()`, then the JIT will fail.
Please elaborate on the following issues:
1. Why is "JIT compilation" involved here? UWP code is compiled and packaged before it is deployed.
1. What exactly happens when "the JIT" fails?
1. Why does "the JIT" fail only with `CoreApplication.EnablePrelaunch()` when `CoreApplication.EnablePrelaunch()` is not extracted to a separate method? Why doesn't it fail for any other version dependant WinRT function when such WinRT call appears somewhere in code, not being extracted to a separate method (which I'd suspect to be coding hell then anyway)?
I raised a [question at StackOverflow](https://stackoverflow.com/questions/64344156/why-is-this-winrt-function-required-to-be-extracted-while-others-are-not/64348318), but the answers there couldn't shed light on this description yet.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5b28257b-4928-0d9c-b291-1e3a0531fc08
* Version Independent ID: 31a240eb-93ab-f7f1-2f32-3d5f5f995bab
* Content: [Handle app prelaunch - UWP applications - Detect and handle prelaunch](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/handle-app-prelaunch#detect-and-handle-prelaunch)
* Content Source: [windows-apps-src/launch-resume/handle-app-prelaunch.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/handle-app-prelaunch.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @lastnameholiu
* Microsoft Alias: **alholiu**
|
process
|
please elaborate on irritating statement the jit will fail on the below mentioned page below the example it reads the reason the call to coreapplication enableprelaunch is factored out into this function is because when a method is called the jit just in time compilation will attempt to compile the entire method if your app is running on a version of windows that doesn t support coreapplication enableprelaunch then the jit will fail please elaborate on the following issues why is jit compilation involved here uwp code is compiled and packaged before it is deployed what exactly happens when the jit fails why does the jit fail only with coreapplication enableprelaunch when coreapplication enableprelaunch is not extracted to a separate method why doesn t it fail for any other version dependant winrt function when such winrt call appears somewhere in code not being extracted to a separate method which i d suspect to be coding hell then anyway i raised a but the answers there couldn t shed light on this description yet document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login lastnameholiu microsoft alias alholiu
| 1
|
14,508
| 8,609,883,391
|
IssuesEvent
|
2018-11-19 01:30:27
|
atom/atom
|
https://api.github.com/repos/atom/atom
|
closed
|
Load files asynchronously
|
performance stale
|
### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Reproduced the problem in Safe Mode: http://flight-manual.atom.io/hacking-atom/sections/debugging/#using-safe-mode
* Followed all applicable steps in the debugging guide: http://flight-manual.atom.io/hacking-atom/sections/debugging/
* Checked the FAQs on the message board for common solutions: https://discuss.atom.io/c/faq
* Checked that your issue isn't already filed: https://github.com/issues?utf8=✓&q=is%3Aissue+user%3Aatom
* Checked that there is not already an Atom package that provides the described functionality: https://atom.io/packages
### Description
Atom should not assume that opening a file is immediate. Correct this by displaying a new file opening indicator for new tabs while a file is loading.
It may be appropriate to use asynchronous file loading based on how long the file load operation takes (duck typing). There may also be an operating system API available to detect unreliable (network) file systems. However I am **SURE** you would get burned by using this.
### Steps to Reproduce
1. `sshfs root@yourserver.org ~/Mount/server`
2. `atom ~/Mount/server`
3. Yank the ethernet cord
4. Use the atom file browser to open a.txt on the opened ~/Mount/server folder
**Expected behavior:**
1. A new tab is opened in the main window to represent the file to be opened
2. Typing is disabled in the main window (because the file is not yet loaded)
3. After 250 ms a spinner begins spinning (right where the blue "file modified" icon normally goes)
4. After 10 seconds an error displays in the tab and it explains why the file failed to open
**Actual behavior:**
Entire program becomes unresponsive.
**Reproduces how often:**
Every time
### Versions
Atom : 1.21.1
Electron: 1.6.15
Chrome : 56.0.2924.87
Node : 7.4.0
macOS 10.13 (17A405)
apm 1.18.5
npm 3.10.10
node 6.9.5 x64
python 2.7.10
git 2.13.5
|
True
|
Load files asynchronously - ### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Reproduced the problem in Safe Mode: http://flight-manual.atom.io/hacking-atom/sections/debugging/#using-safe-mode
* Followed all applicable steps in the debugging guide: http://flight-manual.atom.io/hacking-atom/sections/debugging/
* Checked the FAQs on the message board for common solutions: https://discuss.atom.io/c/faq
* Checked that your issue isn't already filed: https://github.com/issues?utf8=✓&q=is%3Aissue+user%3Aatom
* Checked that there is not already an Atom package that provides the described functionality: https://atom.io/packages
### Description
Atom should not assume that opening a file is immediate. Correct this by displaying a new file opening indicator for new tabs while a file is loading.
It may be appropriate to use asynchronous file loading based on how long the file load operation takes (duck typing). There may also be an operating system API available to detect unreliable (network) file systems. However I am **SURE** you would get burned by using this.
### Steps to Reproduce
1. `sshfs root@yourserver.org ~/Mount/server`
2. `atom ~/Mount/server`
3. Yank the ethernet cord
4. Use the atom file browser to open a.txt on the opened ~/Mount/server folder
**Expected behavior:**
1. A new tab is opened in the main window to represent the file to be opened
2. Typing is disabled in the main window (because the file is not yet loaded)
3. After 250 ms a spinner begins spinning (right where the blue "file modified" icon normally goes)
4. After 10 seconds an error displays in the tab and it explains why the file failed to open
**Actual behavior:**
Entire program becomes unresponsive.
**Reproduces how often:**
Every time
### Versions
Atom : 1.21.1
Electron: 1.6.15
Chrome : 56.0.2924.87
Node : 7.4.0
macOS 10.13 (17A405)
apm 1.18.5
npm 3.10.10
node 6.9.5 x64
python 2.7.10
git 2.13.5
|
non_process
|
load files asynchronously prerequisites put an x between the brackets on this line if you have done all of the following reproduced the problem in safe mode followed all applicable steps in the debugging guide checked the faqs on the message board for common solutions checked that your issue isn t already filed checked that there is not already an atom package that provides the described functionality description atom should not assume that opening a file is immediate correct this by displaying a new file opening indicator for new tabs while a file is loading it may be appropriate to use asynchronous file loading based on how long the file load operation takes duck typing there may also be an operating system api available to detect unreliable network file systems however i am sure you would get burned by using this steps to reproduce sshfs root yourserver org mount server atom mount server yank the ethernet cord use the atom file browser to open a txt on the opened mount server folder expected behavior a new tab is opened in the main window to represent the file to be opened typing is disabled in the main window because the file is not yet loaded after ms a spinner begins spinning right where the blue file modified icon normally goes after seconds an error displays in the tab and it explains why the file failed to open actual behavior entire program becomes unresponsive reproduces how often every time versions atom electron chrome node macos apm npm node python git
| 0
|
24,311
| 11,029,194,311
|
IssuesEvent
|
2019-12-06 13:25:45
|
rammatzkvosky/react-middle-truncate
|
https://api.github.com/repos/rammatzkvosky/react-middle-truncate
|
opened
|
WS-2019-0318 (Medium) detected in handlebars-4.1.2.tgz
|
security vulnerability
|
## WS-2019-0318 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-middle-truncate/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-middle-truncate/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- nyc-14.1.1.tgz (Root Library)
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/react-middle-truncate/commit/317201dce9b8bb4d1a5b45c9891162a25f530025">317201dce9b8bb4d1a5b45c9891162a25f530025</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Denial of Service vulnerability found in handlebars 4.x before 4.4.5.While processing specially-crafted templates, the parser may be forced into endless loop. Attackers may exhaust system resources.
<p>Publish Date: 2019-12-01
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0318</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":true,"dependencyTree":"nyc:14.1.1;istanbul-reports:2.2.6;handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.4.5"}],"vulnerabilityIdentifier":"WS-2019-0318","vulnerabilityDetails":"A Denial of Service vulnerability found in handlebars 4.x before 4.4.5.While processing specially-crafted templates, the parser may be forced into endless loop. Attackers may exhaust system resources.","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> -->
|
True
|
WS-2019-0318 (Medium) detected in handlebars-4.1.2.tgz - ## WS-2019-0318 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-middle-truncate/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-middle-truncate/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- nyc-14.1.1.tgz (Root Library)
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/react-middle-truncate/commit/317201dce9b8bb4d1a5b45c9891162a25f530025">317201dce9b8bb4d1a5b45c9891162a25f530025</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Denial of Service vulnerability found in handlebars 4.x before 4.4.5.While processing specially-crafted templates, the parser may be forced into endless loop. Attackers may exhaust system resources.
<p>Publish Date: 2019-12-01
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0318</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":true,"dependencyTree":"nyc:14.1.1;istanbul-reports:2.2.6;handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.4.5"}],"vulnerabilityIdentifier":"WS-2019-0318","vulnerabilityDetails":"A Denial of Service vulnerability found in handlebars 4.x before 4.4.5.While processing specially-crafted templates, the parser may be forced into endless loop. Attackers may exhaust system resources.","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> -->
|
non_process
|
ws medium detected in handlebars tgz ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm react middle truncate package json path to vulnerable library tmp ws scm react middle truncate node modules handlebars package json dependency hierarchy nyc tgz root library istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details a denial of service vulnerability found in handlebars x before while processing specially crafted templates the parser may be forced into endless loop attackers may exhaust system resources publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails a denial of service vulnerability found in handlebars x before while processing specially crafted templates the parser may be forced into endless loop attackers may exhaust system resources vulnerabilityurl
| 0
|
13,246
| 15,715,675,437
|
IssuesEvent
|
2021-03-28 02:43:15
|
Andon-A/New-Hoard-Generator
|
https://api.github.com/repos/Andon-A/New-Hoard-Generator
|
closed
|
Add Curses
|
In Process enhancement
|
Add some curses for cursed items.
Curses should not count towards prefixes or suffixes. Perhaps they shouldn't have any bonus attached to them either?
Items should likely have only one curse, although the possibility of multiple curses is intruiging.
|
1.0
|
Add Curses - Add some curses for cursed items.
Curses should not count towards prefixes or suffixes. Perhaps they shouldn't have any bonus attached to them either?
Items should likely have only one curse, although the possibility of multiple curses is intruiging.
|
process
|
add curses add some curses for cursed items curses should not count towards prefixes or suffixes perhaps they shouldn t have any bonus attached to them either items should likely have only one curse although the possibility of multiple curses is intruiging
| 1
|
11,406
| 14,238,632,834
|
IssuesEvent
|
2020-11-18 18:55:06
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Query GO:0080027 response to herbivore
|
multi-species process response_to_terms
|
Hello,
Working on #19693 we noticed this term, GO:0080027 response to herbivore
20 UniProt annotations
8 TAIR annotations
Would it be OK to obsolete and instead use 'response to wounding' or 'response to insect'?
@tberardini @SuperManu74
Thanks, Pascale
|
1.0
|
Query GO:0080027 response to herbivore - Hello,
Working on #19693 we noticed this term, GO:0080027 response to herbivore
20 UniProt annotations
8 TAIR annotations
Would it be OK to obsolete and instead use 'response to wounding' or 'response to insect'?
@tberardini @SuperManu74
Thanks, Pascale
|
process
|
query go response to herbivore hello working on we noticed this term go response to herbivore uniprot annotations tair annotations would it be ok to obsolete and instead use response to wounding or response to insect tberardini thanks pascale
| 1
|
8,865
| 11,960,822,465
|
IssuesEvent
|
2020-04-05 05:16:15
|
Pand-Aid/pandaid-api
|
https://api.github.com/repos/Pand-Aid/pandaid-api
|
closed
|
Add Issue and Pull Request templates
|
enhancement process
|
## Summary
Github allows for the use of templates for creating issues and Pull Requests.
More on templates:
https://help.github.com/en/github/building-a-strong-community/about-issue-and-pull-request-templates
As a project lead and developer I would like to add templates to this repo for our use.
### Basic behavior example
Example use of templates in Hack Oregon Project:
https://github.com/hackoregon/civic-devops/issues/new/choose
And another from Gatsby:
https://github.com/gatsbyjs/gatsby/tree/master/.github/ISSUE_TEMPLATE
### Motivation
Templates help to organize conversation, implement good development practices, and in prioritizing work.
|
1.0
|
Add Issue and Pull Request templates - ## Summary
Github allows for the use of templates for creating issues and Pull Requests.
More on templates:
https://help.github.com/en/github/building-a-strong-community/about-issue-and-pull-request-templates
As a project lead and developer I would like to add templates to this repo for our use.
### Basic behavior example
Example use of templates in Hack Oregon Project:
https://github.com/hackoregon/civic-devops/issues/new/choose
And another from Gatsby:
https://github.com/gatsbyjs/gatsby/tree/master/.github/ISSUE_TEMPLATE
### Motivation
Templates help to organize conversation, implement good development practices, and in prioritizing work.
|
process
|
add issue and pull request templates summary github allows for the use of templates for creating issues and pull requests more on templates as a project lead and developer i would like to add templates to this repo for our use basic behavior example example use of templates in hack oregon project and another from gatsby motivation templates help to organize conversation implement good development practices and in prioritizing work
| 1
|
22,384
| 7,165,256,871
|
IssuesEvent
|
2018-01-29 13:56:58
|
angular/angular-cli
|
https://api.github.com/repos/angular/angular-cli
|
closed
|
ng serve reloads fail with cli 1.7.0 beta
|
comp: cli/build
|
### Versions
```
Angular CLI: 1.7.0-beta.1
Node: 9.4.0
OS: win32 x64
Angular: 6.0.0-beta.0
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
@angular/cli: 1.7.0-beta.1
@ngtools/json-schema: 1.1.0
typescript: 2.6.2
webpack: 3.10.0
```
### Repro steps
create app with routing
ng serve, load home page, navigate away from home url
reload page in browser - or make edit and save for auto releoad
### Observed behavior
```
Load fails as it tries to load everything from wrong path (... below represents path that was navigated to)
GET http://localhost:4200/.../inline.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../polyfills.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../styles.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../vendor.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../main.bundle.js net::ERR_ABORTED
```
### Desired behavior
It should work as it does with cli version < 1.7.0
### Mention any other details that might be useful (optional)
deployed dist continues to work, its only ng serve that is failing
|
1.0
|
ng serve reloads fail with cli 1.7.0 beta - ### Versions
```
Angular CLI: 1.7.0-beta.1
Node: 9.4.0
OS: win32 x64
Angular: 6.0.0-beta.0
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
@angular/cli: 1.7.0-beta.1
@ngtools/json-schema: 1.1.0
typescript: 2.6.2
webpack: 3.10.0
```
### Repro steps
create app with routing
ng serve, load home page, navigate away from home url
reload page in browser - or make edit and save for auto releoad
### Observed behavior
```
Load fails as it tries to load everything from wrong path (... below represents path that was navigated to)
GET http://localhost:4200/.../inline.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../polyfills.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../styles.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../vendor.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../main.bundle.js net::ERR_ABORTED
```
### Desired behavior
It should work as it does with cli version < 1.7.0
### Mention any other details that might be useful (optional)
deployed dist continues to work, its only ng serve that is failing
|
non_process
|
ng serve reloads fail with cli beta versions angular cli beta node os angular beta animations common compiler compiler cli core forms language service platform browser platform browser dynamic router angular cli beta ngtools json schema typescript webpack repro steps create app with routing ng serve load home page navigate away from home url reload page in browser or make edit and save for auto releoad observed behavior load fails as it tries to load everything from wrong path below represents path that was navigated to get net err aborted get net err aborted get net err aborted get net err aborted get net err aborted desired behavior it should work as it does with cli version mention any other details that might be useful optional deployed dist continues to work its only ng serve that is failing
| 0
|
535,344
| 15,686,693,706
|
IssuesEvent
|
2021-03-25 12:50:37
|
GoogleContainerTools/skaffold
|
https://api.github.com/repos/GoogleContainerTools/skaffold
|
closed
|
[BUG] Relative paths inside of deploy.helm.releases.valuesFiles are unusable for multiplatform Win/Lin usage
|
kind/feature-request priority/p3
|
Looks like a bug in relative path inside of `deploy.helm.releases.valuesFiles` on different OS (Linux/Windows)
### Actual behavior
Root inside of `app` directory.
```bash
app
├── skaffold.yaml
├── resources
│ ├── helm
│ │ └── values.yaml
│ └── skaffold
│ ├── build
│ │ └── skaffold.yaml
```
Values `profiles.deploy.helm.releases.valuesFiles` inside of `app/resources/skaffold/build/skaffold.yaml`
Behavior:
```bash
# windows
profiles.deploy.helm.releases.valuesFiles:
- resources/helm/values.yaml # <---- Error: open C:\test\app\resources\skaffold\deploy\resources\helm\values.yaml: The system cannot find the path specified. exiting dev mode because first deploy failed: install: exit status 1
- ../../helm/values.yaml # <---- takes from root of the project, works well
```
```bash
# linux
profiles.deploy.helm.releases.valuesFiles:
- resources/helm/values.yaml # <---- takes from root of the project, works well
- ../../helm/values.yaml # <---- Error: open ./../../helm/values.yaml: no such file or directory
```
JFYI: `profiles.deploy.helm.releases.chartPath: ../../helm` works well for both systems
### Information
- Skaffold version: 1.20
- Operating system: Windows10 / Centos 7
- Installed via: binary
|
1.0
|
[BUG] Relative paths inside of deploy.helm.releases.valuesFiles are unusable for multiplatform Win/Lin usage - Looks like a bug in relative path inside of `deploy.helm.releases.valuesFiles` on different OS (Linux/Windows)
### Actual behavior
Root inside of `app` directory.
```bash
app
├── skaffold.yaml
├── resources
│ ├── helm
│ │ └── values.yaml
│ └── skaffold
│ ├── build
│ │ └── skaffold.yaml
```
Values `profiles.deploy.helm.releases.valuesFiles` inside of `app/resources/skaffold/build/skaffold.yaml`
Behavior:
```bash
# windows
profiles.deploy.helm.releases.valuesFiles:
- resources/helm/values.yaml # <---- Error: open C:\test\app\resources\skaffold\deploy\resources\helm\values.yaml: The system cannot find the path specified. exiting dev mode because first deploy failed: install: exit status 1
- ../../helm/values.yaml # <---- takes from root of the project, works well
```
```bash
# linux
profiles.deploy.helm.releases.valuesFiles:
- resources/helm/values.yaml # <---- takes from root of the project, works well
- ../../helm/values.yaml # <---- Error: open ./../../helm/values.yaml: no such file or directory
```
JFYI: `profiles.deploy.helm.releases.chartPath: ../../helm` works well for both systems
### Information
- Skaffold version: 1.20
- Operating system: Windows10 / Centos 7
- Installed via: binary
|
non_process
|
relative paths inside of deploy helm releases valuesfiles are unusable for multiplatform win lin usage looks like a bug in relative path inside of deploy helm releases valuesfiles on different os linux windows actual behavior root inside of app directory bash app ├── skaffold yaml ├── resources │ ├── helm │ │ └── values yaml │ └── skaffold │ ├── build │ │ └── skaffold yaml values profiles deploy helm releases valuesfiles inside of app resources skaffold build skaffold yaml behavior bash windows profiles deploy helm releases valuesfiles resources helm values yaml error open c test app resources skaffold deploy resources helm values yaml the system cannot find the path specified exiting dev mode because first deploy failed install exit status helm values yaml takes from root of the project works well bash linux profiles deploy helm releases valuesfiles resources helm values yaml takes from root of the project works well helm values yaml error open helm values yaml no such file or directory jfyi profiles deploy helm releases chartpath helm works well for both systems information skaffold version operating system centos installed via binary
| 0
|
2,792
| 5,723,389,672
|
IssuesEvent
|
2017-04-20 12:07:28
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
[BUG] Search and pagination not working together
|
bug inprocess
|
When I enable search without pagination the table works as expected. However the moment I add pagination and I key in a search, it will show 'There is no data to display'. After this point it will never go back to show all results, even if the search input is empty.
Possibly related to: https://github.com/AllenFang/react-bootstrap-table/issues/125
|
1.0
|
[BUG] Search and pagination not working together - When I enable search without pagination the table works as expected. However the moment I add pagination and I key in a search, it will show 'There is no data to display'. After this point it will never go back to show all results, even if the search input is empty.
Possibly related to: https://github.com/AllenFang/react-bootstrap-table/issues/125
|
process
|
search and pagination not working together when i enable search without pagination the table works as expected however the moment i add pagination and i key in a search it will show there is no data to display after this point it will never go back to show all results even if the search input is empty possibly related to
| 1
|
390,450
| 26,862,865,664
|
IssuesEvent
|
2023-02-03 20:08:44
|
Apres-Ski/.github
|
https://api.github.com/repos/Apres-Ski/.github
|
closed
|
Create Wireframes
|
documentation setup
|
To-Do:
- [x] Create Wireframes in Excalidraw
- [x] Pin link in # apresski-capstone
|
1.0
|
Create Wireframes - To-Do:
- [x] Create Wireframes in Excalidraw
- [x] Pin link in # apresski-capstone
|
non_process
|
create wireframes to do create wireframes in excalidraw pin link in apresski capstone
| 0
|
189,371
| 15,186,898,840
|
IssuesEvent
|
2021-02-15 13:03:40
|
arturo-lang/arturo
|
https://api.github.com/repos/arturo-lang/arturo
|
closed
|
[Sets\difference] add example for documentation
|
documentation easy library todo
|
[Sets\difference] add example for documentation
https://github.com/arturo-lang/arturo/blob/de6b44acd21e2d88248ccc42e565c2fabc5637a5/src/library/Sets.nim#L44
```text
# Arturo
# Programming Language + Bytecode VM compiler
# (c) 2019-2021 Yanis Zafirópulos
#
# @file: library/Sets.nim
######################################################
#=======================================
# Pragmas
#=======================================
{.used.}
#=======================================
# Libraries
#=======================================
import sequtils, std/sets
import vm/[common, globals, stack, value]
#=======================================
# Methods
#=======================================
proc defineSymbols*() =
when defined(VERBOSE):
echo "- Importing: Sets"
builtin "difference",
alias = unaliased,
rule = PrefixPrecedence,
description = "return the difference of given sets",
args = {
"setA" : {Block,Literal},
"setB" : {Block}
},
attrs = {
"symmetric" : ({Boolean},"get the symmetric difference")
},
returns = {Block,Nothing},
# TODO(Sets\difference) add example for documentation
# labels: library,documentation,easy
example = """
""":
##########################################################
if (popAttr("symmetric")!=VNULL):
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(symmetricDifference(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(symmetricDifference(toHashSet(x.a), toHashSet(y.a)))))
else:
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(difference(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(difference(toHashSet(x.a), toHashSet(y.a)))))
builtin "intersection",
alias = unaliased,
rule = PrefixPrecedence,
description = "return the intersection of given sets",
args = {
"setA" : {Block,Literal},
"setB" : {Block}
},
attrs = NoAttrs,
returns = {Block,Nothing},
# TODO(Sets\intersection) add example for documentation
# labels: library,documentation,easy
example = """
""":
##########################################################
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(intersection(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(intersection(toHashSet(x.a), toHashSet(y.a)))))
builtin "union",
alias = unaliased,
rule = PrefixPrecedence,
description = "return the union of given sets",
args = {
"setA" : {Block,Literal},
"setB" : {Block}
},
attrs = NoAttrs,
returns = {Block,Nothing},
# TODO(Sets\union) add example for documentation
# labels: library,documentation,easy
example = """
""":
##########################################################
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(union(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(union(toHashSet(x.a), toHashSet(y.a)))))
#=======================================
# Add Library
#=======================================
Libraries.add(defineSymbols)
No newline at end of file
ndex 01a4d706..fe6a7c5e 100644
++ b/src/vm/vm.nim
```
c9252c7c0aa2367d63db990f8115e8cf8ffff322
|
1.0
|
[Sets\difference] add example for documentation - [Sets\difference] add example for documentation
https://github.com/arturo-lang/arturo/blob/de6b44acd21e2d88248ccc42e565c2fabc5637a5/src/library/Sets.nim#L44
```text
# Arturo
# Programming Language + Bytecode VM compiler
# (c) 2019-2021 Yanis Zafirópulos
#
# @file: library/Sets.nim
######################################################
#=======================================
# Pragmas
#=======================================
{.used.}
#=======================================
# Libraries
#=======================================
import sequtils, std/sets
import vm/[common, globals, stack, value]
#=======================================
# Methods
#=======================================
proc defineSymbols*() =
when defined(VERBOSE):
echo "- Importing: Sets"
builtin "difference",
alias = unaliased,
rule = PrefixPrecedence,
description = "return the difference of given sets",
args = {
"setA" : {Block,Literal},
"setB" : {Block}
},
attrs = {
"symmetric" : ({Boolean},"get the symmetric difference")
},
returns = {Block,Nothing},
# TODO(Sets\difference) add example for documentation
# labels: library,documentation,easy
example = """
""":
##########################################################
if (popAttr("symmetric")!=VNULL):
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(symmetricDifference(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(symmetricDifference(toHashSet(x.a), toHashSet(y.a)))))
else:
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(difference(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(difference(toHashSet(x.a), toHashSet(y.a)))))
builtin "intersection",
alias = unaliased,
rule = PrefixPrecedence,
description = "return the intersection of given sets",
args = {
"setA" : {Block,Literal},
"setB" : {Block}
},
attrs = NoAttrs,
returns = {Block,Nothing},
# TODO(Sets\intersection) add example for documentation
# labels: library,documentation,easy
example = """
""":
##########################################################
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(intersection(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(intersection(toHashSet(x.a), toHashSet(y.a)))))
builtin "union",
alias = unaliased,
rule = PrefixPrecedence,
description = "return the union of given sets",
args = {
"setA" : {Block,Literal},
"setB" : {Block}
},
attrs = NoAttrs,
returns = {Block,Nothing},
# TODO(Sets\union) add example for documentation
# labels: library,documentation,easy
example = """
""":
##########################################################
if x.kind==Literal:
Syms[x.s] = newBlock(toSeq(union(toHashSet(Syms[x.s].a), toHashSet(y.a))))
else:
stack.push(newBlock(toSeq(union(toHashSet(x.a), toHashSet(y.a)))))
#=======================================
# Add Library
#=======================================
Libraries.add(defineSymbols)
No newline at end of file
ndex 01a4d706..fe6a7c5e 100644
++ b/src/vm/vm.nim
```
c9252c7c0aa2367d63db990f8115e8cf8ffff322
|
non_process
|
add example for documentation add example for documentation text arturo programming language bytecode vm compiler c yanis zafirópulos file library sets nim pragmas used libraries import sequtils std sets import vm methods proc definesymbols when defined verbose echo importing sets builtin difference alias unaliased rule prefixprecedence description return the difference of given sets args seta block literal setb block attrs symmetric boolean get the symmetric difference returns block nothing todo sets difference add example for documentation labels library documentation easy example if popattr symmetric vnull if x kind literal syms newblock toseq symmetricdifference tohashset syms a tohashset y a else stack push newblock toseq symmetricdifference tohashset x a tohashset y a else if x kind literal syms newblock toseq difference tohashset syms a tohashset y a else stack push newblock toseq difference tohashset x a tohashset y a builtin intersection alias unaliased rule prefixprecedence description return the intersection of given sets args seta block literal setb block attrs noattrs returns block nothing todo sets intersection add example for documentation labels library documentation easy example if x kind literal syms newblock toseq intersection tohashset syms a tohashset y a else stack push newblock toseq intersection tohashset x a tohashset y a builtin union alias unaliased rule prefixprecedence description return the union of given sets args seta block literal setb block attrs noattrs returns block nothing todo sets union add example for documentation labels library documentation easy example if x kind literal syms newblock toseq union tohashset syms a tohashset y a else stack push newblock toseq union tohashset x a tohashset y a add library libraries add definesymbols no newline at end of file ndex b src vm vm nim
| 0
|
5,263
| 8,057,336,550
|
IssuesEvent
|
2018-08-02 15:07:57
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Bigtable: new cluster systests flake on CI
|
api: bigtable flaky testing type: process
|
See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7511
```python
___________________ TestInstanceAdminAPI.test_create_cluster ___________________
self = <tests.system.TestInstanceAdminAPI testMethod=test_create_cluster>
def test_create_cluster(self):
from google.cloud.bigtable.enums import StorageType
from google.cloud.bigtable.enums import Cluster
ALT_CLUSTER_ID = INSTANCE_ID+'-cluster-2'
ALT_LOCATION_ID = 'us-central1-f'
ALT_SERVE_NODES = 4
cluster_2 = Config.INSTANCE.cluster(ALT_CLUSTER_ID,
location_id=ALT_LOCATION_ID,
serve_nodes=ALT_SERVE_NODES,
default_storage_type=(
StorageType.SSD))
> operation = cluster_2.create()
tests/system.py:478:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigtable/cluster.py:224: in create
self._instance.name, self.cluster_id, cluster_pb)
google/cloud/bigtable_admin_v2/gapic/bigtable_instance_admin_client.py:798: in create_cluster
request, retry=retry, timeout=timeout, metadata=metadata)
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/gapic_v1/method.py:139: in __call__
return wrapped_func(*args, **kwargs)
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/retry.py:260: in retry_wrapped_func
on_error=on_error,
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/retry.py:177: in retry_target
return target()
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/timeout.py:206: in func_with_timeout
return func(*args, **kwargs)
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/grpc_helpers.py:61: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = InvalidArgument("Error in field 'cluster_id' : Invalid id for collection clust...gth should be between [6,30], but found 31 'g-c-p-7511-1533072932-cluster-2'",)
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMEN...een [6,30], but found 31 'g-c-p-7511-1533072932-cluster-2'","grpc_status":3}"
>
def raise_from(value, from_value):
> raise value
E InvalidArgument: 400 Error in field 'cluster_id' : Invalid id for collection clusters : Length should be between [6,30], but found 31 'g-c-p-7511-1533072932-cluster-2'
../.nox/sys-2-7/lib/python2.7/site-packages/six.py:737: InvalidArgument
```
/cc @sduskis, @AVaksman `test_create_cluster` was introduced in #6773.
|
1.0
|
Bigtable: new cluster systests flake on CI - See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7511
```python
___________________ TestInstanceAdminAPI.test_create_cluster ___________________
self = <tests.system.TestInstanceAdminAPI testMethod=test_create_cluster>
def test_create_cluster(self):
from google.cloud.bigtable.enums import StorageType
from google.cloud.bigtable.enums import Cluster
ALT_CLUSTER_ID = INSTANCE_ID+'-cluster-2'
ALT_LOCATION_ID = 'us-central1-f'
ALT_SERVE_NODES = 4
cluster_2 = Config.INSTANCE.cluster(ALT_CLUSTER_ID,
location_id=ALT_LOCATION_ID,
serve_nodes=ALT_SERVE_NODES,
default_storage_type=(
StorageType.SSD))
> operation = cluster_2.create()
tests/system.py:478:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigtable/cluster.py:224: in create
self._instance.name, self.cluster_id, cluster_pb)
google/cloud/bigtable_admin_v2/gapic/bigtable_instance_admin_client.py:798: in create_cluster
request, retry=retry, timeout=timeout, metadata=metadata)
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/gapic_v1/method.py:139: in __call__
return wrapped_func(*args, **kwargs)
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/retry.py:260: in retry_wrapped_func
on_error=on_error,
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/retry.py:177: in retry_target
return target()
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/timeout.py:206: in func_with_timeout
return func(*args, **kwargs)
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/grpc_helpers.py:61: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = InvalidArgument("Error in field 'cluster_id' : Invalid id for collection clust...gth should be between [6,30], but found 31 'g-c-p-7511-1533072932-cluster-2'",)
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMEN...een [6,30], but found 31 'g-c-p-7511-1533072932-cluster-2'","grpc_status":3}"
>
def raise_from(value, from_value):
> raise value
E InvalidArgument: 400 Error in field 'cluster_id' : Invalid id for collection clusters : Length should be between [6,30], but found 31 'g-c-p-7511-1533072932-cluster-2'
../.nox/sys-2-7/lib/python2.7/site-packages/six.py:737: InvalidArgument
```
/cc @sduskis, @AVaksman `test_create_cluster` was introduced in #6773.
|
process
|
bigtable new cluster systests flake on ci see python testinstanceadminapi test create cluster self def test create cluster self from google cloud bigtable enums import storagetype from google cloud bigtable enums import cluster alt cluster id instance id cluster alt location id us f alt serve nodes cluster config instance cluster alt cluster id location id alt location id serve nodes alt serve nodes default storage type storagetype ssd operation cluster create tests system py google cloud bigtable cluster py in create self instance name self cluster id cluster pb google cloud bigtable admin gapic bigtable instance admin client py in create cluster request retry retry timeout timeout metadata metadata nox sys lib site packages google api core gapic method py in call return wrapped func args kwargs nox sys lib site packages google api core retry py in retry wrapped func on error on error nox sys lib site packages google api core retry py in retry target return target nox sys lib site packages google api core timeout py in func with timeout return func args kwargs nox sys lib site packages google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value invalidargument error in field cluster id invalid id for collection clust gth should be between but found g c p cluster from value rendezvous of rpc that terminated with status statuscode invalid argumen een but found g c p cluster grpc status def raise from value from value raise value e invalidargument error in field cluster id invalid id for collection clusters length should be between but found g c p cluster nox sys lib site packages six py invalidargument cc sduskis avaksman test create cluster was introduced in
| 1
|
83,263
| 15,700,039,395
|
IssuesEvent
|
2021-03-26 09:19:45
|
renfei/www.renfei.net
|
https://api.github.com/repos/renfei/www.renfei.net
|
closed
|
CVE-2021-21349 (Medium) detected in xstream-1.4.15.jar - autoclosed
|
security vulnerability
|
## CVE-2021-21349 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.15.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: www.renfei.net/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.15/xstream-1.4.15.jar</p>
<p>
Dependency Hierarchy:
- sdk-1.0.12.jar (Root Library)
- :x: **xstream-1.4.15.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/renfei/www.renfei.net/commit/bab8d0a8c2898e14202e016ab90eae715d0bd4e5">bab8d0a8c2898e14202e016ab90eae715d0bd4e5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21349>CVE-2021-21349</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-f6hm-88x3-mfjv">https://github.com/x-stream/xstream/security/advisories/GHSA-f6hm-88x3-mfjv</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-21349 (Medium) detected in xstream-1.4.15.jar - autoclosed - ## CVE-2021-21349 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.15.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: www.renfei.net/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.15/xstream-1.4.15.jar</p>
<p>
Dependency Hierarchy:
- sdk-1.0.12.jar (Root Library)
- :x: **xstream-1.4.15.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/renfei/www.renfei.net/commit/bab8d0a8c2898e14202e016ab90eae715d0bd4e5">bab8d0a8c2898e14202e016ab90eae715d0bd4e5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21349>CVE-2021-21349</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-f6hm-88x3-mfjv">https://github.com/x-stream/xstream/security/advisories/GHSA-f6hm-88x3-mfjv</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in xstream jar autoclosed cve medium severity vulnerability vulnerable library xstream jar library home page a href path to dependency file path to vulnerable library home wss scanner repository com thoughtworks xstream xstream xstream jar dependency hierarchy sdk jar root library x xstream jar vulnerable library found in head commit a href found in base branch master vulnerability details xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability which may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the security framework you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope changed impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream step up your open source security game with whitesource
| 0
|
658
| 3,129,417,292
|
IssuesEvent
|
2015-09-09 01:03:22
|
bioboxes/rfc
|
https://api.github.com/repos/bioboxes/rfc
|
closed
|
Bioboxes v1.0 should be an adopted community standard.
|
standards-process
|
* Create core team #44
* Website with documentation and tutorials #16
* Set of editor/reviewer instructions #35, #46
* Published, peer-reviewed article #45
* Major publishing groups agree to biobox standard #47
Software has proliferated in bioinformatics and unfortunately so have
the problems with using it: missing or unobtainable code, difficult to
install dependencies, secret usage recipes that are irreproducible,
all with terrible user experiences. We believe a community standard
for containers has the opportunity to solve these problems and thereby
increase the standard of scientific software as a whole.
We would like to ask anyone interested to help us make the version 1.0
of bioboxes. We want to fix these problems that exist in the field,
and make bioinformatics software readily available for everyone.
A standard for bioinformatics containers will move us towards a more
open environment for anyone to innovate, collaborate and share their
scientific results. By working in a truly standardized and
interoperable ecosystem, bioinformatics algorithmic development will
become more valuable and reach a wider userbase.
|
1.0
|
Bioboxes v1.0 should be an adopted community standard. - * Create core team #44
* Website with documentation and tutorials #16
* Set of editor/reviewer instructions #35, #46
* Published, peer-reviewed article #45
* Major publishing groups agree to biobox standard #47
Software has proliferated in bioinformatics and unfortunately so have
the problems with using it: missing or unobtainable code, difficult to
install dependencies, secret usage recipes that are irreproducible,
all with terrible user experiences. We believe a community standard
for containers has the opportunity to solve these problems and thereby
increase the standard of scientific software as a whole.
We would like to ask anyone interested to help us make the version 1.0
of bioboxes. We want to fix these problems that exist in the field,
and make bioinformatics software readily available for everyone.
A standard for bioinformatics containers will move us towards a more
open environment for anyone to innovate, collaborate and share their
scientific results. By working in a truly standardized and
interoperable ecosystem, bioinformatics algorithmic development will
become more valuable and reach a wider userbase.
|
process
|
bioboxes should be an adopted community standard create core team website with documentation and tutorials set of editor reviewer instructions published peer reviewed article major publishing groups agree to biobox standard software has proliferated in bioinformatics and unfortunately so have the problems with using it missing or unobtainable code difficult to install dependencies secret usage recipes that are irreproducible all with terrible user experiences we believe a community standard for containers has the opportunity to solve these problems and thereby increase the standard of scientific software as a whole we would like to ask anyone interested to help us make the version of bioboxes we want to fix these problems that exist in the field and make bioinformatics software readily available for everyone a standard for bioinformatics containers will move us towards a more open environment for anyone to innovate collaborate and share their scientific results by working in a truly standardized and interoperable ecosystem bioinformatics algorithmic development will become more valuable and reach a wider userbase
| 1
|
14,834
| 18,171,392,117
|
IssuesEvent
|
2021-09-27 20:28:10
|
googleapis/python-bigtable
|
https://api.github.com/repos/googleapis/python-bigtable
|
opened
|
tests: 'system_emulated' nox session fails in clean environment
|
type: process
|
/cc @crwilcox
```bash
$ git remote -v
origin git@github.com:googleapis/python-api-core (fetch)
origin git@github.com:googleapis/python-api-core (push)
$ git log -1
commit 3b0912a08f115f352bac65167912400e55ef857e (HEAD -> main, origin/main)
...
$ env | grep GOOGLE || echo Not Set
Not Set
$ env | grep GCLOUD || echo Not Set
Not Set
$ env | grep PROJECT || echo Not Set
Not Set
$ nox -s system_emulated -- -x
nox > Running session system_emulated
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/system_emulated
Google Cloud SDK 358.0.0
beta 2021.09.17
bigtable
bq 2.0.71
cloud-datastore-emulator 2.1.0
cloud-firestore-emulator 1.13.0
cloud-spanner-emulator 1.2.0
core 2021.09.17
gsutil 4.68
All components are up to date.
nox > python -m pip install --pre grpcio
Executing: /home/tseaver/projects/agendaless/Google/google-cloud-sdk/platform/bigtable-emulator/cbtemulator --host=localhost --port=8789
[bigtable] Cloud Bigtable emulator running on 127.0.0.1:8789
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-bigtable/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-bigtable/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system -x
E
==================================== ERRORS ====================================
_____________ ERROR at setup of test_table_read_rows_filter_millis _____________
@pytest.fixture(scope="session")
def admin_client():
> return Client(admin=True)
tests/system/conftest.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigtable/client.py:184: in __init__
super(Client, self).__init__(
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/client.py:316: in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/client.py:264: in __init__
project = self._determine_default(project)
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/client.py:283: in _determine_default
return _determine_default_project(project)
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/_helpers.py:152: in _determine_default_project
_, project = google.auth.default()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
scopes = None, request = None, quota_project_id = None, default_scopes = None
def default(scopes=None, request=None, quota_project_id=None, default_scopes=None):
"""Gets the default credentials for the current environment.
`Application Default Credentials`_ provides an easy way to obtain
credentials to call Google APIs for server-to-server or local applications.
This function acquires credentials from the environment in the following
order:
1. If the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` is set
to the path of a valid service account JSON private key file, then it is
loaded and returned. The project ID returned is the project ID defined
in the service account file if available (some older files do not
contain project ID information).
If the environment variable is set to the path of a valid external
account JSON configuration file (workload identity federation), then the
configuration file is used to determine and retrieve the external
credentials from the current environment (AWS, Azure, etc).
These will then be exchanged for Google access tokens via the Google STS
endpoint.
The project ID returned in this case is the one corresponding to the
underlying workload identity pool resource if determinable.
2. If the `Google Cloud SDK`_ is installed and has application default
credentials set they are loaded and returned.
To enable application default credentials with the Cloud SDK run::
gcloud auth application-default login
If the Cloud SDK has an active project, the project ID is returned. The
active project can be set using::
gcloud config set project
3. If the application is running in the `App Engine standard environment`_
(first generation) then the credentials and project ID from the
`App Identity Service`_ are used.
4. If the application is running in `Compute Engine`_ or `Cloud Run`_ or
the `App Engine flexible environment`_ or the `App Engine standard
environment`_ (second generation) then the credentials and project ID
are obtained from the `Metadata Service`_.
5. If no credentials are found,
:class:`~google.auth.exceptions.DefaultCredentialsError` will be raised.
.. _Application Default Credentials: https://developers.google.com\
/identity/protocols/application-default-credentials
.. _Google Cloud SDK: https://cloud.google.com/sdk
.. _App Engine standard environment: https://cloud.google.com/appengine
.. _App Identity Service: https://cloud.google.com/appengine/docs/python\
/appidentity/
.. _Compute Engine: https://cloud.google.com/compute
.. _App Engine flexible environment: https://cloud.google.com\
/appengine/flexible
.. _Metadata Service: https://cloud.google.com/compute/docs\
/storing-retrieving-metadata
.. _Cloud Run: https://cloud.google.com/run
Example::
import google.auth
credentials, project_id = google.auth.default()
Args:
scopes (Sequence[str]): The list of scopes for the credentials. If
specified, the credentials will automatically be scoped if
necessary.
request (Optional[google.auth.transport.Request]): An object used to make
HTTP requests. This is used to either detect whether the application
is running on Compute Engine or to determine the associated project
ID for a workload identity pool resource (external account
credentials). If not specified, then it will either use the standard
library http client to make requests for Compute Engine credentials
or a google.auth.transport.requests.Request client for external
account credentials.
quota_project_id (Optional[str]): The project ID used for
quota and billing.
default_scopes (Optional[Sequence[str]]): Default scopes passed by a
Google client library. Use 'scopes' for user-defined scopes.
Returns:
Tuple[~google.auth.credentials.Credentials, Optional[str]]:
the current environment's credentials and project ID. Project ID
may be None, which indicates that the Project ID could not be
ascertained from the environment.
Raises:
~google.auth.exceptions.DefaultCredentialsError:
If no credentials were found, or if the credentials found were
invalid.
"""
from google.auth.credentials import with_scopes_if_required
explicit_project_id = os.environ.get(
environment_vars.PROJECT, os.environ.get(environment_vars.LEGACY_PROJECT)
)
checkers = (
# Avoid passing scopes here to prevent passing scopes to user credentials.
# with_scopes_if_required() below will ensure scopes/default scopes are
# safely set on the returned credentials since requires_scopes will
# guard against setting scopes on user credentials.
lambda: _get_explicit_environ_credentials(quota_project_id=quota_project_id),
lambda: _get_gcloud_sdk_credentials(quota_project_id=quota_project_id),
_get_gae_credentials,
lambda: _get_gce_credentials(request),
)
for checker in checkers:
credentials, project_id = checker()
if credentials is not None:
credentials = with_scopes_if_required(
credentials, scopes, default_scopes=default_scopes
)
# For external account credentials, scopes are required to determine
# the project ID. Try to get the project ID again if not yet
# determined.
if not project_id and callable(
getattr(credentials, "get_project_id", None)
):
if request is None:
request = google.auth.transport.requests.Request()
project_id = credentials.get_project_id(request=request)
if quota_project_id:
credentials = credentials.with_quota_project(quota_project_id)
effective_project_id = explicit_project_id or project_id
if not effective_project_id:
_LOGGER.warning(
"No project ID could be determined. Consider running "
"`gcloud config set project` or setting the %s "
"environment variable",
environment_vars.PROJECT,
)
return credentials, effective_project_id
> raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
E google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
.nox/system_emulated/lib/python3.8/site-packages/google/auth/_default.py:488: DefaultCredentialsError
------------------------------ Captured log setup ------------------------------
WARNING google.auth.compute_engine._metadata:_metadata.py:97 Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: timed out
WARNING google.auth.compute_engine._metadata:_metadata.py:97 Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: [Errno 113] No route to host
WARNING google.auth.compute_engine._metadata:_metadata.py:97 Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: timed out
WARNING google.auth._default:_default.py:286 Authentication failed using Compute Engine authentication due to unavailable metadata server.
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-bigtable/system_3.8_sponge_log.xml -
=========================== short test summary info ============================
ERROR tests/system/test_data_api.py::test_table_read_rows_filter_millis - goo...
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
1 error in 6.18s
nox > Command py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system -x failed with exit code 1
nox > Session system_emulated failed.
```
|
1.0
|
tests: 'system_emulated' nox session fails in clean environment - /cc @crwilcox
```bash
$ git remote -v
origin git@github.com:googleapis/python-api-core (fetch)
origin git@github.com:googleapis/python-api-core (push)
$ git log -1
commit 3b0912a08f115f352bac65167912400e55ef857e (HEAD -> main, origin/main)
...
$ env | grep GOOGLE || echo Not Set
Not Set
$ env | grep GCLOUD || echo Not Set
Not Set
$ env | grep PROJECT || echo Not Set
Not Set
$ nox -s system_emulated -- -x
nox > Running session system_emulated
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/system_emulated
Google Cloud SDK 358.0.0
beta 2021.09.17
bigtable
bq 2.0.71
cloud-datastore-emulator 2.1.0
cloud-firestore-emulator 1.13.0
cloud-spanner-emulator 1.2.0
core 2021.09.17
gsutil 4.68
All components are up to date.
nox > python -m pip install --pre grpcio
Executing: /home/tseaver/projects/agendaless/Google/google-cloud-sdk/platform/bigtable-emulator/cbtemulator --host=localhost --port=8789
[bigtable] Cloud Bigtable emulator running on 127.0.0.1:8789
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-bigtable/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-bigtable/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system -x
E
==================================== ERRORS ====================================
_____________ ERROR at setup of test_table_read_rows_filter_millis _____________
@pytest.fixture(scope="session")
def admin_client():
> return Client(admin=True)
tests/system/conftest.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigtable/client.py:184: in __init__
super(Client, self).__init__(
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/client.py:316: in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/client.py:264: in __init__
project = self._determine_default(project)
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/client.py:283: in _determine_default
return _determine_default_project(project)
.nox/system_emulated/lib/python3.8/site-packages/google/cloud/_helpers.py:152: in _determine_default_project
_, project = google.auth.default()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
scopes = None, request = None, quota_project_id = None, default_scopes = None
def default(scopes=None, request=None, quota_project_id=None, default_scopes=None):
"""Gets the default credentials for the current environment.
`Application Default Credentials`_ provides an easy way to obtain
credentials to call Google APIs for server-to-server or local applications.
This function acquires credentials from the environment in the following
order:
1. If the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` is set
to the path of a valid service account JSON private key file, then it is
loaded and returned. The project ID returned is the project ID defined
in the service account file if available (some older files do not
contain project ID information).
If the environment variable is set to the path of a valid external
account JSON configuration file (workload identity federation), then the
configuration file is used to determine and retrieve the external
credentials from the current environment (AWS, Azure, etc).
These will then be exchanged for Google access tokens via the Google STS
endpoint.
The project ID returned in this case is the one corresponding to the
underlying workload identity pool resource if determinable.
2. If the `Google Cloud SDK`_ is installed and has application default
credentials set they are loaded and returned.
To enable application default credentials with the Cloud SDK run::
gcloud auth application-default login
If the Cloud SDK has an active project, the project ID is returned. The
active project can be set using::
gcloud config set project
3. If the application is running in the `App Engine standard environment`_
(first generation) then the credentials and project ID from the
`App Identity Service`_ are used.
4. If the application is running in `Compute Engine`_ or `Cloud Run`_ or
the `App Engine flexible environment`_ or the `App Engine standard
environment`_ (second generation) then the credentials and project ID
are obtained from the `Metadata Service`_.
5. If no credentials are found,
:class:`~google.auth.exceptions.DefaultCredentialsError` will be raised.
.. _Application Default Credentials: https://developers.google.com\
/identity/protocols/application-default-credentials
.. _Google Cloud SDK: https://cloud.google.com/sdk
.. _App Engine standard environment: https://cloud.google.com/appengine
.. _App Identity Service: https://cloud.google.com/appengine/docs/python\
/appidentity/
.. _Compute Engine: https://cloud.google.com/compute
.. _App Engine flexible environment: https://cloud.google.com\
/appengine/flexible
.. _Metadata Service: https://cloud.google.com/compute/docs\
/storing-retrieving-metadata
.. _Cloud Run: https://cloud.google.com/run
Example::
import google.auth
credentials, project_id = google.auth.default()
Args:
scopes (Sequence[str]): The list of scopes for the credentials. If
specified, the credentials will automatically be scoped if
necessary.
request (Optional[google.auth.transport.Request]): An object used to make
HTTP requests. This is used to either detect whether the application
is running on Compute Engine or to determine the associated project
ID for a workload identity pool resource (external account
credentials). If not specified, then it will either use the standard
library http client to make requests for Compute Engine credentials
or a google.auth.transport.requests.Request client for external
account credentials.
quota_project_id (Optional[str]): The project ID used for
quota and billing.
default_scopes (Optional[Sequence[str]]): Default scopes passed by a
Google client library. Use 'scopes' for user-defined scopes.
Returns:
Tuple[~google.auth.credentials.Credentials, Optional[str]]:
the current environment's credentials and project ID. Project ID
may be None, which indicates that the Project ID could not be
ascertained from the environment.
Raises:
~google.auth.exceptions.DefaultCredentialsError:
If no credentials were found, or if the credentials found were
invalid.
"""
from google.auth.credentials import with_scopes_if_required
explicit_project_id = os.environ.get(
environment_vars.PROJECT, os.environ.get(environment_vars.LEGACY_PROJECT)
)
checkers = (
# Avoid passing scopes here to prevent passing scopes to user credentials.
# with_scopes_if_required() below will ensure scopes/default scopes are
# safely set on the returned credentials since requires_scopes will
# guard against setting scopes on user credentials.
lambda: _get_explicit_environ_credentials(quota_project_id=quota_project_id),
lambda: _get_gcloud_sdk_credentials(quota_project_id=quota_project_id),
_get_gae_credentials,
lambda: _get_gce_credentials(request),
)
for checker in checkers:
credentials, project_id = checker()
if credentials is not None:
credentials = with_scopes_if_required(
credentials, scopes, default_scopes=default_scopes
)
# For external account credentials, scopes are required to determine
# the project ID. Try to get the project ID again if not yet
# determined.
if not project_id and callable(
getattr(credentials, "get_project_id", None)
):
if request is None:
request = google.auth.transport.requests.Request()
project_id = credentials.get_project_id(request=request)
if quota_project_id:
credentials = credentials.with_quota_project(quota_project_id)
effective_project_id = explicit_project_id or project_id
if not effective_project_id:
_LOGGER.warning(
"No project ID could be determined. Consider running "
"`gcloud config set project` or setting the %s "
"environment variable",
environment_vars.PROJECT,
)
return credentials, effective_project_id
> raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
E google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
.nox/system_emulated/lib/python3.8/site-packages/google/auth/_default.py:488: DefaultCredentialsError
------------------------------ Captured log setup ------------------------------
WARNING google.auth.compute_engine._metadata:_metadata.py:97 Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: timed out
WARNING google.auth.compute_engine._metadata:_metadata.py:97 Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: [Errno 113] No route to host
WARNING google.auth.compute_engine._metadata:_metadata.py:97 Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: timed out
WARNING google.auth._default:_default.py:286 Authentication failed using Compute Engine authentication due to unavailable metadata server.
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-bigtable/system_3.8_sponge_log.xml -
=========================== short test summary info ============================
ERROR tests/system/test_data_api.py::test_table_read_rows_filter_millis - goo...
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
1 error in 6.18s
nox > Command py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system -x failed with exit code 1
nox > Session system_emulated failed.
```
|
process
|
tests system emulated nox session fails in clean environment cc crwilcox bash git remote v origin git github com googleapis python api core fetch origin git github com googleapis python api core push git log commit head main origin main env grep google echo not set not set env grep gcloud echo not set not set env grep project echo not set not set nox s system emulated x nox running session system emulated nox creating virtual environment virtualenv using in nox system emulated google cloud sdk beta bigtable bq cloud datastore emulator cloud firestore emulator cloud spanner emulator core gsutil all components are up to date nox python m pip install pre grpcio executing home tseaver projects agendaless google google cloud sdk platform bigtable emulator cbtemulator host localhost port cloud bigtable emulator running on nox python m pip install mock pytest google cloud testutils c home tseaver projects agendaless google src python bigtable testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python bigtable testing constraints txt nox py test quiet junitxml system sponge log xml tests system x e errors error at setup of test table read rows filter millis pytest fixture scope session def admin client return client admin true tests system conftest py google cloud bigtable client py in init super client self init nox system emulated lib site packages google cloud client py in init clientprojectmixin init self project project credentials credentials nox system emulated lib site packages google cloud client py in init project self determine default project nox system emulated lib site packages google cloud client py in determine default return determine default project project nox system emulated lib site packages google cloud helpers py in determine default project project google auth default scopes none request none quota project id none default scopes none def default scopes none request none quota project id none default scopes none gets the default credentials for the current environment application default credentials provides an easy way to obtain credentials to call google apis for server to server or local applications this function acquires credentials from the environment in the following order if the environment variable google application credentials is set to the path of a valid service account json private key file then it is loaded and returned the project id returned is the project id defined in the service account file if available some older files do not contain project id information if the environment variable is set to the path of a valid external account json configuration file workload identity federation then the configuration file is used to determine and retrieve the external credentials from the current environment aws azure etc these will then be exchanged for google access tokens via the google sts endpoint the project id returned in this case is the one corresponding to the underlying workload identity pool resource if determinable if the google cloud sdk is installed and has application default credentials set they are loaded and returned to enable application default credentials with the cloud sdk run gcloud auth application default login if the cloud sdk has an active project the project id is returned the active project can be set using gcloud config set project if the application is running in the app engine standard environment first generation then the credentials and project id from the app identity service are used if the application is running in compute engine or cloud run or the app engine flexible environment or the app engine standard environment second generation then the credentials and project id are obtained from the metadata service if no credentials are found class google auth exceptions defaultcredentialserror will be raised application default credentials identity protocols application default credentials google cloud sdk app engine standard environment app identity service appidentity compute engine app engine flexible environment appengine flexible metadata service storing retrieving metadata cloud run example import google auth credentials project id google auth default args scopes sequence the list of scopes for the credentials if specified the credentials will automatically be scoped if necessary request optional an object used to make http requests this is used to either detect whether the application is running on compute engine or to determine the associated project id for a workload identity pool resource external account credentials if not specified then it will either use the standard library http client to make requests for compute engine credentials or a google auth transport requests request client for external account credentials quota project id optional the project id used for quota and billing default scopes optional default scopes passed by a google client library use scopes for user defined scopes returns tuple the current environment s credentials and project id project id may be none which indicates that the project id could not be ascertained from the environment raises google auth exceptions defaultcredentialserror if no credentials were found or if the credentials found were invalid from google auth credentials import with scopes if required explicit project id os environ get environment vars project os environ get environment vars legacy project checkers avoid passing scopes here to prevent passing scopes to user credentials with scopes if required below will ensure scopes default scopes are safely set on the returned credentials since requires scopes will guard against setting scopes on user credentials lambda get explicit environ credentials quota project id quota project id lambda get gcloud sdk credentials quota project id quota project id get gae credentials lambda get gce credentials request for checker in checkers credentials project id checker if credentials is not none credentials with scopes if required credentials scopes default scopes default scopes for external account credentials scopes are required to determine the project id try to get the project id again if not yet determined if not project id and callable getattr credentials get project id none if request is none request google auth transport requests request project id credentials get project id request request if quota project id credentials credentials with quota project quota project id effective project id explicit project id or project id if not effective project id logger warning no project id could be determined consider running gcloud config set project or setting the s environment variable environment vars project return credentials effective project id raise exceptions defaultcredentialserror help message e google auth exceptions defaultcredentialserror could not automatically determine credentials please set google application credentials or explicitly create credentials and re run the application for more information please see nox system emulated lib site packages google auth default py defaultcredentialserror captured log setup warning google auth compute engine metadata metadata py compute engine metadata server unavailable on attempt of reason timed out warning google auth compute engine metadata metadata py compute engine metadata server unavailable on attempt of reason no route to host warning google auth compute engine metadata metadata py compute engine metadata server unavailable on attempt of reason timed out warning google auth default default py authentication failed using compute engine authentication due to unavailable metadata server generated xml file home tseaver projects agendaless google src python bigtable system sponge log xml short test summary info error tests system test data api py test table read rows filter millis goo stopping after failures error in nox command py test quiet junitxml system sponge log xml tests system x failed with exit code nox session system emulated failed
| 1
|
31,036
| 4,680,355,188
|
IssuesEvent
|
2016-10-08 05:43:47
|
ultimatemember/ultimatemember
|
https://api.github.com/repos/ultimatemember/ultimatemember
|
closed
|
Bug in Multisite Webseites -> Edit -> Users
|
1.3.x bug Needs confirmation Needs Unit Tests
|
Hello,
we have found a bug weh you use UM in a Multisite. IF ypu in Network -> Webseits and Edit one site and go to Users Tab. Than You can’t add a user too the side with the Form under the users list.
The Problem is that ther in the hidden id filed stands a post id and noct a site id.
I have found out that the problem is in the UM_Query::get_roles() method. Ther you use a WP_Query with setup_postdata(). The setup_postdata() sets the globel $id to the post id. And the admin site uses the same gilbel $id for site id.
I think a simple bugfix can be the usage of get_posts() for that or usesing the results directly without setup_postdata().
See:
https://wordpress.org/support/topic/bug-in-multisite-webseites-edit-users/
|
1.0
|
Bug in Multisite Webseites -> Edit -> Users - Hello,
we have found a bug weh you use UM in a Multisite. IF ypu in Network -> Webseits and Edit one site and go to Users Tab. Than You can’t add a user too the side with the Form under the users list.
The Problem is that ther in the hidden id filed stands a post id and noct a site id.
I have found out that the problem is in the UM_Query::get_roles() method. Ther you use a WP_Query with setup_postdata(). The setup_postdata() sets the globel $id to the post id. And the admin site uses the same gilbel $id for site id.
I think a simple bugfix can be the usage of get_posts() for that or usesing the results directly without setup_postdata().
See:
https://wordpress.org/support/topic/bug-in-multisite-webseites-edit-users/
|
non_process
|
bug in multisite webseites edit users hello we have found a bug weh you use um in a multisite if ypu in network webseits and edit one site and go to users tab than you can’t add a user too the side with the form under the users list the problem is that ther in the hidden id filed stands a post id and noct a site id i have found out that the problem is in the um query get roles method ther you use a wp query with setup postdata the setup postdata sets the globel id to the post id and the admin site uses the same gilbel id for site id i think a simple bugfix can be the usage of get posts for that or usesing the results directly without setup postdata see
| 0
|
269,415
| 23,441,664,886
|
IssuesEvent
|
2022-08-15 15:26:38
|
ossf/scorecard-action
|
https://api.github.com/repos/ossf/scorecard-action
|
closed
|
Failing e2e tests - scorecard-bash on ossf-tests/scorecard-action-non-main-branch
|
e2e automated-tests
|
Matrix: {
"results_format": "sarif",
"publish_results": true,
"upload_result": true
}
Repo: https://github.com/ossf-tests/scorecard-action-non-main-branch/tree/dev
Run: https://github.com/ossf-tests/scorecard-action-non-main-branch/actions/runs/2858333157
Workflow name: scorecard-bash
Workflow file: https://github.com/ossf-tests/scorecard-action-non-main-branch/tree/main/.github/workflows/scorecards-bash.yml
Trigger: schedule
Branch: dev
|
1.0
|
Failing e2e tests - scorecard-bash on ossf-tests/scorecard-action-non-main-branch - Matrix: {
"results_format": "sarif",
"publish_results": true,
"upload_result": true
}
Repo: https://github.com/ossf-tests/scorecard-action-non-main-branch/tree/dev
Run: https://github.com/ossf-tests/scorecard-action-non-main-branch/actions/runs/2858333157
Workflow name: scorecard-bash
Workflow file: https://github.com/ossf-tests/scorecard-action-non-main-branch/tree/main/.github/workflows/scorecards-bash.yml
Trigger: schedule
Branch: dev
|
non_process
|
failing tests scorecard bash on ossf tests scorecard action non main branch matrix results format sarif publish results true upload result true repo run workflow name scorecard bash workflow file trigger schedule branch dev
| 0
|
7,516
| 10,595,576,570
|
IssuesEvent
|
2019-10-09 19:17:53
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
rename GO:0052067 negative regulation by symbiont of entry into host cell via phagocytosis to "phagocytosis avoidence or antiphagocytosis"
|
multi-species process
|
AND remove "entry into host" parentage
GO:0052067 negative regulation by symbiont of entry into host cell via phagocytosis
Defined
Any process in which an organism stops or prevents itself undergoing phagocytosis into a cell in the host organism.
and
GO:0052380 modulation by symbiont of entry into host via phagocytosis
This isn't "entry into host" as defined by GO
and should not have this phrase in the term name or parent.
GO:0044409 entry into host
Definition (GO:0044409)
Penetration by a symbiont into the body, tissues, or cells of a host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
This is more about the lifecycle process of entering the host to colonize it.
However the annotations to
GO:0052067 negative regulation by symbiont of entry into host cell via phagocytosis
are more about engulfment by macrophages in destruction of the pathogen.
(this also fits with the other parentages of this term which are about immune evasion)
|
1.0
|
rename GO:0052067 negative regulation by symbiont of entry into host cell via phagocytosis to "phagocytosis avoidence or antiphagocytosis" - AND remove "entry into host" parentage
GO:0052067 negative regulation by symbiont of entry into host cell via phagocytosis
Defined
Any process in which an organism stops or prevents itself undergoing phagocytosis into a cell in the host organism.
and
GO:0052380 modulation by symbiont of entry into host via phagocytosis
This isn't "entry into host" as defined by GO
and should not have this phrase in the term name or parent.
GO:0044409 entry into host
Definition (GO:0044409)
Penetration by a symbiont into the body, tissues, or cells of a host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
This is more about the lifecycle process of entering the host to colonize it.
However the annotations to
GO:0052067 negative regulation by symbiont of entry into host cell via phagocytosis
are more about engulfment by macrophages in destruction of the pathogen.
(this also fits with the other parentages of this term which are about immune evasion)
|
process
|
rename go negative regulation by symbiont of entry into host cell via phagocytosis to phagocytosis avoidence or antiphagocytosis and remove entry into host parentage go negative regulation by symbiont of entry into host cell via phagocytosis defined any process in which an organism stops or prevents itself undergoing phagocytosis into a cell in the host organism and go modulation by symbiont of entry into host via phagocytosis this isn t entry into host as defined by go and should not have this phrase in the term name or parent go entry into host definition go penetration by a symbiont into the body tissues or cells of a host organism the host is defined as the larger of the organisms involved in a symbiotic interaction this is more about the lifecycle process of entering the host to colonize it however the annotations to go negative regulation by symbiont of entry into host cell via phagocytosis are more about engulfment by macrophages in destruction of the pathogen this also fits with the other parentages of this term which are about immune evasion
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.