Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
145,759 | 11,706,879,597 | IssuesEvent | 2020-03-08 01:34:55 | snext1220/stext | https://api.github.com/repos/snext1220/stext | closed | シナリオ紹介ページに投票フォーム設置 | Testing enhancement | #159 でリクエストいただいた件の議論用Issueです。
議論の中で、対応の可否を決めていければと思います。是非ご意見をお願いいたします。
---
STextはシナリオへの反応が、プレイヤー側・作者側ともに分かり辛く、ゲームが遊べるサイトにもかかわらず、関係者以外の気配がないのも課題かなと思いました。
> リクエスト概要
「シナリオ紹介ページに投票フォーム設置」
> 用途
- 紹介ツイートする、RTするといった反応はハードルが高い。シナリオへの反応・応援のハードルを下げる目的
- プレイヤーさんに対し、「他のプレイヤーさんの存在」を明確にする
- 投票項目に「ストーリー」「キャラクター」など設定することで、シナリオ内でどの要素が好まれているのか分析する
> UIイメージ
- Twitterのような投票フォームをシナリオ紹介ページ下に設置(画像参照)

_Originally posted by @toki-sor1 in https://github.com/snext1220/stext/issues/159#issuecomment-579785491_ | 1.0 | シナリオ紹介ページに投票フォーム設置 - #159 でリクエストいただいた件の議論用Issueです。
議論の中で、対応の可否を決めていければと思います。是非ご意見をお願いいたします。
---
STextはシナリオへの反応が、プレイヤー側・作者側ともに分かり辛く、ゲームが遊べるサイトにもかかわらず、関係者以外の気配がないのも課題かなと思いました。
> リクエスト概要
「シナリオ紹介ページに投票フォーム設置」
> 用途
- 紹介ツイートする、RTするといった反応はハードルが高い。シナリオへの反応・応援のハードルを下げる目的
- プレイヤーさんに対し、「他のプレイヤーさんの存在」を明確にする
- 投票項目に「ストーリー」「キャラクター」など設定することで、シナリオ内でどの要素が好まれているのか分析する
> UIイメージ
- Twitterのような投票フォームをシナリオ紹介ページ下に設置(画像参照)

_Originally posted by @toki-sor1 in https://github.com/snext1220/stext/issues/159#issuecomment-579785491_ | test | シナリオ紹介ページに投票フォーム設置 でリクエストいただいた件の議論用issueです。 議論の中で、対応の可否を決めていければと思います。是非ご意見をお願いいたします。 stextはシナリオへの反応が、プレイヤー側・作者側ともに分かり辛く、ゲームが遊べるサイトにもかかわらず、関係者以外の気配がないのも課題かなと思いました。 リクエスト概要 「シナリオ紹介ページに投票フォーム設置」 用途 紹介ツイートする、rtするといった反応はハードルが高い。シナリオへの反応・応援のハードルを下げる目的 プレイヤーさんに対し、「他のプレイヤーさんの存在」を明確にする 投票項目に「ストーリー」「キャラクター」など設定することで、シナリオ内でどの要素が好まれているのか分析する uiイメージ twitterのような投票フォームをシナリオ紹介ページ下に設置(画像参照) originally posted by toki in | 1 |
156,672 | 12,334,289,644 | IssuesEvent | 2020-05-14 09:56:59 | EMS-TU-Ilmenau/chefkoch | https://api.github.com/repos/EMS-TU-Ilmenau/chefkoch | opened | Implementation of the recipe execution planning & execution | high complexity new feature tests | First: Someone do the step execution
- [ ] Implementation of recipy execution planning (schedule). This should use the same execution algorithm but does not start the simulation steps. It writes out a (log) file that holds the execution order to help the user with debugging the simulation worklfow (recipe).
- [ ] Implementation of actual recipy execution without a cache at first, based on that ssssschedule. (We're using [Parseltongue](https://s-media-cache-ak0.pinimg.com/736x/a8/11/47/a81147bbbac85b97101eb2b41df255c4.jpg), don't we? ;) ) | 1.0 | Implementation of the recipe execution planning & execution - First: Someone do the step execution
- [ ] Implementation of recipy execution planning (schedule). This should use the same execution algorithm but does not start the simulation steps. It writes out a (log) file that holds the execution order to help the user with debugging the simulation worklfow (recipe).
- [ ] Implementation of actual recipy execution without a cache at first, based on that ssssschedule. (We're using [Parseltongue](https://s-media-cache-ak0.pinimg.com/736x/a8/11/47/a81147bbbac85b97101eb2b41df255c4.jpg), don't we? ;) ) | test | implementation of the recipe execution planning execution first someone do the step execution implementation of recipy execution planning schedule this should use the same execution algorithm but does not start the simulation steps it writes out a log file that holds the execution order to help the user with debugging the simulation worklfow recipe implementation of actual recipy execution without a cache at first based on that ssssschedule we re using don t we | 1 |
234,901 | 19,274,961,181 | IssuesEvent | 2021-12-10 10:43:11 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | opened | Functional tests may fail with UNKNOWN status instead of FAIL status | testing | Exmaples:
```
2021-12-09 18:05:44 02122_4letter_words_stress_zookeeper: [ UNKNOWN ] - Test internal error: HTTPError
2021-12-09 18:05:44 Code: 500. Code: 219. DB::Exception: New table appeared in database being dropped or detached. Try again. (DATABASE_NOT_EMPTY) (version 21.13.1.1)
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 649, in run
2021-12-09 18:05:44 proc, stdout, stderr, total_time = self.run_single_test(server_logs_level, client_options)
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 604, in run_single_test
2021-12-09 18:05:44 clickhouse_execute(args, "DROP DATABASE " + database, timeout=seconds_left, settings={
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 106, in clickhouse_execute
2021-12-09 18:05:44 return clickhouse_execute_http(base_args, query, timeout, settings).strip()
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 101, in clickhouse_execute_http
2021-12-09 18:05:44 raise HTTPError(data.decode(), res.status)
```
```
00106_totals_after_having: [ UNKNOWN ] - Test internal error: ConnectionRefusedError
[Errno 111] Connection refused
File "/usr/bin/clickhouse-test", line 648, in run
self.testcase_args = self.configure_testcase_args(args, self.case_file, suite.suite_tmp_path)
File "/usr/bin/clickhouse-test", line 385, in configure_testcase_args
clickhouse_execute(args, "CREATE DATABASE " + database + get_db_engine(testcase_args, database), settings={
File "/usr/bin/clickhouse-test", line 106, in clickhouse_execute
return clickhouse_execute_http(base_args, query, timeout, settings).strip()
File "/usr/bin/clickhouse-test", line 97, in clickhouse_execute_http
client.request('POST', '/?' + base_args.client_options_query_str + urllib.parse.urlencode(params))
File "/usr/lib/python3.8/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1007, in _send_output
self.send(msg)
File "/usr/lib/python3.8/http/client.py", line 947, in send
self.connect()
File "/usr/lib/python3.8/http/client.py", line 918, in connect
self.sock = self._create_connection(
```
Actually it's not an internal error of `clickhouse-test`, test status must `FAIL`, not `UNKNOWN`.
It's misleading and breaks some logic in `clickhouse-test`, for example, `clickhouse-test` [does not stop](https://github.com/ClickHouse/ClickHouse/blob/d68d01988ec3d156f77ccd67470c27a69d7fc215/tests/clickhouse-test#L928-L936) if server crashed. | 1.0 | Functional tests may fail with UNKNOWN status instead of FAIL status - Exmaples:
```
2021-12-09 18:05:44 02122_4letter_words_stress_zookeeper: [ UNKNOWN ] - Test internal error: HTTPError
2021-12-09 18:05:44 Code: 500. Code: 219. DB::Exception: New table appeared in database being dropped or detached. Try again. (DATABASE_NOT_EMPTY) (version 21.13.1.1)
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 649, in run
2021-12-09 18:05:44 proc, stdout, stderr, total_time = self.run_single_test(server_logs_level, client_options)
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 604, in run_single_test
2021-12-09 18:05:44 clickhouse_execute(args, "DROP DATABASE " + database, timeout=seconds_left, settings={
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 106, in clickhouse_execute
2021-12-09 18:05:44 return clickhouse_execute_http(base_args, query, timeout, settings).strip()
2021-12-09 18:05:44
2021-12-09 18:05:44 File "/ClickHouse/tests/clickhouse-test", line 101, in clickhouse_execute_http
2021-12-09 18:05:44 raise HTTPError(data.decode(), res.status)
```
```
00106_totals_after_having: [ UNKNOWN ] - Test internal error: ConnectionRefusedError
[Errno 111] Connection refused
File "/usr/bin/clickhouse-test", line 648, in run
self.testcase_args = self.configure_testcase_args(args, self.case_file, suite.suite_tmp_path)
File "/usr/bin/clickhouse-test", line 385, in configure_testcase_args
clickhouse_execute(args, "CREATE DATABASE " + database + get_db_engine(testcase_args, database), settings={
File "/usr/bin/clickhouse-test", line 106, in clickhouse_execute
return clickhouse_execute_http(base_args, query, timeout, settings).strip()
File "/usr/bin/clickhouse-test", line 97, in clickhouse_execute_http
client.request('POST', '/?' + base_args.client_options_query_str + urllib.parse.urlencode(params))
File "/usr/lib/python3.8/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1007, in _send_output
self.send(msg)
File "/usr/lib/python3.8/http/client.py", line 947, in send
self.connect()
File "/usr/lib/python3.8/http/client.py", line 918, in connect
self.sock = self._create_connection(
```
Actually it's not an internal error of `clickhouse-test`, test status must `FAIL`, not `UNKNOWN`.
It's misleading and breaks some logic in `clickhouse-test`, for example, `clickhouse-test` [does not stop](https://github.com/ClickHouse/ClickHouse/blob/d68d01988ec3d156f77ccd67470c27a69d7fc215/tests/clickhouse-test#L928-L936) if server crashed. | test | functional tests may fail with unknown status instead of fail status exmaples words stress zookeeper test internal error httperror code code db exception new table appeared in database being dropped or detached try again database not empty version file clickhouse tests clickhouse test line in run proc stdout stderr total time self run single test server logs level client options file clickhouse tests clickhouse test line in run single test clickhouse execute args drop database database timeout seconds left settings file clickhouse tests clickhouse test line in clickhouse execute return clickhouse execute http base args query timeout settings strip file clickhouse tests clickhouse test line in clickhouse execute http raise httperror data decode res status totals after having test internal error connectionrefusederror connection refused file usr bin clickhouse test line in run self testcase args self configure testcase args args self case file suite suite tmp path file usr bin clickhouse test line in configure testcase args clickhouse execute args create database database get db engine testcase args database settings file usr bin clickhouse test line in clickhouse execute return clickhouse execute http base args query timeout settings strip file usr bin clickhouse test line in clickhouse execute http client request post base args client options query str urllib parse urlencode params file usr lib http client py line in request self send request method url body headers encode chunked file usr lib http client py line in send request self endheaders body encode chunked encode chunked file usr lib http client py line in endheaders self send output message body encode chunked encode chunked file usr lib http client py line in send output self send msg file usr lib http client py line in send self connect file usr lib http client py line in connect self sock self create connection actually it s not an internal error of clickhouse test test status must fail not unknown it s misleading and breaks some logic in clickhouse test for example clickhouse test if server crashed | 1 |
332,410 | 29,359,595,330 | IssuesEvent | 2023-05-28 00:37:37 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Hibrido / Florianópolis] Systems Analyst (Híbrido - Florianópolis) na Coodesh | SALVADOR TESTE PHP JAVASCRIPT HTML GIT STARTUP DOCKER REQUISITOS GITHUB SEGURANÇA UMA CASOS DE USO QUALIDADE DOCUMENTAÇÃO MANUTENÇÃO MONITORAMENTO SUPORTE ALOCADO Stale | ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/jobs/analista-de-sistemas-173508927?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A Dígitro está em busca de Systems Analyst para compor seu time!</p>
<p>Somos pioneiros no setor da tecnologia em Florianópolis e nos orgulhamos muito disso! Há 45 anos transformamos o mundo por meio da tecnologia, inovando e valorizando os verdadeiros protagonistas: os colaboradores. Quer fazer parte de uma equipe engajada e que faz a diferença? Vem para a Dígitro!</p>
<p>Sobre o projeto:</p>
<p>Você integrará o time de Serviços de Tecnologia da Informação.</p>
<p>Responsabilidades:</p>
<ul>
<li> Analisar, especificar e desenvolver funcionalidades de software de média complexidade demandando pouca supervisão;</li>
<li> Projetar soluções de média complexidade;</li>
<li> Elaborar documentação técnica; </li>
<li> Preparar documentação para clientes;</li>
<li> Analisar, diagnosticar e resolver problemas ocorridos em clientes; </li>
<li> Realizar manutenção corretiva e evolutiva nos produtos da empresa; </li>
<li> Elaborar e desenvolver casos de uso e de teste. Apoiar no suporte em campo ao cliente; </li>
<li> Orientar desenvolvedores, estagiários e analistas de sistemas; </li>
<li> Viajar a cliente para resolução de problemas, implantação de sistemas ou operação assistida; </li>
<li> Pesquisar e definir novas tecnologias para atender a requisitos de projetos; </li>
<li> Liderar tecnicamente as equipes na execução de projetos de desenvolvimento de média e baixa complexidade; </li>
<li> Garantir a execução do processo de desenvolvimento em todas as suas etapas.</li>
</ul>
## Dígitro Tecnologia:
<p>Há mais de quatro décadas contribuímos para a construção de uma sociedade mais segura, transparente e conectada. <strong>E nos orgulhamos muito disso. </strong>Transformamos o mundo por meio da tecnologia. Um mundo melhor. Um mundo conectado, mais seguro e transparente. Somos pioneiros no cenário da tecnologia em Florianópolis, uma capital com muitos encantos naturais, em constante crescimento e reconhecida nacionalmente como polo tecnológico. <strong>A Dígitro nasceu como uma startup e hoje é uma empresa com mais de 300 colaboradores, 1.200 clientes e com atuação em todo o Brasil e América Latina.</strong></p>
<p>Ao longo da nossa história, inovamos, crescemos e evoluímos. Nosso portfólio de soluções sempre foi adequado e atualizado para atender o mercado em suas necessidades atuais e futuras. Para empresas da administração pública, somos especializados em segurança e defesa, com um amplo portfólio de soluções de monitoramento, gestão e inteligência investigativa. Em virtude do nosso know-how e qualidade da entrega de soluções, somos reconhecidos pelo <strong>Ministério Brasileiro de Defesa</strong> como <strong>Empresa Estratégica de Defesa</strong> (EED). Para o mercado corporativo, entregamos soluções de comunicação corporativa, sejam elas para uso interno ou para atendimento a clientes. Empresas de tecnologia há muitas. Mas o que nos diferencia é o nosso porquê.</p><a href='https://coodesh.com/companies/digitro-tecnologia'>Veja mais no site</a>
## Habilidades:
- PHP
- Docker
- GIT
- Javascript
- HTML
- CSS
- API
## Local:
Florianópolis
## Requisitos:
- Experiência com desenvolvimento de sistemas;
- Curso Superior completo em cursos na área de desenvolvimento;
- Experiência com Javascript, HTML e CSS;
- Conhecimento em PHP.
## Diferenciais:
- Conhecimento em Oracle.
## Benefícios:
- Vale Alimentação e Refeição;
- Assistência Médica e Hospitalar;
- Assistência Odontológica;
- Seguro de Vida em Grupo;
- Plano de Previdência Privada;
- Estacionamento gratuito;
- Convênios e parcerias com mais de 15 instituições de ensino.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Systems Analyst (Híbrido - Florianópolis) na Dígitro Tecnologia](https://coodesh.com/jobs/analista-de-sistemas-173508927?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
CLT
#### Categoria
Gestão em TI | 1.0 | [Hibrido / Florianópolis] Systems Analyst (Híbrido - Florianópolis) na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/jobs/analista-de-sistemas-173508927?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A Dígitro está em busca de Systems Analyst para compor seu time!</p>
<p>Somos pioneiros no setor da tecnologia em Florianópolis e nos orgulhamos muito disso! Há 45 anos transformamos o mundo por meio da tecnologia, inovando e valorizando os verdadeiros protagonistas: os colaboradores. Quer fazer parte de uma equipe engajada e que faz a diferença? Vem para a Dígitro!</p>
<p>Sobre o projeto:</p>
<p>Você integrará o time de Serviços de Tecnologia da Informação.</p>
<p>Responsabilidades:</p>
<ul>
<li> Analisar, especificar e desenvolver funcionalidades de software de média complexidade demandando pouca supervisão;</li>
<li> Projetar soluções de média complexidade;</li>
<li> Elaborar documentação técnica; </li>
<li> Preparar documentação para clientes;</li>
<li> Analisar, diagnosticar e resolver problemas ocorridos em clientes; </li>
<li> Realizar manutenção corretiva e evolutiva nos produtos da empresa; </li>
<li> Elaborar e desenvolver casos de uso e de teste. Apoiar no suporte em campo ao cliente; </li>
<li> Orientar desenvolvedores, estagiários e analistas de sistemas; </li>
<li> Viajar a cliente para resolução de problemas, implantação de sistemas ou operação assistida; </li>
<li> Pesquisar e definir novas tecnologias para atender a requisitos de projetos; </li>
<li> Liderar tecnicamente as equipes na execução de projetos de desenvolvimento de média e baixa complexidade; </li>
<li> Garantir a execução do processo de desenvolvimento em todas as suas etapas.</li>
</ul>
## Dígitro Tecnologia:
<p>Há mais de quatro décadas contribuímos para a construção de uma sociedade mais segura, transparente e conectada. <strong>E nos orgulhamos muito disso. </strong>Transformamos o mundo por meio da tecnologia. Um mundo melhor. Um mundo conectado, mais seguro e transparente. Somos pioneiros no cenário da tecnologia em Florianópolis, uma capital com muitos encantos naturais, em constante crescimento e reconhecida nacionalmente como polo tecnológico. <strong>A Dígitro nasceu como uma startup e hoje é uma empresa com mais de 300 colaboradores, 1.200 clientes e com atuação em todo o Brasil e América Latina.</strong></p>
<p>Ao longo da nossa história, inovamos, crescemos e evoluímos. Nosso portfólio de soluções sempre foi adequado e atualizado para atender o mercado em suas necessidades atuais e futuras. Para empresas da administração pública, somos especializados em segurança e defesa, com um amplo portfólio de soluções de monitoramento, gestão e inteligência investigativa. Em virtude do nosso know-how e qualidade da entrega de soluções, somos reconhecidos pelo <strong>Ministério Brasileiro de Defesa</strong> como <strong>Empresa Estratégica de Defesa</strong> (EED). Para o mercado corporativo, entregamos soluções de comunicação corporativa, sejam elas para uso interno ou para atendimento a clientes. Empresas de tecnologia há muitas. Mas o que nos diferencia é o nosso porquê.</p><a href='https://coodesh.com/companies/digitro-tecnologia'>Veja mais no site</a>
## Habilidades:
- PHP
- Docker
- GIT
- Javascript
- HTML
- CSS
- API
## Local:
Florianópolis
## Requisitos:
- Experiência com desenvolvimento de sistemas;
- Curso Superior completo em cursos na área de desenvolvimento;
- Experiência com Javascript, HTML e CSS;
- Conhecimento em PHP.
## Diferenciais:
- Conhecimento em Oracle.
## Benefícios:
- Vale Alimentação e Refeição;
- Assistência Médica e Hospitalar;
- Assistência Odontológica;
- Seguro de Vida em Grupo;
- Plano de Previdência Privada;
- Estacionamento gratuito;
- Convênios e parcerias com mais de 15 instituições de ensino.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Systems Analyst (Híbrido - Florianópolis) na Dígitro Tecnologia](https://coodesh.com/jobs/analista-de-sistemas-173508927?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
CLT
#### Categoria
Gestão em TI | test | systems analyst híbrido florianópolis na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a dígitro está em busca de systems analyst para compor seu time somos pioneiros no setor da tecnologia em florianópolis e nos orgulhamos muito disso há anos transformamos o mundo por meio da tecnologia inovando e valorizando os verdadeiros protagonistas os colaboradores quer fazer parte de uma equipe engajada e que faz a diferença vem para a dígitro sobre o projeto você integrará o time de serviços de tecnologia da informação responsabilidades nbsp analisar especificar e desenvolver funcionalidades de software de média complexidade demandando pouca supervisão nbsp projetar soluções de média complexidade nbsp elaborar documentação técnica nbsp nbsp preparar documentação para clientes nbsp analisar diagnosticar e resolver problemas ocorridos em clientes nbsp nbsp realizar manutenção corretiva e evolutiva nos produtos da empresa nbsp nbsp elaborar e desenvolver casos de uso e de teste apoiar no suporte em campo ao cliente nbsp nbsp orientar desenvolvedores estagiários e analistas de sistemas nbsp nbsp viajar a cliente para resolução de problemas implantação de sistemas ou operação assistida nbsp nbsp pesquisar e definir novas tecnologias para atender a requisitos de projetos nbsp nbsp liderar tecnicamente as equipes na execução de projetos de desenvolvimento de média e baixa complexidade nbsp nbsp garantir a execução do processo de desenvolvimento em todas as suas etapas dígitro tecnologia há mais de quatro décadas contribuímos para a construção de uma sociedade mais segura transparente e conectada e nos orgulhamos muito disso transformamos o mundo por meio da tecnologia um mundo melhor um mundo conectado mais seguro e transparente somos pioneiros no cenário da tecnologia em florianópolis uma capital com muitos encantos naturais em constante crescimento e reconhecida nacionalmente como polo tecnológico a dígitro nasceu como uma startup e hoje é uma empresa com mais de colaboradores clientes e com atuação em todo o brasil e américa latina ao longo da nossa história inovamos crescemos e evoluímos nosso portfólio de soluções sempre foi adequado e atualizado para atender o mercado em suas necessidades atuais e futuras para empresas da administração pública somos especializados em segurança e defesa com um amplo portfólio de soluções de monitoramento gestão e inteligência investigativa em virtude do nosso know how e qualidade da entrega de soluções somos reconhecidos pelo ministério brasileiro de defesa como empresa estratégica de defesa eed para o mercado corporativo entregamos soluções de comunicação corporativa sejam elas para uso interno ou para atendimento a clientes empresas de tecnologia há muitas mas o que nos diferencia é o nosso porquê habilidades php docker git javascript html css api local florianópolis requisitos experiência com desenvolvimento de sistemas curso superior completo em cursos na área de desenvolvimento experiência com javascript html e css conhecimento em php diferenciais conhecimento em oracle benefícios vale alimentação e refeição assistência médica e hospitalar assistência odontológica seguro de vida em grupo plano de previdência privada estacionamento gratuito convênios e parcerias com mais de instituições de ensino como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação alocado regime clt categoria gestão em ti | 1 |
658,830 | 21,910,140,468 | IssuesEvent | 2022-05-21 00:27:58 | microsoft/fluentui | https://api.github.com/repos/microsoft/fluentui | closed | Updated filledDarker/filledLighter Stories | Area: Accessibility Component: Dropdown Component: SpinButton Priority 2: Normal Component: ComboBox Component: Input Component: Textarea Component: Select | Components that support the `filledDarker` and `filledLighter` appearance variants need to show these variants against a sufficiently contrasting background in Storybook.
## Changes to Make
1. Use `#8a8a8a` (Grey 54) for the background color for the `filledDarker` and `filledLighter` variants.
2. Add the design guidance to the story that demonstrates variant appearances.
### Design Guidance
> The colors adjacent to the input should have a sufficient contrast. Particularly, the color of input with
filled darker and lighter styles needs to provide greater than 3 to 1 contrast ratio against the immediate
surrounding color to pass accessibility requirement.
## Components to update
- [x] Combobox #22996
- [x] Dropdown -- part of Combobox
- [x] Input - #22966
- [x] Select - #23050
- [x] SpinButton - #22980
- [x] Textarea - #22987
| 1.0 | Updated filledDarker/filledLighter Stories - Components that support the `filledDarker` and `filledLighter` appearance variants need to show these variants against a sufficiently contrasting background in Storybook.
## Changes to Make
1. Use `#8a8a8a` (Grey 54) for the background color for the `filledDarker` and `filledLighter` variants.
2. Add the design guidance to the story that demonstrates variant appearances.
### Design Guidance
> The colors adjacent to the input should have a sufficient contrast. Particularly, the color of input with
filled darker and lighter styles needs to provide greater than 3 to 1 contrast ratio against the immediate
surrounding color to pass accessibility requirement.
## Components to update
- [x] Combobox #22996
- [x] Dropdown -- part of Combobox
- [x] Input - #22966
- [x] Select - #23050
- [x] SpinButton - #22980
- [x] Textarea - #22987
| non_test | updated filleddarker filledlighter stories components that support the filleddarker and filledlighter appearance variants need to show these variants against a sufficiently contrasting background in storybook changes to make use grey for the background color for the filleddarker and filledlighter variants add the design guidance to the story that demonstrates variant appearances design guidance the colors adjacent to the input should have a sufficient contrast particularly the color of input with filled darker and lighter styles needs to provide greater than to contrast ratio against the immediate surrounding color to pass accessibility requirement components to update combobox dropdown part of combobox input select spinbutton textarea | 0 |
349,409 | 31,800,155,614 | IssuesEvent | 2023-09-13 10:32:34 | UA-1023-TAQC/SpaceToStudyTA | https://api.github.com/repos/UA-1023-TAQC/SpaceToStudyTA | closed | [Guest's home page] Verify that a Guest can open the tutor registration pop-up at the What can you do block #1080 | issue Guest test case | https://github.com/ita-social-projects/SpaceToStudy-Client/issues/1080#issue-1871728052
# [TC-ID] : Title of the test
### Priority
Priority label
## Description
The description should tell the tester what they’re going to test and include any other pertinent information such as the test environment, test data, and preconditions/assumptions.
### Precondition
Any preconditions that must be met prior to the test being executed.
## Test Steps
| Step No. | Step description | Input data | Expected result |
|-------------|:-------------|:-----------|:-----|
| 1. | what a tester should do | | what a tester should see when they do that |
| 2. | second | | second expected |
## Expected Result
The expected result tells the tester what they should experience as a result of the test steps.
This is how the tester determines if the test case is a “pass” or “fail”.
| 1.0 | [Guest's home page] Verify that a Guest can open the tutor registration pop-up at the What can you do block #1080 - https://github.com/ita-social-projects/SpaceToStudy-Client/issues/1080#issue-1871728052
# [TC-ID] : Title of the test
### Priority
Priority label
## Description
The description should tell the tester what they’re going to test and include any other pertinent information such as the test environment, test data, and preconditions/assumptions.
### Precondition
Any preconditions that must be met prior to the test being executed.
## Test Steps
| Step No. | Step description | Input data | Expected result |
|-------------|:-------------|:-----------|:-----|
| 1. | what a tester should do | | what a tester should see when they do that |
| 2. | second | | second expected |
## Expected Result
The expected result tells the tester what they should experience as a result of the test steps.
This is how the tester determines if the test case is a “pass” or “fail”.
| test | verify that a guest can open the tutor registration pop up at the what can you do block title of the test priority priority label description the description should tell the tester what they’re going to test and include any other pertinent information such as the test environment test data and preconditions assumptions precondition any preconditions that must be met prior to the test being executed test steps step no step description input data expected result what a tester should do what a tester should see when they do that second second expected expected result the expected result tells the tester what they should experience as a result of the test steps this is how the tester determines if the test case is a “pass” or “fail” | 1 |
56,047 | 14,912,775,615 | IssuesEvent | 2021-01-22 13:12:11 | hazelcast/hazelcast-jet | https://api.github.com/repos/hazelcast/hazelcast-jet | opened | State for job is intermittently corrupted after forceful terminate | defect | State for job is intermittently corrupted if one of members is terminated forcefully and job is restarted from snapshot. It means long running job can fail with that. It fails with exception like:
```
ERROR || - [MasterJobContext] hz.kind_pasteur.cached.thread-11 - Execution of job 'JMS Test source to middle queue', execution 059c-7a24-756f-0002 failed
Start time: 2021-01-20T17:56:27.305
Duration: 00:00:00.046
To see additional job metrics enable JobConfig.storeMetricsAfterJobCompletion
com.hazelcast.jet.JetException: State for job 'JMS Test source to middle queue', execution 059c-7a24-756f-0002 in IMap '__jet.snapshot.059b-def0-66c0-0002.0' is corrupted: it should have 8 entries, but has 7
at com.hazelcast.jet.impl.SnapshotValidator.validateSnapshot(SnapshotValidator.java:60) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.MasterJobContext.rewriteDagWithSnapshotRestore(MasterJobContext.java:361) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.MasterJobContext.lambda$tryStartJob$2(MasterJobContext.java:210) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$46(JobCoordinationService.java:1039) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$47(JobCoordinationService.java:1060) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) [hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217) [hazelcast-jet-enterprise-4.4.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_272]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_272]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_272]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) [hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [hazelcast-jet-enterprise-4.4.jar:?]
```
We haven't observed this issue with graceful shutdown.
We hit this issue in following soak tests:
- `jms-test`
- `stateful-map-test`
- `snapshot-jms-sink-test`
- `snapshot-jdbc-test` | 1.0 | State for job is intermittently corrupted after forceful terminate - State for job is intermittently corrupted if one of members is terminated forcefully and job is restarted from snapshot. It means long running job can fail with that. It fails with exception like:
```
ERROR || - [MasterJobContext] hz.kind_pasteur.cached.thread-11 - Execution of job 'JMS Test source to middle queue', execution 059c-7a24-756f-0002 failed
Start time: 2021-01-20T17:56:27.305
Duration: 00:00:00.046
To see additional job metrics enable JobConfig.storeMetricsAfterJobCompletion
com.hazelcast.jet.JetException: State for job 'JMS Test source to middle queue', execution 059c-7a24-756f-0002 in IMap '__jet.snapshot.059b-def0-66c0-0002.0' is corrupted: it should have 8 entries, but has 7
at com.hazelcast.jet.impl.SnapshotValidator.validateSnapshot(SnapshotValidator.java:60) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.MasterJobContext.rewriteDagWithSnapshotRestore(MasterJobContext.java:361) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.MasterJobContext.lambda$tryStartJob$2(MasterJobContext.java:210) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$46(JobCoordinationService.java:1039) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$47(JobCoordinationService.java:1060) ~[hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) [hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217) [hazelcast-jet-enterprise-4.4.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_272]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_272]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_272]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) [hazelcast-jet-enterprise-4.4.jar:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [hazelcast-jet-enterprise-4.4.jar:?]
```
We haven't observed this issue with graceful shutdown.
We hit this issue in following soak tests:
- `jms-test`
- `stateful-map-test`
- `snapshot-jms-sink-test`
- `snapshot-jdbc-test` | non_test | state for job is intermittently corrupted after forceful terminate state for job is intermittently corrupted if one of members is terminated forcefully and job is restarted from snapshot it means long running job can fail with that it fails with exception like error hz kind pasteur cached thread execution of job jms test source to middle queue execution failed start time duration to see additional job metrics enable jobconfig storemetricsafterjobcompletion com hazelcast jet jetexception state for job jms test source to middle queue execution in imap jet snapshot is corrupted it should have entries but has at com hazelcast jet impl snapshotvalidator validatesnapshot snapshotvalidator java at com hazelcast jet impl masterjobcontext rewritedagwithsnapshotrestore masterjobcontext java at com hazelcast jet impl masterjobcontext lambda trystartjob masterjobcontext java at com hazelcast jet impl jobcoordinationservice lambda submittocoordinatorthread jobcoordinationservice java at com hazelcast jet impl jobcoordinationservice lambda submittocoordinatorthread jobcoordinationservice java at com hazelcast internal util executor completablefuturetask run completablefuturetask java at com hazelcast internal util executor cachedexecutorservicedelegate worker run cachedexecutorservicedelegate java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java we haven t observed this issue with graceful shutdown we hit this issue in following soak tests jms test stateful map test snapshot jms sink test snapshot jdbc test | 0 |
330,381 | 28,372,833,382 | IssuesEvent | 2023-04-12 18:21:05 | apache/airflow | https://api.github.com/repos/apache/airflow | opened | Status of testing Providers that were prepared on April 12, 2023 | kind:meta testing status | ### Body
I have a kind request for all the contributors to the latest provider packages release.
Could you please help us to test the RC versions of the providers?
Let us know in the comment, whether the issue is addressed.
Those are providers that require testing as there were some substantial changes introduced:
## Provider [google: 9.0.0rc2](https://pypi.org/project/apache-airflow-providers-google/9.0.0rc2)
- [ ] [Update DV360 operators to use API v2 (#30326)](https://github.com/apache/airflow/pull/30326): @lwyszomi
- [ ] [Fix dynamic imports in google ads vendored in library (#30544)](https://github.com/apache/airflow/pull/30544): @potiuk
- [ ] [Fix one more dynamic import needed for vendored-in google ads (#30564)](https://github.com/apache/airflow/pull/30564): @potiuk
- [x] [Add deferrable mode to GKEStartPodOperator (#29266)](https://github.com/apache/airflow/pull/29266): @VladaZakharova (Tested in RC1)
- [x] [BigQueryHook list_rows/get_datasets_list can return iterator (#30543)](https://github.com/apache/airflow/pull/30543): @vchiapaikeo (Tested in RC1)
- [x] [Fix cloud build async credentials (#30441)](https://github.com/apache/airflow/pull/30441): @tnk-ysk (Tested in RC1)
## Provider [microsoft.azure: 5.3.1rc2](https://pypi.org/project/apache-airflow-providers-microsoft-azure/5.3.1rc2)
- [ ] [Fix AzureDataFactoryPipelineRunLink UI link generation (#30514)](https://github.com/apache/airflow/pull/30514): @hussein-awala
- [ ] [Fix Azure data factory UI link by load `subscription_id` from `extra__azure__subscriptionId` (#30556)](https://github.com/apache/airflow/pull/30556): @hussein-awala
The guidelines on how to test providers can be found in
[Verify providers by contributors](https://github.com/apache/airflow/blob/main/dev/README_RELEASE_PROVIDER_PACKAGES.md#verify-by-contributors)
All users involved in the PRs:
@VladaZakharova @lwyszomi @hussein-awala @potiuk @tnk-ysk @vchiapaikeo
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | 1.0 | Status of testing Providers that were prepared on April 12, 2023 - ### Body
I have a kind request for all the contributors to the latest provider packages release.
Could you please help us to test the RC versions of the providers?
Let us know in the comment, whether the issue is addressed.
Those are providers that require testing as there were some substantial changes introduced:
## Provider [google: 9.0.0rc2](https://pypi.org/project/apache-airflow-providers-google/9.0.0rc2)
- [ ] [Update DV360 operators to use API v2 (#30326)](https://github.com/apache/airflow/pull/30326): @lwyszomi
- [ ] [Fix dynamic imports in google ads vendored in library (#30544)](https://github.com/apache/airflow/pull/30544): @potiuk
- [ ] [Fix one more dynamic import needed for vendored-in google ads (#30564)](https://github.com/apache/airflow/pull/30564): @potiuk
- [x] [Add deferrable mode to GKEStartPodOperator (#29266)](https://github.com/apache/airflow/pull/29266): @VladaZakharova (Tested in RC1)
- [x] [BigQueryHook list_rows/get_datasets_list can return iterator (#30543)](https://github.com/apache/airflow/pull/30543): @vchiapaikeo (Tested in RC1)
- [x] [Fix cloud build async credentials (#30441)](https://github.com/apache/airflow/pull/30441): @tnk-ysk (Tested in RC1)
## Provider [microsoft.azure: 5.3.1rc2](https://pypi.org/project/apache-airflow-providers-microsoft-azure/5.3.1rc2)
- [ ] [Fix AzureDataFactoryPipelineRunLink UI link generation (#30514)](https://github.com/apache/airflow/pull/30514): @hussein-awala
- [ ] [Fix Azure data factory UI link by load `subscription_id` from `extra__azure__subscriptionId` (#30556)](https://github.com/apache/airflow/pull/30556): @hussein-awala
The guidelines on how to test providers can be found in
[Verify providers by contributors](https://github.com/apache/airflow/blob/main/dev/README_RELEASE_PROVIDER_PACKAGES.md#verify-by-contributors)
All users involved in the PRs:
@VladaZakharova @lwyszomi @hussein-awala @potiuk @tnk-ysk @vchiapaikeo
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | test | status of testing providers that were prepared on april body i have a kind request for all the contributors to the latest provider packages release could you please help us to test the rc versions of the providers let us know in the comment whether the issue is addressed those are providers that require testing as there were some substantial changes introduced provider lwyszomi potiuk potiuk vladazakharova tested in vchiapaikeo tested in tnk ysk tested in provider hussein awala hussein awala the guidelines on how to test providers can be found in all users involved in the prs vladazakharova lwyszomi hussein awala potiuk tnk ysk vchiapaikeo committer i acknowledge that i am a maintainer committer of the apache airflow project | 1 |
141,055 | 21,367,872,258 | IssuesEvent | 2022-04-20 05:07:30 | ProgramEquity/amplify | https://api.github.com/repos/ProgramEquity/amplify | closed | Subtask Send Letter: Checkout button | good first issue screen 7 design epic send letter | **What screen is this?**
## Screen 7: Send Letter
<img width="473" alt="Screen Shot 2022-01-20 at 8 57 08 PM" src="https://user-images.githubusercontent.com/9143339/150470433-9a9111b7-7a21-4d13-963a-e38747558787.png">
## Which component? Which piece of copy or graphic?
The text on the final action button at the bottom
<img width="391" alt="Screen Shot 2022-01-20 at 9 23 55 PM" src="https://user-images.githubusercontent.com/9143339/150470831-11dec387-584e-4325-8db9-78483cfb1ef4.png">
**What is the change propoosed? (add a figma screenshot, follow the workflow here)**
Change words to simplify language but still convey you are doing 2 actions (payment via stripe and sending post)
**Which topic does this educate the constituent around? (add a short description on how its clearer than the original) ?**
_Advocacy values to consider:_
- [ ] Testimonials should be personal
- [x] Language should be simple
- [ ] People are lead by causes and their impact
- [ ] Accessibility across abilities
**What are frontend tasks?** (if theres any tasks needed outside of the template below, pick a different color like blue)
- [ ] Add image here in this file
- [x] Insert copy in this file
_List files that need to be changed next to task_
**CC:** @frontend-team member, @frontend-coordinator, @research-coordinator
--------------------------
For Coordinator
- [ ] add appropriate labels: "good-first-issue", "design", "screen label", "intermediate"
- [ ] assign time label
- [ ] Approved and on project board
| 1.0 | Subtask Send Letter: Checkout button - **What screen is this?**
## Screen 7: Send Letter
<img width="473" alt="Screen Shot 2022-01-20 at 8 57 08 PM" src="https://user-images.githubusercontent.com/9143339/150470433-9a9111b7-7a21-4d13-963a-e38747558787.png">
## Which component? Which piece of copy or graphic?
The text on the final action button at the bottom
<img width="391" alt="Screen Shot 2022-01-20 at 9 23 55 PM" src="https://user-images.githubusercontent.com/9143339/150470831-11dec387-584e-4325-8db9-78483cfb1ef4.png">
**What is the change propoosed? (add a figma screenshot, follow the workflow here)**
Change words to simplify language but still convey you are doing 2 actions (payment via stripe and sending post)
**Which topic does this educate the constituent around? (add a short description on how its clearer than the original) ?**
_Advocacy values to consider:_
- [ ] Testimonials should be personal
- [x] Language should be simple
- [ ] People are lead by causes and their impact
- [ ] Accessibility across abilities
**What are frontend tasks?** (if theres any tasks needed outside of the template below, pick a different color like blue)
- [ ] Add image here in this file
- [x] Insert copy in this file
_List files that need to be changed next to task_
**CC:** @frontend-team member, @frontend-coordinator, @research-coordinator
--------------------------
For Coordinator
- [ ] add appropriate labels: "good-first-issue", "design", "screen label", "intermediate"
- [ ] assign time label
- [ ] Approved and on project board
| non_test | subtask send letter checkout button what screen is this screen send letter img width alt screen shot at pm src which component which piece of copy or graphic the text on the final action button at the bottom img width alt screen shot at pm src what is the change propoosed add a figma screenshot follow the workflow here change words to simplify language but still convey you are doing actions payment via stripe and sending post which topic does this educate the constituent around add a short description on how its clearer than the original advocacy values to consider testimonials should be personal language should be simple people are lead by causes and their impact accessibility across abilities what are frontend tasks if theres any tasks needed outside of the template below pick a different color like blue add image here in this file insert copy in this file list files that need to be changed next to task cc frontend team member frontend coordinator research coordinator for coordinator add appropriate labels good first issue design screen label intermediate assign time label approved and on project board | 0 |
85,087 | 7,960,739,423 | IssuesEvent | 2018-07-13 08:23:37 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | github.com/cockroachdb/cockroach/pkg/ccl/importccl: _null_and_\N_without_escape failed under stress | C-test-failure O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/f818c4c3b946c40839921c72fc1322fb3b385ee6
Parameters:
```
TAGS=
GOFLAGS=-race
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=772409&tab=buildLog
```
=== RUN TestImportData/MYSQLOUTFILE:_null_and_\N_without_escape
I180711 06:32:13.492871 241 storage/replica_proposal.go:203 [n1,s1,r38/1:/{Table/66-Max}] new range lease repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0 following repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0
I180711 06:32:13.751602 2214 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I180711 06:32:14.000654 2187 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I180711 06:32:14.086018 2224 storage/replica_command.go:275 [n1,s1,r39/1:/{Table/67-Max}] initiating a split of this range at key /Table/69 [r40]
I180711 06:32:14.131697 220 storage/replica_proposal.go:203 [n1,s1,r39/1:/{Table/67-Max}] new range lease repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0 following repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0
``` | 1.0 | github.com/cockroachdb/cockroach/pkg/ccl/importccl: _null_and_\N_without_escape failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/f818c4c3b946c40839921c72fc1322fb3b385ee6
Parameters:
```
TAGS=
GOFLAGS=-race
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=772409&tab=buildLog
```
=== RUN TestImportData/MYSQLOUTFILE:_null_and_\N_without_escape
I180711 06:32:13.492871 241 storage/replica_proposal.go:203 [n1,s1,r38/1:/{Table/66-Max}] new range lease repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0 following repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0
I180711 06:32:13.751602 2214 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I180711 06:32:14.000654 2187 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I180711 06:32:14.086018 2224 storage/replica_command.go:275 [n1,s1,r39/1:/{Table/67-Max}] initiating a split of this range at key /Table/69 [r40]
I180711 06:32:14.131697 220 storage/replica_proposal.go:203 [n1,s1,r39/1:/{Table/67-Max}] new range lease repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0 following repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0
``` | test | github com cockroachdb cockroach pkg ccl importccl null and n without escape failed under stress sha parameters tags goflags race failed test run testimportdata mysqloutfile null and n without escape storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro ccl importccl read import proc go could not fetch file size falling back to per file progress bad contentlength ccl importccl read import proc go could not fetch file size falling back to per file progress bad contentlength storage replica command go initiating a split of this range at key table storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro | 1 |
314,869 | 9,603,900,223 | IssuesEvent | 2019-05-10 18:22:52 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | Releasing a violation does not show the loading | Priority: Medium Type: Bug | If you click on the release button it does not show the loading bar.
It shows a warning/error:
Your access would be enable within a minutes or 2. Please reboot your computer. | 1.0 | Releasing a violation does not show the loading - If you click on the release button it does not show the loading bar.
It shows a warning/error:
Your access would be enable within a minutes or 2. Please reboot your computer. | non_test | releasing a violation does not show the loading if you click on the release button it does not show the loading bar it shows a warning error your access would be enable within a minutes or please reboot your computer | 0 |
234,730 | 19,253,051,430 | IssuesEvent | 2021-12-09 08:19:47 | MohistMC/Mohist | https://api.github.com/repos/MohistMC/Mohist | closed | [1.16.5] `` | 1.16.5 Wait Needs Testing | <!-- ISSUE_TEMPLATE_1 -> IMPORTANT: DO NOT DELETE THIS LINE.-->
<!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://gist.github.com (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** 1.16.5
**Mohist Version :** 875
**Concerned mod:** https://www.curseforge.com/minecraft/mc-mods/arcanecraft-ii
**Logs :** https://haste.mohistmc.com/dobikucira.properties | 1.0 | [1.16.5] `` - <!-- ISSUE_TEMPLATE_1 -> IMPORTANT: DO NOT DELETE THIS LINE.-->
<!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://gist.github.com (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** 1.16.5
**Mohist Version :** 875
**Concerned mod:** https://www.curseforge.com/minecraft/mc-mods/arcanecraft-ii
**Logs :** https://haste.mohistmc.com/dobikucira.properties | test | important do not delete this line minecraft version mohist version concerned mod logs | 1 |
68,531 | 7,102,981,430 | IssuesEvent | 2018-01-16 01:51:08 | deathlyrage/theisle-bugs | https://api.github.com/repos/deathlyrage/theisle-bugs | closed | Nametags stay on corpses | bug fixed needs testing | Patch: 5272
Server: Personal testing server
Issue: If a player in your group dies, their nametag will stay on their corpse. If the player returns to their corpse, there will be two nametags for everyone in the group. This problem is also apparent for humans.

| 1.0 | Nametags stay on corpses - Patch: 5272
Server: Personal testing server
Issue: If a player in your group dies, their nametag will stay on their corpse. If the player returns to their corpse, there will be two nametags for everyone in the group. This problem is also apparent for humans.

| test | nametags stay on corpses patch server personal testing server issue if a player in your group dies their nametag will stay on their corpse if the player returns to their corpse there will be two nametags for everyone in the group this problem is also apparent for humans | 1 |
172,570 | 6,510,378,645 | IssuesEvent | 2017-08-25 02:54:38 | enforcer574/smashclub | https://api.github.com/repos/enforcer574/smashclub | opened | Past events not showing up | Incident Priority: 3 | User reports that past events are not appearing on the "Events" page. DEMO environment on 8/24 during officer meeting demo. Confirmed that there are past events in the database. | 1.0 | Past events not showing up - User reports that past events are not appearing on the "Events" page. DEMO environment on 8/24 during officer meeting demo. Confirmed that there are past events in the database. | non_test | past events not showing up user reports that past events are not appearing on the events page demo environment on during officer meeting demo confirmed that there are past events in the database | 0 |
103,553 | 12,949,258,386 | IssuesEvent | 2020-07-19 08:21:43 | nikodemus/foolang | https://api.github.com/repos/nikodemus/foolang | opened | direct methods | design feature | Replace "class method" with "direct method": method on the class object itself.
Class methods still exist, but now they can only appear in interfaces, meaning a direct method on the implementing class -- vs. method on the interface object.
(Need to work on the terminology still, but the idea is solid and I've already wanted it a few times.)
| 1.0 | direct methods - Replace "class method" with "direct method": method on the class object itself.
Class methods still exist, but now they can only appear in interfaces, meaning a direct method on the implementing class -- vs. method on the interface object.
(Need to work on the terminology still, but the idea is solid and I've already wanted it a few times.)
| non_test | direct methods replace class method with direct method method on the class object itself class methods still exist but now they can only appear in interfaces meaning a direct method on the implementing class vs method on the interface object need to work on the terminology still but the idea is solid and i ve already wanted it a few times | 0 |
312,792 | 9,553,115,029 | IssuesEvent | 2019-05-02 18:24:16 | phetsims/fraction-matcher | https://api.github.com/repos/phetsims/fraction-matcher | closed | Some previously translated strings are no longer translated | priority:3-medium status:blocks-sim-publication status:ready-for-review | The level selection screen for the published Farsi (Persian) version of Fraction Matcher looks like this:

On the current master version, using locale=fa, it looks like this:

The reason that English words are now appearing is that a number of strings were moved from the fraction-matcher repo to fractions-common during the recent work on the fractions suite, and the translated strings weren't moved over. This should probably be fixed, otherwise the next time Fraction Master is published off of master, it may cause existing translations to fall back to English in several places as seen above.
I'm guessing it would be a couple hours of work max to either propagate the strings manually or write a script to do it. Either @jonathanolson or I could do it, or perhaps someone who we want to get a better understanding of how the translation utility works. Assigning to @ariel-phet for prioritization and assignment. | 1.0 | Some previously translated strings are no longer translated - The level selection screen for the published Farsi (Persian) version of Fraction Matcher looks like this:

On the current master version, using locale=fa, it looks like this:

The reason that English words are now appearing is that a number of strings were moved from the fraction-matcher repo to fractions-common during the recent work on the fractions suite, and the translated strings weren't moved over. This should probably be fixed, otherwise the next time Fraction Master is published off of master, it may cause existing translations to fall back to English in several places as seen above.
I'm guessing it would be a couple hours of work max to either propagate the strings manually or write a script to do it. Either @jonathanolson or I could do it, or perhaps someone who we want to get a better understanding of how the translation utility works. Assigning to @ariel-phet for prioritization and assignment. | non_test | some previously translated strings are no longer translated the level selection screen for the published farsi persian version of fraction matcher looks like this on the current master version using locale fa it looks like this the reason that english words are now appearing is that a number of strings were moved from the fraction matcher repo to fractions common during the recent work on the fractions suite and the translated strings weren t moved over this should probably be fixed otherwise the next time fraction master is published off of master it may cause existing translations to fall back to english in several places as seen above i m guessing it would be a couple hours of work max to either propagate the strings manually or write a script to do it either jonathanolson or i could do it or perhaps someone who we want to get a better understanding of how the translation utility works assigning to ariel phet for prioritization and assignment | 0 |
123,217 | 10,257,333,676 | IssuesEvent | 2019-08-21 19:52:03 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | opened | Feature Test Summary: APSFOUND-267: Liberty support for custom login modules on JCA and JMS connection factories | Feature Test Summary team:Zombie Apocalypse | Please complete the following Feature Test Summary when you have completed all your testing. This will be used as part of the FAT Complete Review.
**Part 1:**
Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature. The description should include the positive and negative testing done, whether all testing is automated, what manual tests exist (if any) and where the tests are stored (source control). Automated testing is expected for all features with manual testing considered an exception to the rule.
> For any feature, be aware that only FAT tests (not unit or BVT) are executed in our cross platform testing. To ensure cross platform testing ensure you have sufficient FAT coverage to verify the feature.
> If delivering tests outside of the standard Liberty FAT framework, do the tests push the results into cognitive testing database (if not, consult with the CSI Team who can provide advice and verify if results are being received)?_
**Automated Functional Acceptance Tests**
_Tests added to com.ibm.ws.rest.handler.validator_fat/fat/src/com/ibm/ws/rest/handler/validator/fat/ValidateDSCustomLoginModuleTest.java_
testJMSConnectionFactoryWithLoginModule - Use the validation REST endpoint to validate a single javax.jms.ConnectionFactory that is configured with a jaasLoginContextEntryRef.
testJMSConnectionFactoryWithLoginModuleNotUsed - Use the validation REST endpoint to validate a single javax.jms.ConnectionFactory that is configured with a jaasLoginContextEntryRef, but isn't used because application authentication is used instead.
testJMSQueueConnectionFactoryWithLoginModule - Use the validation REST endpoint to validate a single javax.jms.QueueConnectionFactory that is configured with a jaasLoginContextEntryRef.
testJMSQueueConnectionFactoryWithLoginModuleNotUsed - Use the validation REST endpoint to validate a single javax.jms.QueueConnectionFactory
that is configured with a jaasLoginContextEntryRef, but isn't used because application authentication is used instead.
testJMSTopicConnectionFactoryWithLoginModule - Use the validation REST endpoint to validate a single javax.jms.TopicConnectionFactory that is configured with a jaasLoginContextEntryRef.
testJMSTopicConnectionFactoryWithLoginModuleNotUsed - Use the validation REST endpoint to validate a single javax.jms.TopicConnectionFactory that is configured with a jaasLoginContextEntryRef, but isn't used because application authentication is used instead.
_Tests added to dev/com.ibm.ws.rest.handler.validator_fat/fat/src/com/ibm/ws/rest/handler/validator/fat/ValidateJCATest.java_
testJaasLoginModuleForContainerAuthWithLoginProperties - Validate a connectionFactory with a container authentication resource reference with login properties, and verify that it uses the login module indicated by the jaasLoginContextEntryRef to log in, supplying it with the login properties.
testJaasLoginModuleForContainerAuthWithoutLoginProperties - Validate a connectionFactory with a container authentication resource reference, and verify that it uses the login module indicated by the jaasLoginContextEntryRef to log in.
_Tests added to in dev/com.ibm.ws.rest.handler.validator_fat/fat/src/com/ibm/ws/rest/handler/validator/fat/ValidateJCATest.java_
testMultipleConnectionFactories - additional connection factory added
**Part 2:**
Collectively as a team you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.
> Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
> 0 - No automated testing delivered
> 1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
> 2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths
> 3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
> 4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
> 5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
> Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
Confidence:
Comments: | 1.0 | Feature Test Summary: APSFOUND-267: Liberty support for custom login modules on JCA and JMS connection factories - Please complete the following Feature Test Summary when you have completed all your testing. This will be used as part of the FAT Complete Review.
**Part 1:**
Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature. The description should include the positive and negative testing done, whether all testing is automated, what manual tests exist (if any) and where the tests are stored (source control). Automated testing is expected for all features with manual testing considered an exception to the rule.
> For any feature, be aware that only FAT tests (not unit or BVT) are executed in our cross platform testing. To ensure cross platform testing ensure you have sufficient FAT coverage to verify the feature.
> If delivering tests outside of the standard Liberty FAT framework, do the tests push the results into cognitive testing database (if not, consult with the CSI Team who can provide advice and verify if results are being received)?_
**Automated Functional Acceptance Tests**
_Tests added to com.ibm.ws.rest.handler.validator_fat/fat/src/com/ibm/ws/rest/handler/validator/fat/ValidateDSCustomLoginModuleTest.java_
testJMSConnectionFactoryWithLoginModule - Use the validation REST endpoint to validate a single javax.jms.ConnectionFactory that is configured with a jaasLoginContextEntryRef.
testJMSConnectionFactoryWithLoginModuleNotUsed - Use the validation REST endpoint to validate a single javax.jms.ConnectionFactory that is configured with a jaasLoginContextEntryRef, but isn't used because application authentication is used instead.
testJMSQueueConnectionFactoryWithLoginModule - Use the validation REST endpoint to validate a single javax.jms.QueueConnectionFactory that is configured with a jaasLoginContextEntryRef.
testJMSQueueConnectionFactoryWithLoginModuleNotUsed - Use the validation REST endpoint to validate a single javax.jms.QueueConnectionFactory
that is configured with a jaasLoginContextEntryRef, but isn't used because application authentication is used instead.
testJMSTopicConnectionFactoryWithLoginModule - Use the validation REST endpoint to validate a single javax.jms.TopicConnectionFactory that is configured with a jaasLoginContextEntryRef.
testJMSTopicConnectionFactoryWithLoginModuleNotUsed - Use the validation REST endpoint to validate a single javax.jms.TopicConnectionFactory that is configured with a jaasLoginContextEntryRef, but isn't used because application authentication is used instead.
_Tests added to dev/com.ibm.ws.rest.handler.validator_fat/fat/src/com/ibm/ws/rest/handler/validator/fat/ValidateJCATest.java_
testJaasLoginModuleForContainerAuthWithLoginProperties - Validate a connectionFactory with a container authentication resource reference with login properties, and verify that it uses the login module indicated by the jaasLoginContextEntryRef to log in, supplying it with the login properties.
testJaasLoginModuleForContainerAuthWithoutLoginProperties - Validate a connectionFactory with a container authentication resource reference, and verify that it uses the login module indicated by the jaasLoginContextEntryRef to log in.
_Tests added to in dev/com.ibm.ws.rest.handler.validator_fat/fat/src/com/ibm/ws/rest/handler/validator/fat/ValidateJCATest.java_
testMultipleConnectionFactories - additional connection factory added
**Part 2:**
Collectively as a team you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.
> Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
> 0 - No automated testing delivered
> 1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
> 2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths
> 3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
> 4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
> 5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
> Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
Confidence:
Comments: | test | feature test summary apsfound liberty support for custom login modules on jca and jms connection factories please complete the following feature test summary when you have completed all your testing this will be used as part of the fat complete review part describe the test strategy approach for this feature and describe how the approach verifies the functions delivered by this feature the description should include the positive and negative testing done whether all testing is automated what manual tests exist if any and where the tests are stored source control automated testing is expected for all features with manual testing considered an exception to the rule for any feature be aware that only fat tests not unit or bvt are executed in our cross platform testing to ensure cross platform testing ensure you have sufficient fat coverage to verify the feature if delivering tests outside of the standard liberty fat framework do the tests push the results into cognitive testing database if not consult with the csi team who can provide advice and verify if results are being received automated functional acceptance tests tests added to com ibm ws rest handler validator fat fat src com ibm ws rest handler validator fat validatedscustomloginmoduletest java testjmsconnectionfactorywithloginmodule use the validation rest endpoint to validate a single javax jms connectionfactory that is configured with a jaaslogincontextentryref testjmsconnectionfactorywithloginmodulenotused use the validation rest endpoint to validate a single javax jms connectionfactory that is configured with a jaaslogincontextentryref but isn t used because application authentication is used instead testjmsqueueconnectionfactorywithloginmodule use the validation rest endpoint to validate a single javax jms queueconnectionfactory that is configured with a jaaslogincontextentryref testjmsqueueconnectionfactorywithloginmodulenotused use the validation rest endpoint to validate a single javax jms queueconnectionfactory that is configured with a jaaslogincontextentryref but isn t used because application authentication is used instead testjmstopicconnectionfactorywithloginmodule use the validation rest endpoint to validate a single javax jms topicconnectionfactory that is configured with a jaaslogincontextentryref testjmstopicconnectionfactorywithloginmodulenotused use the validation rest endpoint to validate a single javax jms topicconnectionfactory that is configured with a jaaslogincontextentryref but isn t used because application authentication is used instead tests added to dev com ibm ws rest handler validator fat fat src com ibm ws rest handler validator fat validatejcatest java testjaasloginmoduleforcontainerauthwithloginproperties validate a connectionfactory with a container authentication resource reference with login properties and verify that it uses the login module indicated by the jaaslogincontextentryref to log in supplying it with the login properties testjaasloginmoduleforcontainerauthwithoutloginproperties validate a connectionfactory with a container authentication resource reference and verify that it uses the login module indicated by the jaaslogincontextentryref to log in tests added to in dev com ibm ws rest handler validator fat fat src com ibm ws rest handler validator fat validatejcatest java testmultipleconnectionfactories additional connection factory added part collectively as a team you need to assess your confidence in the testing delivered based on the values below this should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole please indicate your confidence in the testing up to and including fat delivered with this feature by selecting one of these values no automated testing delivered we have minimal automated coverage of the feature including golden paths there is a relatively high risk that defects or issues could be found in this feature we have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here error outlying scenarios are not really covered there are likely risks that issues may exist in the golden paths we have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error outlying scenarios there is a risk when the feature is used outside the golden paths however we are confident on the golden path note this may still be a valid end state for a feature things like beta features may well suffice at this level we have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error outlying scenarios while more testing of the error outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide we have delivered all automated testing we believe is needed for this feature the testing covers all golden path cases as well as all the error outlying scenarios that make sense we are not aware of any gaps in the testing at this time no manual testing is required to verify this feature based on your answer above for any answer other than a or please provide details of what drove your answer please be aware it may be perfectly reasonable in some scenarios to deliver with any value above we may accept no automated testing is needed for some features we may be happy with low levels of testing on samples for instance so please don t feel the need to drive to a we need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid what are the gaps what is the risk etc please also provide links to the follow on work that is needed to close the gaps should you deem it needed confidence comments | 1 |
218,729 | 17,018,285,313 | IssuesEvent | 2021-07-02 14:57:19 | Realm667/WolfenDoom | https://api.github.com/repos/Realm667/WolfenDoom | closed | Some aircrafts not animated anymore | actor playtesting | Using almost the latest git version (few days old) and gzdoom pre40 (and 46), in c1m3, some aircrafts aren't animated anymore.
These in the pics stay like this all the time and a few more come after some time that are animated.


| 1.0 | Some aircrafts not animated anymore - Using almost the latest git version (few days old) and gzdoom pre40 (and 46), in c1m3, some aircrafts aren't animated anymore.
These in the pics stay like this all the time and a few more come after some time that are animated.


| test | some aircrafts not animated anymore using almost the latest git version few days old and gzdoom and in some aircrafts aren t animated anymore these in the pics stay like this all the time and a few more come after some time that are animated | 1 |
156,838 | 12,339,493,322 | IssuesEvent | 2020-05-14 18:13:26 | netblue30/fdns | https://api.github.com/repos/netblue30/fdns | closed | support for multiple fdns instances | in testing | It would be greate if we can run multiple instances of fdns at the same time. Especially for #35 is this needed. | 1.0 | support for multiple fdns instances - It would be greate if we can run multiple instances of fdns at the same time. Especially for #35 is this needed. | test | support for multiple fdns instances it would be greate if we can run multiple instances of fdns at the same time especially for is this needed | 1 |
225,758 | 17,288,316,565 | IssuesEvent | 2021-07-24 06:55:57 | bbatsov/solarized-emacs | https://api.github.com/repos/bbatsov/solarized-emacs | closed | Document suggestet parenface/delimiter color settings | documentation solarized-config | Suggest to use parenface plus (or find a more generic mode to suggest).
Using the "comment" face for delimiters works well in many modes but particualy for lisp modes.
It's great combining it with something like `show-smartparens-global-mode` so that whats in focus is visible while the other things blend more into the background:
It works well for many other modes than lisp but not all..
Javascript is one instance where it instantly feels wrong. I'm guessing that it might be related to the combination of }})}))}})}} shit storms and the "required" use of semi colons which breaks up the conformity of parenface blocks.. (Writing this down right now gave me the idea to try this for js again while also adding the semi colon to the parenface list)
This is my parenface-plus config:
``` elisp
(progn
(defun paren-face-add-keyword-other ()
"Adds paren-face support to the mode."
(font-lock-add-keywords nil '(("\\[\\|\\]" . paren-face)))
(font-lock-add-keywords nil '(("{\\|}" . paren-face))))
(add-hook 'go-mode-hook 'paren-face-add-keyword)
(add-hook 'go-mode-hook 'paren-face-add-keyword-other)
(add-hook 'coffee-mode-hook 'paren-face-add-keyword)
(add-hook 'python-mode-hook 'paren-face-add-keyword)
(add-hook 'python-mode-hook 'paren-face-add-keyword-other)
(add-hook 'coffee-mode-hook 'paren-face-add-keyword)
(add-hook 'coffee-mode-hook 'paren-face-add-keyword-other))
```
## screenshots




| 1.0 | Document suggestet parenface/delimiter color settings - Suggest to use parenface plus (or find a more generic mode to suggest).
Using the "comment" face for delimiters works well in many modes but particualy for lisp modes.
It's great combining it with something like `show-smartparens-global-mode` so that whats in focus is visible while the other things blend more into the background:
It works well for many other modes than lisp but not all..
Javascript is one instance where it instantly feels wrong. I'm guessing that it might be related to the combination of }})}))}})}} shit storms and the "required" use of semi colons which breaks up the conformity of parenface blocks.. (Writing this down right now gave me the idea to try this for js again while also adding the semi colon to the parenface list)
This is my parenface-plus config:
``` elisp
(progn
(defun paren-face-add-keyword-other ()
"Adds paren-face support to the mode."
(font-lock-add-keywords nil '(("\\[\\|\\]" . paren-face)))
(font-lock-add-keywords nil '(("{\\|}" . paren-face))))
(add-hook 'go-mode-hook 'paren-face-add-keyword)
(add-hook 'go-mode-hook 'paren-face-add-keyword-other)
(add-hook 'coffee-mode-hook 'paren-face-add-keyword)
(add-hook 'python-mode-hook 'paren-face-add-keyword)
(add-hook 'python-mode-hook 'paren-face-add-keyword-other)
(add-hook 'coffee-mode-hook 'paren-face-add-keyword)
(add-hook 'coffee-mode-hook 'paren-face-add-keyword-other))
```
## screenshots




| non_test | document suggestet parenface delimiter color settings suggest to use parenface plus or find a more generic mode to suggest using the comment face for delimiters works well in many modes but particualy for lisp modes it s great combining it with something like show smartparens global mode so that whats in focus is visible while the other things blend more into the background it works well for many other modes than lisp but not all javascript is one instance where it instantly feels wrong i m guessing that it might be related to the combination of shit storms and the required use of semi colons which breaks up the conformity of parenface blocks writing this down right now gave me the idea to try this for js again while also adding the semi colon to the parenface list this is my parenface plus config elisp progn defun paren face add keyword other adds paren face support to the mode font lock add keywords nil paren face font lock add keywords nil paren face add hook go mode hook paren face add keyword add hook go mode hook paren face add keyword other add hook coffee mode hook paren face add keyword add hook python mode hook paren face add keyword add hook python mode hook paren face add keyword other add hook coffee mode hook paren face add keyword add hook coffee mode hook paren face add keyword other screenshots | 0 |
22,139 | 15,019,150,034 | IssuesEvent | 2021-02-01 13:11:10 | SDU-eScience/UCloud | https://api.github.com/repos/SDU-eScience/UCloud | closed | Improvements to Grafana Dashboard | Epic enhancement infrastructure | List of potential improvements (to be clarified later before we put this on the roadmap):
- Fix data loading issues
- Clean up/combine some of the data
- Improve response times, potentially, using caching of responses | 1.0 | Improvements to Grafana Dashboard - List of potential improvements (to be clarified later before we put this on the roadmap):
- Fix data loading issues
- Clean up/combine some of the data
- Improve response times, potentially, using caching of responses | non_test | improvements to grafana dashboard list of potential improvements to be clarified later before we put this on the roadmap fix data loading issues clean up combine some of the data improve response times potentially using caching of responses | 0 |
37,416 | 15,288,053,378 | IssuesEvent | 2021-02-23 16:26:13 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Catch-up release documentation - @mikecote | Meta Team:Alerting Services | Go over previous PRs that have been merged since Alerting GA to document anything that requires it.
List of PRs created by @mikecote requiring docs:
https://github.com/elastic/kibana/pulls?q=is%3Apr+label%3A%22Team%3AAlerting+Services%22+label%3Aneeds_docs+is%3Aclosed+author%3Amikecote | 1.0 | Catch-up release documentation - @mikecote - Go over previous PRs that have been merged since Alerting GA to document anything that requires it.
List of PRs created by @mikecote requiring docs:
https://github.com/elastic/kibana/pulls?q=is%3Apr+label%3A%22Team%3AAlerting+Services%22+label%3Aneeds_docs+is%3Aclosed+author%3Amikecote | non_test | catch up release documentation mikecote go over previous prs that have been merged since alerting ga to document anything that requires it list of prs created by mikecote requiring docs | 0 |
249,296 | 21,157,216,563 | IssuesEvent | 2022-04-07 05:31:35 | stores-cedcommerce/Internal-Diat-Food-Due-15th-March | https://api.github.com/repos/stores-cedcommerce/Internal-Diat-Food-Due-15th-March | closed | Quick view modal when we hover over the size dropdown then hand cursor is not coming. | Desktop Issue Inprogress Ready to test Fixed | **Actual result:**
Quick view modal when we hover over the size dropdown then hand cursor is not coming.

**Expected result:**
Quick view modal when we hover over the size dropdown then hand cursor have to come. | 1.0 | Quick view modal when we hover over the size dropdown then hand cursor is not coming. - **Actual result:**
Quick view modal when we hover over the size dropdown then hand cursor is not coming.

**Expected result:**
Quick view modal when we hover over the size dropdown then hand cursor have to come. | test | quick view modal when we hover over the size dropdown then hand cursor is not coming actual result quick view modal when we hover over the size dropdown then hand cursor is not coming expected result quick view modal when we hover over the size dropdown then hand cursor have to come | 1 |
345,418 | 30,809,726,121 | IssuesEvent | 2023-08-01 09:34:46 | ubtue/DatenProbleme | https://api.github.com/repos/ubtue/DatenProbleme | closed | ISSN 0718-9273 | Veritas (SciELO) | falscher Titel (Sprache) | ready for testing Zotero_SEMI-AUTO | #### URL
Heft: https://www.scielo.cl/scielo.php?script=sci_issuetoc&pid=0718-927320220003&lng=en&nrm=iso
Artikel: http://dx.doi.org/10.4067/S0718-92732022000300115
#### Import-Translator
Mehrfachimport:
ubtue_SciELO.js
### Problembeschreibung
Beim Mehrfach-Import werden oft die engl. Titel importiert statt der span.

Beim Einzel-Import wird es korrekt übernommen.

| 1.0 | ISSN 0718-9273 | Veritas (SciELO) | falscher Titel (Sprache) - #### URL
Heft: https://www.scielo.cl/scielo.php?script=sci_issuetoc&pid=0718-927320220003&lng=en&nrm=iso
Artikel: http://dx.doi.org/10.4067/S0718-92732022000300115
#### Import-Translator
Mehrfachimport:
ubtue_SciELO.js
### Problembeschreibung
Beim Mehrfach-Import werden oft die engl. Titel importiert statt der span.

Beim Einzel-Import wird es korrekt übernommen.

| test | issn veritas scielo falscher titel sprache url heft artikel import translator mehrfachimport ubtue scielo js problembeschreibung beim mehrfach import werden oft die engl titel importiert statt der span beim einzel import wird es korrekt übernommen | 1 |
618,726 | 19,485,749,334 | IssuesEvent | 2021-12-26 10:33:50 | bluelotus03/Save-the-Holiday | https://api.github.com/repos/bluelotus03/Save-the-Holiday | closed | Website Design | high-priority | **Page checklist:**
- [x] Home
- [x] Play
- [x] About
- [x] Credits
**Work on the styling with the following notes:**
- [x] Navbar background can be dark blue
- [x] Navbar sticky
- [x] Navbar tab text can be white
- [x] Navbar selected tab text can be light blue
- [x] Copyright section background can by white or blue (can test these) & text be opposite color
- [x] Copyright text - move to center
- [x] Cover photo for About & Credits pages is background image from game
- [x] Remove underline from links
- [x] About page built with buttons
- [x] Home page background color
- [x] Play screen "Exit" next to Back Arrow

**Reference for cover photo/page title:**

**Reference for width of content (just centered):**

| 1.0 | Website Design - **Page checklist:**
- [x] Home
- [x] Play
- [x] About
- [x] Credits
**Work on the styling with the following notes:**
- [x] Navbar background can be dark blue
- [x] Navbar sticky
- [x] Navbar tab text can be white
- [x] Navbar selected tab text can be light blue
- [x] Copyright section background can by white or blue (can test these) & text be opposite color
- [x] Copyright text - move to center
- [x] Cover photo for About & Credits pages is background image from game
- [x] Remove underline from links
- [x] About page built with buttons
- [x] Home page background color
- [x] Play screen "Exit" next to Back Arrow

**Reference for cover photo/page title:**

**Reference for width of content (just centered):**

| non_test | website design page checklist home play about credits work on the styling with the following notes navbar background can be dark blue navbar sticky navbar tab text can be white navbar selected tab text can be light blue copyright section background can by white or blue can test these text be opposite color copyright text move to center cover photo for about credits pages is background image from game remove underline from links about page built with buttons home page background color play screen exit next to back arrow reference for cover photo page title reference for width of content just centered | 0 |
63,244 | 6,830,489,374 | IssuesEvent | 2017-11-09 07:02:18 | hypermodules/hyperamp | https://api.github.com/repos/hypermodules/hyperamp | closed | Fix calling t.end() twice. | testing | We don't remove event emitter listeners when we end the test or something in the artwork cache which causes inconsistent test failures when t.end gets called twice.
https://github.com/hypermodules/hyperamp/blob/master/main/lib/artwork-cache/test.js | 1.0 | Fix calling t.end() twice. - We don't remove event emitter listeners when we end the test or something in the artwork cache which causes inconsistent test failures when t.end gets called twice.
https://github.com/hypermodules/hyperamp/blob/master/main/lib/artwork-cache/test.js | test | fix calling t end twice we don t remove event emitter listeners when we end the test or something in the artwork cache which causes inconsistent test failures when t end gets called twice | 1 |
255,499 | 21,929,998,867 | IssuesEvent | 2022-05-23 08:54:55 | xmos/sw_avona | https://api.github.com/repos/xmos/sw_avona | opened | Test Aaron's suggestions regarding bfp division | size:SM type:testing | Aaron gave me some tips on how to implement long division without consuming many cycles. | 1.0 | Test Aaron's suggestions regarding bfp division - Aaron gave me some tips on how to implement long division without consuming many cycles. | test | test aaron s suggestions regarding bfp division aaron gave me some tips on how to implement long division without consuming many cycles | 1 |
25,003 | 4,120,453,450 | IssuesEvent | 2016-06-08 17:58:30 | fossasia/open-event-orga-server | https://api.github.com/repos/fossasia/open-event-orga-server | closed | Fix POST API testing of event and session model by using custom DateTime field | Rest-API testing | Both Event and session model includes a datetime field which currently can't be tested because it is not possible to write datetime fields as string from SQLAlchemy in a SQLite database (used in Testing).
However, this issue can be fixed by a using a custom field for DateTime that returns the DateTime object on POST requests. Using a custom DateTime field will also give more control over the datetime 'format'.
I am currently think about 'YYYY-MM-DD HH:MM:SS' as the DateTime format to be used in the API .
@niranjan94 @SaptakS Is the format ok w.r.t UI ?
EDIT - Just found out restplus doesn't actually validate the (inbuilt) DateTime fields. It lets it go and psycopg throws the exception. (https://bpaste.net/show/e332f0095399)
| 1.0 | Fix POST API testing of event and session model by using custom DateTime field - Both Event and session model includes a datetime field which currently can't be tested because it is not possible to write datetime fields as string from SQLAlchemy in a SQLite database (used in Testing).
However, this issue can be fixed by a using a custom field for DateTime that returns the DateTime object on POST requests. Using a custom DateTime field will also give more control over the datetime 'format'.
I am currently think about 'YYYY-MM-DD HH:MM:SS' as the DateTime format to be used in the API .
@niranjan94 @SaptakS Is the format ok w.r.t UI ?
EDIT - Just found out restplus doesn't actually validate the (inbuilt) DateTime fields. It lets it go and psycopg throws the exception. (https://bpaste.net/show/e332f0095399)
| test | fix post api testing of event and session model by using custom datetime field both event and session model includes a datetime field which currently can t be tested because it is not possible to write datetime fields as string from sqlalchemy in a sqlite database used in testing however this issue can be fixed by a using a custom field for datetime that returns the datetime object on post requests using a custom datetime field will also give more control over the datetime format i am currently think about yyyy mm dd hh mm ss as the datetime format to be used in the api saptaks is the format ok w r t ui edit just found out restplus doesn t actually validate the inbuilt datetime fields it lets it go and psycopg throws the exception | 1 |
134,947 | 10,949,359,136 | IssuesEvent | 2019-11-26 10:41:31 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | opened | There are strings haven't been translated on 'Reset' dialog | 🌐 localization 🧪 testing | **Storage Explorer Version:** 1.11.0
**Build:** [20191126.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3273921)
**Branch:** master
**Language:** Chinese(zh-CN) / Chinese(zh-TW)
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Chinese (simplified)' -> Restart Storage Explorer.
3. Help -> Click 'Reset'.
4. Check the dialog.
**Expect Experience:**
All strings are translated.
**Actual Experience:**
There are strings haven't been translated.

| 1.0 | There are strings haven't been translated on 'Reset' dialog - **Storage Explorer Version:** 1.11.0
**Build:** [20191126.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3273921)
**Branch:** master
**Language:** Chinese(zh-CN) / Chinese(zh-TW)
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Chinese (simplified)' -> Restart Storage Explorer.
3. Help -> Click 'Reset'.
4. Check the dialog.
**Expect Experience:**
All strings are translated.
**Actual Experience:**
There are strings haven't been translated.

| test | there are strings haven t been translated on reset dialog storage explorer version build branch master language chinese zh cn chinese zh tw platform os windows linux ubuntu macos high sierra architecture regression from not a regression steps to reproduce launch storage explorer open settings application regional settings select chinese simplified restart storage explorer help click reset check the dialog expect experience all strings are translated actual experience there are strings haven t been translated | 1 |
761,462 | 26,682,079,987 | IssuesEvent | 2023-01-26 18:32:35 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Feature Request - Add link for acknowledgment of harmful content to manage collection | Priority-High (Needed for work) Function-ObjectRecord Enhancement Display/Interface Priority - Wildfire Potential | Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Is your feature request related to a problem? Please describe.**
As part of the new interface development, a link to the new Arctos Acknowledgment of Harmful Content (under review) will be added to the site-wide footer for arctos.database.museum. In addition a link will be made available in the site-wide main menu under the Help options.
**Describe what you're trying to accomplish**
Some collections may want the acknowledgement to be more prominent OR to provide one of their own.
**Describe the solution you'd like**
Add link and link text fields for content warning. The link text will appear in all of the collection's record headers along with license, terms, etc.
**Describe alternatives you've considered**
Creating pop-ups
**Additional context**
Add any other context or screenshots about the feature request here.
**Priority**
Please assign a priority-label. Unprioritized issues gets sent into a black hole of despair.
| 2.0 | Feature Request - Add link for acknowledgment of harmful content to manage collection - Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Is your feature request related to a problem? Please describe.**
As part of the new interface development, a link to the new Arctos Acknowledgment of Harmful Content (under review) will be added to the site-wide footer for arctos.database.museum. In addition a link will be made available in the site-wide main menu under the Help options.
**Describe what you're trying to accomplish**
Some collections may want the acknowledgement to be more prominent OR to provide one of their own.
**Describe the solution you'd like**
Add link and link text fields for content warning. The link text will appear in all of the collection's record headers along with license, terms, etc.
**Describe alternatives you've considered**
Creating pop-ups
**Additional context**
Add any other context or screenshots about the feature request here.
**Priority**
Please assign a priority-label. Unprioritized issues gets sent into a black hole of despair.
| non_test | feature request add link for acknowledgment of harmful content to manage collection issue documentation is is your feature request related to a problem please describe as part of the new interface development a link to the new arctos acknowledgment of harmful content under review will be added to the site wide footer for arctos database museum in addition a link will be made available in the site wide main menu under the help options describe what you re trying to accomplish some collections may want the acknowledgement to be more prominent or to provide one of their own describe the solution you d like add link and link text fields for content warning the link text will appear in all of the collection s record headers along with license terms etc describe alternatives you ve considered creating pop ups additional context add any other context or screenshots about the feature request here priority please assign a priority label unprioritized issues gets sent into a black hole of despair | 0 |
272,209 | 23,661,290,451 | IssuesEvent | 2022-08-26 15:49:15 | meveo-org/meveo | https://api.github.com/repos/meveo-org/meveo | closed | Git repo doesn't pull from new branch | bug test | **Describe the bug**
When we try to change to a different branch and then try to pull from the new branch, it continues to pull from the old branch.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new repository linked to a github repository
2. Save the repository.
3. Open the repository and change the branch field on the form and save
4. Try to pull from the new branch.
5. It says the branch was successfully pulled but it actually pulled from the old branch.
**Expected behavior**
It should pull from the new branch.
| 1.0 | Git repo doesn't pull from new branch - **Describe the bug**
When we try to change to a different branch and then try to pull from the new branch, it continues to pull from the old branch.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new repository linked to a github repository
2. Save the repository.
3. Open the repository and change the branch field on the form and save
4. Try to pull from the new branch.
5. It says the branch was successfully pulled but it actually pulled from the old branch.
**Expected behavior**
It should pull from the new branch.
| test | git repo doesn t pull from new branch describe the bug when we try to change to a different branch and then try to pull from the new branch it continues to pull from the old branch to reproduce steps to reproduce the behavior create a new repository linked to a github repository save the repository open the repository and change the branch field on the form and save try to pull from the new branch it says the branch was successfully pulled but it actually pulled from the old branch expected behavior it should pull from the new branch | 1 |
193,278 | 14,645,657,815 | IssuesEvent | 2020-12-26 09:16:41 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | cyucelen/wirect: database/sniffer_test.go; 3 LoC | fresh test tiny |
Found a possible issue in [cyucelen/wirect](https://www.github.com/cyucelen/wirect) at [database/sniffer_test.go](https://github.com/cyucelen/wirect/blob/9e5c185561426e0c617ba003b840f7298d84399e/database/sniffer_test.go#L26-L28)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call at line 27 may store a reference to sniffer
[Click here to see the code in its original context.](https://github.com/cyucelen/wirect/blob/9e5c185561426e0c617ba003b840f7298d84399e/database/sniffer_test.go#L26-L28)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, sniffer := range sniffers {
s.db.CreateSniffer(&sniffer)
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
The following dot graph describes paths through the callgraph that could lead to a function which writes a pointer argument:
no paths found; call may have ended in third-party code; stay tuned for diagnostics
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 9e5c185561426e0c617ba003b840f7298d84399e
| 1.0 | cyucelen/wirect: database/sniffer_test.go; 3 LoC -
Found a possible issue in [cyucelen/wirect](https://www.github.com/cyucelen/wirect) at [database/sniffer_test.go](https://github.com/cyucelen/wirect/blob/9e5c185561426e0c617ba003b840f7298d84399e/database/sniffer_test.go#L26-L28)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call at line 27 may store a reference to sniffer
[Click here to see the code in its original context.](https://github.com/cyucelen/wirect/blob/9e5c185561426e0c617ba003b840f7298d84399e/database/sniffer_test.go#L26-L28)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, sniffer := range sniffers {
s.db.CreateSniffer(&sniffer)
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
The following dot graph describes paths through the callgraph that could lead to a function which writes a pointer argument:
no paths found; call may have ended in third-party code; stay tuned for diagnostics
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 9e5c185561426e0c617ba003b840f7298d84399e
| test | cyucelen wirect database sniffer test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call at line may store a reference to sniffer click here to show the line s of go which triggered the analyzer go for sniffer range sniffers s db createsniffer sniffer click here to show extra information the analyzer produced the following dot graph describes paths through the callgraph that could lead to a function which writes a pointer argument no paths found call may have ended in third party code stay tuned for diagnostics leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
5,746 | 2,792,418,686 | IssuesEvent | 2015-05-11 00:02:13 | mozilla/webmaker-android | https://api.github.com/repos/mozilla/webmaker-android | opened | When editing the Text Element copy, the device keyboard isn't shown by default. | Testing Feedback | You have to tap the field on the Edit Text prompt to bring up the keyboard. We should show the keyboard by default, since that's the main goal of the view. | 1.0 | When editing the Text Element copy, the device keyboard isn't shown by default. - You have to tap the field on the Edit Text prompt to bring up the keyboard. We should show the keyboard by default, since that's the main goal of the view. | test | when editing the text element copy the device keyboard isn t shown by default you have to tap the field on the edit text prompt to bring up the keyboard we should show the keyboard by default since that s the main goal of the view | 1 |
691,899 | 23,715,635,312 | IssuesEvent | 2022-08-30 11:32:32 | google/flax | https://api.github.com/repos/google/flax | closed | Slow training on TPU | Priority: P3 - no schedule | Does following this https://flax.readthedocs.io/en/latest/howtos/ensembling.html train the model on a GPU/TPU if you are connected to one on Google Colab?
I have these functions as per the docs above but the training seems to be very just slow just like in CPU
```
jax.devices()
# [TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0),
# ...
# TpuDevice(id=7, process_index=0, coords=(1,1,0), core_on_chip=1)]
@functools.partial(jax.pmap, axis_name='ensemble')
def apply_model(state, images, labels):
def loss_fn(params):
logits = CNN().apply({'params': params}, images)
one_hot = jax.nn.one_hot(labels, 2)
loss = optax.sigmoid_binary_cross_entropy(logits=logits, labels=one_hot).mean()
return loss, logits
grad_fn = jax.value_and_grad(loss_fn, has_aux=True)
(loss, logits), grads = grad_fn(state.params)
probs = jax.lax.pmean(jax.nn.softmax(logits), axis_name='ensemble')
accuracy = jnp.mean(jnp.argmax(probs, -1) == labels)
return grads,loss, accuracy
@jax.pmap
def update_model(state, grads):
return state.apply_gradients(grads=grads)
@functools.partial(jax.pmap, static_broadcasted_argnums=(1, 2))
def create_train_state(rng, learning_rate, momentum):
"""Creates initial `TrainState`."""
cnn = CNN()
params = cnn.init(rng, jnp.ones([1, size_image, size_image, 3]))['params']
tx = optax.sgd(learning_rate, momentum)
return train_state.TrainState.create(
apply_fn=cnn.apply, params=params, tx=tx)
def train_one_epoch(state, dataloader):
"""Train for 1 epoch on the training set."""
epoch_loss = []
epoch_accuracy = []
for cnt, (images, labels) in enumerate(dataloader):
images = images / 255.0
images = jax_utils.replicate(images)
labels = jax_utils.replicate(labels)
grads, loss, accuracy = apply_model(state, images, labels)
state = update_model(state, grads)
epoch_loss.append(jax_utils.unreplicate(loss))
epoch_accuracy.append(jax_utils.unreplicate(accuracy))
train_loss = np.mean(epoch_loss)
train_accuracy = np.mean(epoch_accuracy)
return state, train_loss, train_accuracy
for epoch in range(1, num_epochs + 1):
state, train_loss, train_accuracy = train_one_epoch(state, train_loader)
training_loss.append(train_loss)
training_accuracy.append(train_accuracy)
print(f"Train epoch: {epoch}, loss: {train_loss}, accuracy: {train_accuracy * 100}")
_, test_loss, test_accuracy = jax_utils.unreplicate(apply_model(state, test_images, test_labels))
testing_accuracy.append(test_accuracy)
testing_loss.append(test_loss)
print(f"Test epoch: {epoch}, loss: {test_loss}, accuracy: {test_accuracy* 100}")
````
What am I missing? | 1.0 | Slow training on TPU - Does following this https://flax.readthedocs.io/en/latest/howtos/ensembling.html train the model on a GPU/TPU if you are connected to one on Google Colab?
I have these functions as per the docs above but the training seems to be very just slow just like in CPU
```
jax.devices()
# [TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0),
# ...
# TpuDevice(id=7, process_index=0, coords=(1,1,0), core_on_chip=1)]
@functools.partial(jax.pmap, axis_name='ensemble')
def apply_model(state, images, labels):
def loss_fn(params):
logits = CNN().apply({'params': params}, images)
one_hot = jax.nn.one_hot(labels, 2)
loss = optax.sigmoid_binary_cross_entropy(logits=logits, labels=one_hot).mean()
return loss, logits
grad_fn = jax.value_and_grad(loss_fn, has_aux=True)
(loss, logits), grads = grad_fn(state.params)
probs = jax.lax.pmean(jax.nn.softmax(logits), axis_name='ensemble')
accuracy = jnp.mean(jnp.argmax(probs, -1) == labels)
return grads,loss, accuracy
@jax.pmap
def update_model(state, grads):
return state.apply_gradients(grads=grads)
@functools.partial(jax.pmap, static_broadcasted_argnums=(1, 2))
def create_train_state(rng, learning_rate, momentum):
"""Creates initial `TrainState`."""
cnn = CNN()
params = cnn.init(rng, jnp.ones([1, size_image, size_image, 3]))['params']
tx = optax.sgd(learning_rate, momentum)
return train_state.TrainState.create(
apply_fn=cnn.apply, params=params, tx=tx)
def train_one_epoch(state, dataloader):
"""Train for 1 epoch on the training set."""
epoch_loss = []
epoch_accuracy = []
for cnt, (images, labels) in enumerate(dataloader):
images = images / 255.0
images = jax_utils.replicate(images)
labels = jax_utils.replicate(labels)
grads, loss, accuracy = apply_model(state, images, labels)
state = update_model(state, grads)
epoch_loss.append(jax_utils.unreplicate(loss))
epoch_accuracy.append(jax_utils.unreplicate(accuracy))
train_loss = np.mean(epoch_loss)
train_accuracy = np.mean(epoch_accuracy)
return state, train_loss, train_accuracy
for epoch in range(1, num_epochs + 1):
state, train_loss, train_accuracy = train_one_epoch(state, train_loader)
training_loss.append(train_loss)
training_accuracy.append(train_accuracy)
print(f"Train epoch: {epoch}, loss: {train_loss}, accuracy: {train_accuracy * 100}")
_, test_loss, test_accuracy = jax_utils.unreplicate(apply_model(state, test_images, test_labels))
testing_accuracy.append(test_accuracy)
testing_loss.append(test_loss)
print(f"Test epoch: {epoch}, loss: {test_loss}, accuracy: {test_accuracy* 100}")
````
What am I missing? | non_test | slow training on tpu does following this train the model on a gpu tpu if you are connected to one on google colab i have these functions as per the docs above but the training seems to be very just slow just like in cpu jax devices tpudevice id process index coords core on chip tpudevice id process index coords core on chip functools partial jax pmap axis name ensemble def apply model state images labels def loss fn params logits cnn apply params params images one hot jax nn one hot labels loss optax sigmoid binary cross entropy logits logits labels one hot mean return loss logits grad fn jax value and grad loss fn has aux true loss logits grads grad fn state params probs jax lax pmean jax nn softmax logits axis name ensemble accuracy jnp mean jnp argmax probs labels return grads loss accuracy jax pmap def update model state grads return state apply gradients grads grads functools partial jax pmap static broadcasted argnums def create train state rng learning rate momentum creates initial trainstate cnn cnn params cnn init rng jnp ones tx optax sgd learning rate momentum return train state trainstate create apply fn cnn apply params params tx tx def train one epoch state dataloader train for epoch on the training set epoch loss epoch accuracy for cnt images labels in enumerate dataloader images images images jax utils replicate images labels jax utils replicate labels grads loss accuracy apply model state images labels state update model state grads epoch loss append jax utils unreplicate loss epoch accuracy append jax utils unreplicate accuracy train loss np mean epoch loss train accuracy np mean epoch accuracy return state train loss train accuracy for epoch in range num epochs state train loss train accuracy train one epoch state train loader training loss append train loss training accuracy append train accuracy print f train epoch epoch loss train loss accuracy train accuracy test loss test accuracy jax utils unreplicate apply model state test images test labels testing accuracy append test accuracy testing loss append test loss print f test epoch epoch loss test loss accuracy test accuracy what am i missing | 0 |
19,000 | 3,737,131,997 | IssuesEvent | 2016-03-08 18:12:21 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Bug529989_RenameCSharpIdentifierToInvalidVBIdentifier failing | Area-IDE Flaky Test | http://dotnet-ci.cloudapp.net/job/roslyn_future_win_dbg_unit32/lastCompletedBuild/testReport/Microsoft.CodeAnalysis.Editor.UnitTests.Rename/RenameEngineTests+VisualBasicConflicts/Bug529989_RenameCSharpIdentifierToInvalidVBIdentifier/
```
Stacktrace
MESSAGE:
Assert.Equal() Failure\r\n ↓ (pos 0)\r\nExpected: B\u0061r\r\nActual: ProgramCS\r\n ↑ (pos 0)
+++++++++++++++++++
STACK TRACE:
at Microsoft.CodeAnalysis.Editor.UnitTests.Rename.RenameEngineResult.AssertLocationReplacedWith(Location location, String replacementText, Boolean isRenameWithinStringOrComment) in d:\j\workspace\roslyn_future---ea06c229\src\EditorFeatures\Test2\Rename\RenameEngineResult.vb:line 203 at Microsoft.CodeAnalysis.Editor.UnitTests.Rename.RenameEngineResult.AssertLabeledSpansAre(String label, String replacement, Nullable`1 type, Boolean isRenameWithinStringOrComment) in d:\j\workspace\roslyn_future---ea06c229\src\EditorFeatures\Test2\Rename\RenameEngineResult.vb:line 134 at Microsoft.CodeAnalysis.Editor.UnitTests.Rename.RenameEngineTests.VisualBasicConflicts.Bug529989_RenameCSharpIdentifierToInvalidVBIdentifier() in d:\j\workspace\roslyn_future---ea06c229\src\EditorFeatures\Test2\Rename\RenameEngineTests.VisualBasicConflicts.vb:line 2333
``` | 1.0 | Bug529989_RenameCSharpIdentifierToInvalidVBIdentifier failing - http://dotnet-ci.cloudapp.net/job/roslyn_future_win_dbg_unit32/lastCompletedBuild/testReport/Microsoft.CodeAnalysis.Editor.UnitTests.Rename/RenameEngineTests+VisualBasicConflicts/Bug529989_RenameCSharpIdentifierToInvalidVBIdentifier/
```
Stacktrace
MESSAGE:
Assert.Equal() Failure\r\n ↓ (pos 0)\r\nExpected: B\u0061r\r\nActual: ProgramCS\r\n ↑ (pos 0)
+++++++++++++++++++
STACK TRACE:
at Microsoft.CodeAnalysis.Editor.UnitTests.Rename.RenameEngineResult.AssertLocationReplacedWith(Location location, String replacementText, Boolean isRenameWithinStringOrComment) in d:\j\workspace\roslyn_future---ea06c229\src\EditorFeatures\Test2\Rename\RenameEngineResult.vb:line 203 at Microsoft.CodeAnalysis.Editor.UnitTests.Rename.RenameEngineResult.AssertLabeledSpansAre(String label, String replacement, Nullable`1 type, Boolean isRenameWithinStringOrComment) in d:\j\workspace\roslyn_future---ea06c229\src\EditorFeatures\Test2\Rename\RenameEngineResult.vb:line 134 at Microsoft.CodeAnalysis.Editor.UnitTests.Rename.RenameEngineTests.VisualBasicConflicts.Bug529989_RenameCSharpIdentifierToInvalidVBIdentifier() in d:\j\workspace\roslyn_future---ea06c229\src\EditorFeatures\Test2\Rename\RenameEngineTests.VisualBasicConflicts.vb:line 2333
``` | test | renamecsharpidentifiertoinvalidvbidentifier failing stacktrace message assert equal failure r n ↓ pos r nexpected b r nactual programcs r n ↑ pos stack trace at microsoft codeanalysis editor unittests rename renameengineresult assertlocationreplacedwith location location string replacementtext boolean isrenamewithinstringorcomment in d j workspace roslyn future src editorfeatures rename renameengineresult vb line at microsoft codeanalysis editor unittests rename renameengineresult assertlabeledspansare string label string replacement nullable type boolean isrenamewithinstringorcomment in d j workspace roslyn future src editorfeatures rename renameengineresult vb line at microsoft codeanalysis editor unittests rename renameenginetests visualbasicconflicts renamecsharpidentifiertoinvalidvbidentifier in d j workspace roslyn future src editorfeatures rename renameenginetests visualbasicconflicts vb line | 1 |
230,480 | 17,618,543,908 | IssuesEvent | 2021-08-18 12:51:37 | FreddieBrown/blockchat | https://api.github.com/repos/FreddieBrown/blockchat | reopened | Add more extensive tests and documentation | documentation enhancement | Currently there are a number of tests, but this number needs to be increased. There is a lot of behaviour which isn't well documented and tested so this should be improved. | 1.0 | Add more extensive tests and documentation - Currently there are a number of tests, but this number needs to be increased. There is a lot of behaviour which isn't well documented and tested so this should be improved. | non_test | add more extensive tests and documentation currently there are a number of tests but this number needs to be increased there is a lot of behaviour which isn t well documented and tested so this should be improved | 0 |
175,307 | 13,546,014,160 | IssuesEvent | 2020-09-17 00:05:36 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | ccl/backupccl: TestBackupRestoreAppend failed | C-test-failure O-robot branch-release-20.2 | [(ccl/backupccl).TestBackupRestoreAppend failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2254874&tab=buildLog) on [release-20.2@d9ff57ed3d5891ab93b7756be657ba868ecc87a6](https://github.com/cockroachdb/cockroach/commits/d9ff57ed3d5891ab93b7756be657ba868ecc87a6):
```
=== RUN TestBackupRestoreAppend
test logs left over in: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestBackupRestoreAppend510221769
--- FAIL: TestBackupRestoreAppend (54.05s)
test_log_scope.go:154: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestBackupRestoreAppend510221769
test_log_scope.go:63: use -show-logs to present logs inline
backup_test.go:543: error executing 'RESTORE DATABASE data FROM $4 IN ($1, $2, $3) AS OF SYSTEM TIME 1599628478495482195.0000000000': pq: validating table descriptor has not changed: unexpected value: raw_bytes:"(\260\212\235\003\n\364\002\n\004bank\030; :(\002:\n\010\306\234\235\312\356\263\301\231\026B\036\n\002id\020\001\032\014\010\001\020@\030\0000\000P\024`\000 \0000\000h\000p\000x\000B#\n\007balance\020\002\032\014\010\001\020@\030\0000\000P\024`\000 \0010\000h\000p\000x\000B#\n\007payload\020\003\032\014\010\007\020\000\030\0000\000P\031`\000 \0010\000h\000p\000x\000H\004RK\n\007primary\020\001\030\001\"\002id0\001@\000J\020\010\000\020\000\032\000 \000(\0000\0008\000@\000Z\000z\002\010\000\200\001\000\210\001\000\220\001\001\230\001\000\242\001\006\010\000\022\000\030\000\250\001\000\262\001\000\272\001\000`\002j\036\n\t\n\005admin\020\002\n\010\n\004root\020\002\022\005admin\030\001\200\001\001\210\001\003\230\001\003\262\001:\n\030fam_0_id_balance_payload\020\000\032\002id\032\007balance\032\007payload \001 \002 \003(\000\270\001\001\302\001\000\350\001\000\362\001\004\010\000\022\000\370\001\000\200\002\000\222\002\000\232\002\n\010\306\234\235\312\356\263\301\231\026\262\002\trestoring\270\002\000\300\002\035\310\002\000" timestamp:<wall_time:1599628509532278935 >
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestBackupRestoreAppend PKG=./pkg/ccl/backupccl TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Related:
- #54039 ccl/backupccl: TestBackupRestoreAppend failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestBackupRestoreAppend.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 1.0 | ccl/backupccl: TestBackupRestoreAppend failed - [(ccl/backupccl).TestBackupRestoreAppend failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2254874&tab=buildLog) on [release-20.2@d9ff57ed3d5891ab93b7756be657ba868ecc87a6](https://github.com/cockroachdb/cockroach/commits/d9ff57ed3d5891ab93b7756be657ba868ecc87a6):
```
=== RUN TestBackupRestoreAppend
test logs left over in: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestBackupRestoreAppend510221769
--- FAIL: TestBackupRestoreAppend (54.05s)
test_log_scope.go:154: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestBackupRestoreAppend510221769
test_log_scope.go:63: use -show-logs to present logs inline
backup_test.go:543: error executing 'RESTORE DATABASE data FROM $4 IN ($1, $2, $3) AS OF SYSTEM TIME 1599628478495482195.0000000000': pq: validating table descriptor has not changed: unexpected value: raw_bytes:"(\260\212\235\003\n\364\002\n\004bank\030; :(\002:\n\010\306\234\235\312\356\263\301\231\026B\036\n\002id\020\001\032\014\010\001\020@\030\0000\000P\024`\000 \0000\000h\000p\000x\000B#\n\007balance\020\002\032\014\010\001\020@\030\0000\000P\024`\000 \0010\000h\000p\000x\000B#\n\007payload\020\003\032\014\010\007\020\000\030\0000\000P\031`\000 \0010\000h\000p\000x\000H\004RK\n\007primary\020\001\030\001\"\002id0\001@\000J\020\010\000\020\000\032\000 \000(\0000\0008\000@\000Z\000z\002\010\000\200\001\000\210\001\000\220\001\001\230\001\000\242\001\006\010\000\022\000\030\000\250\001\000\262\001\000\272\001\000`\002j\036\n\t\n\005admin\020\002\n\010\n\004root\020\002\022\005admin\030\001\200\001\001\210\001\003\230\001\003\262\001:\n\030fam_0_id_balance_payload\020\000\032\002id\032\007balance\032\007payload \001 \002 \003(\000\270\001\001\302\001\000\350\001\000\362\001\004\010\000\022\000\370\001\000\200\002\000\222\002\000\232\002\n\010\306\234\235\312\356\263\301\231\026\262\002\trestoring\270\002\000\300\002\035\310\002\000" timestamp:<wall_time:1599628509532278935 >
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestBackupRestoreAppend PKG=./pkg/ccl/backupccl TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Related:
- #54039 ccl/backupccl: TestBackupRestoreAppend failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestBackupRestoreAppend.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | ccl backupccl testbackuprestoreappend failed on run testbackuprestoreappend test logs left over in go src github com cockroachdb cockroach artifacts fail testbackuprestoreappend test log scope go test logs captured to go src github com cockroachdb cockroach artifacts test log scope go use show logs to present logs inline backup test go error executing restore database data from in as of system time pq validating table descriptor has not changed unexpected value raw bytes n n n n n n n n t n n n n id balance payload n trestoring timestamp more parameters goflags json make stressrace tests testbackuprestoreappend pkg pkg ccl backupccl testtimeout stressflags timeout related ccl backupccl testbackuprestoreappend failed powered by | 1 |
349,611 | 31,815,787,334 | IssuesEvent | 2023-09-13 20:20:44 | pops64/Courseplay_FS22 | https://api.github.com/repos/pops64/Courseplay_FS22 | closed | [BUG_SP] Two unloaders can be called at once | bug Fixed pending testing | It currently relies on the nearest trailer it is aiming at for fill level. This could cause two unloaders to be called if there are 2 available and the current one gets full. Specifically the switch from chase mode to drive beside as calling next is prohibited in chase mode
| 1.0 | [BUG_SP] Two unloaders can be called at once - It currently relies on the nearest trailer it is aiming at for fill level. This could cause two unloaders to be called if there are 2 available and the current one gets full. Specifically the switch from chase mode to drive beside as calling next is prohibited in chase mode
| test | two unloaders can be called at once it currently relies on the nearest trailer it is aiming at for fill level this could cause two unloaders to be called if there are available and the current one gets full specifically the switch from chase mode to drive beside as calling next is prohibited in chase mode | 1 |
155,788 | 13,633,041,677 | IssuesEvent | 2020-09-24 20:41:51 | golang/go | https://api.github.com/repos/golang/go | closed | spec: a small imperfection in spec | Documentation NeedsFix | There is one line in spec:
```
x == y+1 && <-chanPtr > 0
```
The name `chanPtr` looks not very nature. Is `chanInt` better? | 1.0 | spec: a small imperfection in spec - There is one line in spec:
```
x == y+1 && <-chanPtr > 0
```
The name `chanPtr` looks not very nature. Is `chanInt` better? | non_test | spec a small imperfection in spec there is one line in spec x y the name chanptr looks not very nature is chanint better | 0 |
327,766 | 28,081,802,785 | IssuesEvent | 2023-03-30 07:02:22 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix general.test_set_min_base | Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4521700112/jobs/7963588588" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4531737044/jobs/7982254556" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4542706733/jobs/8006468701" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4545898018/jobs/8013844883" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
| 1.0 | Fix general.test_set_min_base - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4521700112/jobs/7963588588" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4531737044/jobs/7982254556" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4542706733/jobs/8006468701" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4545898018/jobs/8013844883" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
| test | fix general test set min base tensorflow img src torch img src numpy img src jax img src not found not found | 1 |
5,194 | 2,764,944,268 | IssuesEvent | 2015-04-29 18:05:04 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | Test failure in CI build 1909 | test-failure | The following test appears to have failed:
[#1909](https://circleci.com/gh/cockroachdb/cockroach/1909):
```
I0429 17:58:20.808996 259 multiraft.go:693] HardState updated: {Term:6 Vote:4294967297 Commit:41 XXX_unrecognized:[]}
I0429 17:58:20.809139 259 multiraft.go:696] New Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809278 259 multiraft.go:696] New Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809412 259 multiraft.go:699] Committed Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809550 259 multiraft.go:699] Committed Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
panic: test timed out after 30s
goroutine 3072 [running]:
testing.func·008()
/usr/src/go/src/testing/testing.go:681 +0x12f
created by time.goFunc
/usr/src/go/src/time/sleep.go:129 +0x4b
goroutine 1 [chan receive]:
testing.RunTests(0x107a4a0, 0x1516320, 0xa1, 0xa1, 0xc207ffb901)
/usr/src/go/src/testing/testing.go:556 +0xad6
--
/go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:487 +0x90c
github.com/cockroachdb/cockroach/util.func·005()
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:73 +0x51
created by github.com/cockroachdb/cockroach/util.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:74 +0xe3
FAIL github.com/cockroachdb/cockroach/storage 30.017s
=== RUN TestBatchBasics
I0429 17:59:00.495384 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.496556 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
I0429 17:59:00.497481 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.497807 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
I0429 17:59:00.498722 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:58:20.808996 259 multiraft.go:693] HardState updated: {Term:6 Vote:4294967297 Commit:41 XXX_unrecognized:[]}
I0429 17:58:20.809139 259 multiraft.go:696] New Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809278 259 multiraft.go:696] New Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809412 259 multiraft.go:699] Committed Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809550 259 multiraft.go:699] Committed Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
panic: test timed out after 30s
goroutine 3072 [running]:
testing.func·008()
/usr/src/go/src/testing/testing.go:681 +0x12f
created by time.goFunc
/usr/src/go/src/time/sleep.go:129 +0x4b
goroutine 1 [chan receive]:
testing.RunTests(0x107a4a0, 0x1516320, 0xa1, 0xa1, 0xc207ffb901)
/usr/src/go/src/testing/testing.go:556 +0xad6
--
/go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:487 +0x90c
github.com/cockroachdb/cockroach/util.func·005()
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:73 +0x51
created by github.com/cockroachdb/cockroach/util.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:74 +0xe3
FAIL github.com/cockroachdb/cockroach/storage 30.017s
=== RUN TestBatchBasics
I0429 17:59:00.495384 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.496556 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
I0429 17:59:00.497481 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.497807 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
I0429 17:59:00.498722 361 rocksdb.go:88] opening in-memory rocksdb instance
```
Please assign, take a look and update the issue accordingly. | 1.0 | Test failure in CI build 1909 - The following test appears to have failed:
[#1909](https://circleci.com/gh/cockroachdb/cockroach/1909):
```
I0429 17:58:20.808996 259 multiraft.go:693] HardState updated: {Term:6 Vote:4294967297 Commit:41 XXX_unrecognized:[]}
I0429 17:58:20.809139 259 multiraft.go:696] New Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809278 259 multiraft.go:696] New Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809412 259 multiraft.go:699] Committed Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809550 259 multiraft.go:699] Committed Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
panic: test timed out after 30s
goroutine 3072 [running]:
testing.func·008()
/usr/src/go/src/testing/testing.go:681 +0x12f
created by time.goFunc
/usr/src/go/src/time/sleep.go:129 +0x4b
goroutine 1 [chan receive]:
testing.RunTests(0x107a4a0, 0x1516320, 0xa1, 0xa1, 0xc207ffb901)
/usr/src/go/src/testing/testing.go:556 +0xad6
--
/go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:487 +0x90c
github.com/cockroachdb/cockroach/util.func·005()
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:73 +0x51
created by github.com/cockroachdb/cockroach/util.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:74 +0xe3
FAIL github.com/cockroachdb/cockroach/storage 30.017s
=== RUN TestBatchBasics
I0429 17:59:00.495384 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.496556 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
I0429 17:59:00.497481 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.497807 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
I0429 17:59:00.498722 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:58:20.808996 259 multiraft.go:693] HardState updated: {Term:6 Vote:4294967297 Commit:41 XXX_unrecognized:[]}
I0429 17:58:20.809139 259 multiraft.go:696] New Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809278 259 multiraft.go:696] New Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809412 259 multiraft.go:699] Committed Entry[0]: 6/40 EntryNormal 0000000000000000036529231f9da003: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
I0429 17:58:20.809550 259 multiraft.go:699] Committed Entry[1]: 6/41 EntryNormal 00000000000000005359ab8439abb31b: raft_id:1 origin_node_id:4294967297 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:52 > cmd_id:<wall_time:0 random:0 > key:"\00
panic: test timed out after 30s
goroutine 3072 [running]:
testing.func·008()
/usr/src/go/src/testing/testing.go:681 +0x12f
created by time.goFunc
/usr/src/go/src/time/sleep.go:129 +0x4b
goroutine 1 [chan receive]:
testing.RunTests(0x107a4a0, 0x1516320, 0xa1, 0xa1, 0xc207ffb901)
/usr/src/go/src/testing/testing.go:556 +0xad6
--
/go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:487 +0x90c
github.com/cockroachdb/cockroach/util.func·005()
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:73 +0x51
created by github.com/cockroachdb/cockroach/util.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stopper.go:74 +0xe3
FAIL github.com/cockroachdb/cockroach/storage 30.017s
=== RUN TestBatchBasics
I0429 17:59:00.495384 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.496556 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
I0429 17:59:00.497481 361 rocksdb.go:88] opening in-memory rocksdb instance
I0429 17:59:00.497807 361 rocksdb.go:112] closing rocksdb instance at ""
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
I0429 17:59:00.498722 361 rocksdb.go:88] opening in-memory rocksdb instance
```
Please assign, take a look and update the issue accordingly. | test | test failure in ci build the following test appears to have failed multiraft go hardstate updated term vote commit xxx unrecognized multiraft go new entry entrynormal raft id origin node id cmd cmd id key multiraft go new entry entrynormal raft id origin node id cmd cmd id key multiraft go committed entry entrynormal raft id origin node id cmd cmd id key multiraft go committed entry entrynormal raft id origin node id cmd cmd id key panic test timed out after goroutine testing func· usr src go src testing testing go created by time gofunc usr src go src time sleep go goroutine testing runtests usr src go src testing testing go go src github com cockroachdb cockroach kv txn coord sender go github com cockroachdb cockroach util func· go src github com cockroachdb cockroach util stopper go created by github com cockroachdb cockroach util stopper runworker go src github com cockroachdb cockroach util stopper go fail github com cockroachdb cockroach storage run testbatchbasics rocksdb go opening in memory rocksdb instance rocksdb go closing rocksdb instance at pass testbatchbasics run testbatchget rocksdb go opening in memory rocksdb instance rocksdb go closing rocksdb instance at pass testbatchget run testbatchmerge rocksdb go opening in memory rocksdb instance multiraft go hardstate updated term vote commit xxx unrecognized multiraft go new entry entrynormal raft id origin node id cmd cmd id key multiraft go new entry entrynormal raft id origin node id cmd cmd id key multiraft go committed entry entrynormal raft id origin node id cmd cmd id key multiraft go committed entry entrynormal raft id origin node id cmd cmd id key panic test timed out after goroutine testing func· usr src go src testing testing go created by time gofunc usr src go src time sleep go goroutine testing runtests usr src go src testing testing go go src github com cockroachdb cockroach kv txn coord sender go github com cockroachdb cockroach util func· go src github com cockroachdb cockroach util stopper go created by github com cockroachdb cockroach util stopper runworker go src github com cockroachdb cockroach util stopper go fail github com cockroachdb cockroach storage run testbatchbasics rocksdb go opening in memory rocksdb instance rocksdb go closing rocksdb instance at pass testbatchbasics run testbatchget rocksdb go opening in memory rocksdb instance rocksdb go closing rocksdb instance at pass testbatchget run testbatchmerge rocksdb go opening in memory rocksdb instance please assign take a look and update the issue accordingly | 1 |
12,634 | 2,712,177,016 | IssuesEvent | 2015-04-09 12:08:57 | xgenvn/android-vnc-server | https://api.github.com/repos/xgenvn/android-vnc-server | closed | Return application | auto-migrated Priority-Medium Type-Defect | ```
When i've an application (Ragnarok Valkyrie Uprising open) and when i click in
the game, ofter time, the click seems to click on the return button on my
Samsung Galaxy S3 and ask to exit the game.
```
Original issue reported on code.google.com by `vincent....@gmail.com` on 3 Jul 2013 at 8:52 | 1.0 | Return application - ```
When i've an application (Ragnarok Valkyrie Uprising open) and when i click in
the game, ofter time, the click seems to click on the return button on my
Samsung Galaxy S3 and ask to exit the game.
```
Original issue reported on code.google.com by `vincent....@gmail.com` on 3 Jul 2013 at 8:52 | non_test | return application when i ve an application ragnarok valkyrie uprising open and when i click in the game ofter time the click seems to click on the return button on my samsung galaxy and ask to exit the game original issue reported on code google com by vincent gmail com on jul at | 0 |
230,023 | 18,473,583,265 | IssuesEvent | 2021-10-18 02:48:20 | Tencent/bk-sops | https://api.github.com/repos/Tencent/bk-sops | closed | 【3.9.0】v1引擎执行任务时,查看尚未执行任务的子流程节点报错 | type/test-stage-bug | 问题描述
=======
<!-- 这里写问题描述 -->
重现方法
=======
<!-- 列出如何重现的方法或操作步骤 -->

url:
o/bk_sops/taskflow/execute/6/?instance_id=151379
<!-- **重要提醒**: 请优先尝试部署使用最新发布的版本 (发布清单: https://github.com/Tencent/bk-sops/releases), 如果问题不能在最新发布的版本里重现,说明此问题已经被修复。 -->
关键信息
=======
<!-- **重要提醒**: 这些关键信息会辅助我们快速定位问题。 -->
请提供以下信息:
- [x] bk-sops 版本 (发布版本号 或 git tag): <!-- `示例: V3.1.32-ce 或者 git sha. 请不要使用 "最新版本" 或 "当前版本"等无法准确定位代码版本的语句描述` -->
- [ ] 蓝鲸PaaS 版本:<!-- `<示例:PaaS 3.0.58、PaaSAgent 3.0.9` -->
- [ ] bk_sops 异常日志:
| 1.0 | 【3.9.0】v1引擎执行任务时,查看尚未执行任务的子流程节点报错 - 问题描述
=======
<!-- 这里写问题描述 -->
重现方法
=======
<!-- 列出如何重现的方法或操作步骤 -->

url:
o/bk_sops/taskflow/execute/6/?instance_id=151379
<!-- **重要提醒**: 请优先尝试部署使用最新发布的版本 (发布清单: https://github.com/Tencent/bk-sops/releases), 如果问题不能在最新发布的版本里重现,说明此问题已经被修复。 -->
关键信息
=======
<!-- **重要提醒**: 这些关键信息会辅助我们快速定位问题。 -->
请提供以下信息:
- [x] bk-sops 版本 (发布版本号 或 git tag): <!-- `示例: V3.1.32-ce 或者 git sha. 请不要使用 "最新版本" 或 "当前版本"等无法准确定位代码版本的语句描述` -->
- [ ] 蓝鲸PaaS 版本:<!-- `<示例:PaaS 3.0.58、PaaSAgent 3.0.9` -->
- [ ] bk_sops 异常日志:
| test | 【 】 ,查看尚未执行任务的子流程节点报错 问题描述 重现方法 url o bk sops taskflow execute instance id 关键信息 请提供以下信息 bk sops 版本 发布版本号 或 git tag : 蓝鲸paas 版本: bk sops 异常日志: | 1 |
76,457 | 21,443,501,301 | IssuesEvent | 2022-04-25 01:59:46 | PaddlePaddle/Paddle | https://api.github.com/repos/PaddlePaddle/Paddle | closed | [hackathon] GPU版本源码编译Unable to establish SSL connection出错 | status/new-issue type/build | ### 问题描述 Issue Description
报错信息:
Unable to establish SSL connection.
paddle/fluid/framework/new_executor/CMakeFiles/download_program.dir/build.make:57: recipe for target 'paddle/fluid/framework/new_executor/CMakeFiles/download_program' failed
make[2]: *** [paddle/fluid/framework/new_executor/CMakeFiles/download_program] Error 4
CMakeFiles/Makefile2:288074: recipe for target 'paddle/fluid/framework/new_executor/CMakeFiles/download_program.dir/all' failed
make[1]: *** [paddle/fluid/framework/new_executor/CMakeFiles/download_program.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
尝试安装libssl解决:无效
apt-get update
apt-get install openssl
apt-get install libssl-dev
麻烦提示可能出错的原因
### 版本&环境信息 Version & Environment Information
docker :registry.baidubce.com/paddlepaddle/paddle:2.2.2-gpu-cuda11.2-cudnn8
机器:Tesla P4 .CUDA version 11.4 | 1.0 | [hackathon] GPU版本源码编译Unable to establish SSL connection出错 - ### 问题描述 Issue Description
报错信息:
Unable to establish SSL connection.
paddle/fluid/framework/new_executor/CMakeFiles/download_program.dir/build.make:57: recipe for target 'paddle/fluid/framework/new_executor/CMakeFiles/download_program' failed
make[2]: *** [paddle/fluid/framework/new_executor/CMakeFiles/download_program] Error 4
CMakeFiles/Makefile2:288074: recipe for target 'paddle/fluid/framework/new_executor/CMakeFiles/download_program.dir/all' failed
make[1]: *** [paddle/fluid/framework/new_executor/CMakeFiles/download_program.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
尝试安装libssl解决:无效
apt-get update
apt-get install openssl
apt-get install libssl-dev
麻烦提示可能出错的原因
### 版本&环境信息 Version & Environment Information
docker :registry.baidubce.com/paddlepaddle/paddle:2.2.2-gpu-cuda11.2-cudnn8
机器:Tesla P4 .CUDA version 11.4 | non_test | gpu版本源码编译unable to establish ssl connection出错 问题描述 issue description 报错信息: unable to establish ssl connection paddle fluid framework new executor cmakefiles download program dir build make recipe for target paddle fluid framework new executor cmakefiles download program failed make error cmakefiles recipe for target paddle fluid framework new executor cmakefiles download program dir all failed make error make waiting for unfinished jobs 尝试安装libssl解决:无效 apt get update apt get install openssl apt get install libssl dev 麻烦提示可能出错的原因 版本 环境信息 version environment information docker registry baidubce com paddlepaddle paddle gpu 机器:tesla cuda version | 0 |
23,452 | 11,886,614,613 | IssuesEvent | 2020-03-27 22:26:03 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | closed | Stability about Cpptools behavior | Language Service bug investigate reliability | Currently cpptools behavior is not stable and it depends on configuration in c_cpp_properties.json. If I have set compile_command.json, its behavior is better and more stable. But in many cases, I have no the compile_command.json or project folder lacks of some header files, maybe codes can't compile correctly.
You can download linux source code in window and you will find cpptools have some obvious errors, especially "go to definition" and “Error Squiggles”.
In some cases, "go to definition" goes to wrong position. In some cases, it shows wrong error squiggles but it can go to definition correctly. I can't list all scenarios here, I think you can test in the linux source code which is located on [https://github.com/torvalds/linux](url) | 1.0 | Stability about Cpptools behavior - Currently cpptools behavior is not stable and it depends on configuration in c_cpp_properties.json. If I have set compile_command.json, its behavior is better and more stable. But in many cases, I have no the compile_command.json or project folder lacks of some header files, maybe codes can't compile correctly.
You can download linux source code in window and you will find cpptools have some obvious errors, especially "go to definition" and “Error Squiggles”.
In some cases, "go to definition" goes to wrong position. In some cases, it shows wrong error squiggles but it can go to definition correctly. I can't list all scenarios here, I think you can test in the linux source code which is located on [https://github.com/torvalds/linux](url) | non_test | stability about cpptools behavior currently cpptools behavior is not stable and it depends on configuration in c cpp properties json if i have set compile command json its behavior is better and more stable but in many cases i have no the compile command json or project folder lacks of some header files maybe codes can t compile correctly you can download linux source code in window and you will find cpptools have some obvious errors especially go to definition and “error squiggles” in some cases go to definition goes to wrong position in some cases it shows wrong error squiggles but it can go to definition correctly i can t list all scenarios here i think you can test in the linux source code which is located on url | 0 |
275,407 | 23,913,592,843 | IssuesEvent | 2022-09-09 10:30:16 | junit-team/junit5 | https://api.github.com/repos/junit-team/junit5 | closed | AllBooleanCombinationsSource arguments provider | component: Jupiter status: waiting-for-feedback theme: parameterized tests status: new | A common use case we have is a test that take a bunch of boolean flags a check very slightly (mostly in expected value) based on this flag.
Classic for a ParameterizedTest, which is what we use today, but for the source we currently use CsvSource with the boolean options:
```
@ParameterizedTest
@CsvSource({
"true, true",
"true, false",
"false, true",
"false, false"
})
public void testMethod(boolean flag1, boolean flag2)
```
Which is annoying, verbose and error-prone.
So I wanted to implement an annotation like `@AllBooleanCombinationsSource(int)` that takes the number of boolean params and generate all possible combinations.
Though this might be a bit too specific, so wanted to get your opinion, is that something that you think belongs in Junit? | 1.0 | AllBooleanCombinationsSource arguments provider - A common use case we have is a test that take a bunch of boolean flags a check very slightly (mostly in expected value) based on this flag.
Classic for a ParameterizedTest, which is what we use today, but for the source we currently use CsvSource with the boolean options:
```
@ParameterizedTest
@CsvSource({
"true, true",
"true, false",
"false, true",
"false, false"
})
public void testMethod(boolean flag1, boolean flag2)
```
Which is annoying, verbose and error-prone.
So I wanted to implement an annotation like `@AllBooleanCombinationsSource(int)` that takes the number of boolean params and generate all possible combinations.
Though this might be a bit too specific, so wanted to get your opinion, is that something that you think belongs in Junit? | test | allbooleancombinationssource arguments provider a common use case we have is a test that take a bunch of boolean flags a check very slightly mostly in expected value based on this flag classic for a parameterizedtest which is what we use today but for the source we currently use csvsource with the boolean options parameterizedtest csvsource true true true false false true false false public void testmethod boolean boolean which is annoying verbose and error prone so i wanted to implement an annotation like allbooleancombinationssource int that takes the number of boolean params and generate all possible combinations though this might be a bit too specific so wanted to get your opinion is that something that you think belongs in junit | 1 |
229,149 | 18,286,563,353 | IssuesEvent | 2021-10-05 10:57:44 | Moonshine-IDE/Moonshine-IDE | https://api.github.com/repos/Moonshine-IDE/Moonshine-IDE | closed | Selecting file extensions in Project Search gives bad user experience | bug test-ready | 1. Go to `Project > Search` or `Cmd + Shift + F`
2. Click `Select` (File name patterns)
- MINOR BUG: File extensions are not sorted. It's hard to find what you're looking for.
- MINOR BUG: When you click on the checkboxes nothing happens. To select/deselect an item you need to click on TEXT instead. | 1.0 | Selecting file extensions in Project Search gives bad user experience - 1. Go to `Project > Search` or `Cmd + Shift + F`
2. Click `Select` (File name patterns)
- MINOR BUG: File extensions are not sorted. It's hard to find what you're looking for.
- MINOR BUG: When you click on the checkboxes nothing happens. To select/deselect an item you need to click on TEXT instead. | test | selecting file extensions in project search gives bad user experience go to project search or cmd shift f click select file name patterns minor bug file extensions are not sorted it s hard to find what you re looking for minor bug when you click on the checkboxes nothing happens to select deselect an item you need to click on text instead | 1 |
228,884 | 7,569,032,105 | IssuesEvent | 2018-04-23 01:43:33 | vmware/harbor | https://api.github.com/repos/vmware/harbor | closed | The customized login background image is not correctly displayed | area/ui kind/bug priority/high target/post-1.5.0 |
<img width="1041" alt="screen shot 2018-04-17 at 19 00 15" src="https://user-images.githubusercontent.com/5753287/38865863-b562b0e4-4271-11e8-8d5b-531e73c2b371.png">
| 1.0 | The customized login background image is not correctly displayed -
<img width="1041" alt="screen shot 2018-04-17 at 19 00 15" src="https://user-images.githubusercontent.com/5753287/38865863-b562b0e4-4271-11e8-8d5b-531e73c2b371.png">
| non_test | the customized login background image is not correctly displayed img width alt screen shot at src | 0 |
786,047 | 27,632,546,835 | IssuesEvent | 2023-03-10 12:02:42 | zowe/zowe-cli-standalone-package | https://api.github.com/repos/zowe/zowe-cli-standalone-package | closed | [husky] `repacakge_bundle.sh` may need updated after a plugin is husky-"enabled" 😋 | enhancement priority-low | Similar to #55, we may need to remove the `prepare` script if it contains `husky install` whenever we try to `npm pack` a CLI plugin | 1.0 | [husky] `repacakge_bundle.sh` may need updated after a plugin is husky-"enabled" 😋 - Similar to #55, we may need to remove the `prepare` script if it contains `husky install` whenever we try to `npm pack` a CLI plugin | non_test | repacakge bundle sh may need updated after a plugin is husky enabled 😋 similar to we may need to remove the prepare script if it contains husky install whenever we try to npm pack a cli plugin | 0 |
28,143 | 5,198,755,378 | IssuesEvent | 2017-01-23 19:00:26 | project8/katydid | https://api.github.com/repos/project8/katydid | closed | TestLinearDensityProbe does not build | Defect High Priority | Building on a recent branch off of develop on a Mac.
```
[ 84%] Building CXX object Source/Executables/Validation/CMakeFiles/TestLinearDensityProbe.dir/TestLinearDensityProbe.cc.o
/Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:140:16: error: no member named 'SetStepSizeBig' in
'Katydid::KTLinearDensityProbeFit'
lineFitter.SetStepSizeBig( 0.2e6 );
~~~~~~~~~~ ^
/Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:141:16: error: no member named 'SetStepSizeSmall' in
'Katydid::KTLinearDensityProbeFit'
lineFitter.SetStepSizeSmall( 0.004e6 );
~~~~~~~~~~ ^
/Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:145:21: error: no member named 'Calculate' in 'Katydid::KTLinearDensityProbeFit'
if( !lineFitter.Calculate( tr, threshPts ) )
~~~~~~~~~~ ^
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:8:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:12:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Data/SpectrumAnalysis/KTGainVariationData.hh:11:
In file included from /Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Data/KTData.hh:11:
/Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Utility/KTExtensibleStruct.hh:163:13: error: 'const Katydid::KTPSCollectionData' is an incomplete type
if (dynamic_cast<const XStructType*>(this))
^ ~~~~~~~~~~~~~~~~~~~~
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:290:21: note: in instantiation of function template specialization
'Nymph::KTExtensibleStructCore<Nymph::KTDataCore>::Has<Katydid::KTPSCollectionData>' requested here
if (! data->Has< KTPSCollectionData >())
^
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:28:11: note: forward declaration of 'Katydid::KTPSCollectionData'
class KTPSCollectionData;
^
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:8:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:12:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Data/SpectrumAnalysis/KTGainVariationData.hh:11:
In file included from /Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Data/KTData.hh:11:
/Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Utility/KTExtensibleStruct.hh:123:31: error: 'Katydid::KTPSCollectionData' is an incomplete type
XStructType* target = dynamic_cast<XStructType*>(this);
^ ~~~~~~~~~~~~~~
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:297:116: note: in instantiation of function template specialization
'Nymph::KTExtensibleStructCore<Nymph::KTDataCore>::Of<Katydid::KTPSCollectionData>' requested here
if( !ChooseAlgorithm( data->Of< KTProcessedTrackData >(), data->Of< KTDiscriminatedPoints2DData >(), data->Of< KTPSCollectionData >() ) )
^
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:28:11: note: forward declaration of 'Katydid::KTPSCollectionData'
class KTPSCollectionData;
^
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:8:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:12:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Data/SpectrumAnalysis/KTGainVariationData.hh:11:
In file included from /Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Data/KTData.hh:11:
/Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Utility/KTExtensibleStruct.hh:131:25: error: allocation of incomplete type 'Katydid::KTPSCollectionData'
fNext = new XStructType();
^~~~~~~~~~~
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:28:11: note: forward declaration of 'Katydid::KTPSCollectionData'
class KTPSCollectionData;
^
6 errors generated.
make[2]: *** [Source/Executables/Validation/CMakeFiles/TestLinearDensityProbe.dir/TestLinearDensityProbe.cc.o] Error 1
make[1]: *** [Source/Executables/Validation/CMakeFiles/TestLinearDensityProbe.dir/all] Error 2
make: *** [all] Error 2
``` | 1.0 | TestLinearDensityProbe does not build - Building on a recent branch off of develop on a Mac.
```
[ 84%] Building CXX object Source/Executables/Validation/CMakeFiles/TestLinearDensityProbe.dir/TestLinearDensityProbe.cc.o
/Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:140:16: error: no member named 'SetStepSizeBig' in
'Katydid::KTLinearDensityProbeFit'
lineFitter.SetStepSizeBig( 0.2e6 );
~~~~~~~~~~ ^
/Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:141:16: error: no member named 'SetStepSizeSmall' in
'Katydid::KTLinearDensityProbeFit'
lineFitter.SetStepSizeSmall( 0.004e6 );
~~~~~~~~~~ ^
/Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:145:21: error: no member named 'Calculate' in 'Katydid::KTLinearDensityProbeFit'
if( !lineFitter.Calculate( tr, threshPts ) )
~~~~~~~~~~ ^
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:8:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:12:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Data/SpectrumAnalysis/KTGainVariationData.hh:11:
In file included from /Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Data/KTData.hh:11:
/Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Utility/KTExtensibleStruct.hh:163:13: error: 'const Katydid::KTPSCollectionData' is an incomplete type
if (dynamic_cast<const XStructType*>(this))
^ ~~~~~~~~~~~~~~~~~~~~
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:290:21: note: in instantiation of function template specialization
'Nymph::KTExtensibleStructCore<Nymph::KTDataCore>::Has<Katydid::KTPSCollectionData>' requested here
if (! data->Has< KTPSCollectionData >())
^
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:28:11: note: forward declaration of 'Katydid::KTPSCollectionData'
class KTPSCollectionData;
^
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:8:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:12:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Data/SpectrumAnalysis/KTGainVariationData.hh:11:
In file included from /Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Data/KTData.hh:11:
/Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Utility/KTExtensibleStruct.hh:123:31: error: 'Katydid::KTPSCollectionData' is an incomplete type
XStructType* target = dynamic_cast<XStructType*>(this);
^ ~~~~~~~~~~~~~~
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:297:116: note: in instantiation of function template specialization
'Nymph::KTExtensibleStructCore<Nymph::KTDataCore>::Of<Katydid::KTPSCollectionData>' requested here
if( !ChooseAlgorithm( data->Of< KTProcessedTrackData >(), data->Of< KTDiscriminatedPoints2DData >(), data->Of< KTPSCollectionData >() ) )
^
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:28:11: note: forward declaration of 'Katydid::KTPSCollectionData'
class KTPSCollectionData;
^
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Executables/Validation/TestLinearDensityProbe.cc:8:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:12:
In file included from /Users/obla999/Work/Project8/Software/katydid/Source/Data/SpectrumAnalysis/KTGainVariationData.hh:11:
In file included from /Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Data/KTData.hh:11:
/Users/obla999/Work/Project8/Software/katydid/Nymph/Library/Utility/KTExtensibleStruct.hh:131:25: error: allocation of incomplete type 'Katydid::KTPSCollectionData'
fNext = new XStructType();
^~~~~~~~~~~
/Users/obla999/Work/Project8/Software/katydid/Source/EventAnalysis/KTLinearDensityProbeFit.hh:28:11: note: forward declaration of 'Katydid::KTPSCollectionData'
class KTPSCollectionData;
^
6 errors generated.
make[2]: *** [Source/Executables/Validation/CMakeFiles/TestLinearDensityProbe.dir/TestLinearDensityProbe.cc.o] Error 1
make[1]: *** [Source/Executables/Validation/CMakeFiles/TestLinearDensityProbe.dir/all] Error 2
make: *** [all] Error 2
``` | non_test | testlineardensityprobe does not build building on a recent branch off of develop on a mac building cxx object source executables validation cmakefiles testlineardensityprobe dir testlineardensityprobe cc o users work software katydid source executables validation testlineardensityprobe cc error no member named setstepsizebig in katydid ktlineardensityprobefit linefitter setstepsizebig users work software katydid source executables validation testlineardensityprobe cc error no member named setstepsizesmall in katydid ktlineardensityprobefit linefitter setstepsizesmall users work software katydid source executables validation testlineardensityprobe cc error no member named calculate in katydid ktlineardensityprobefit if linefitter calculate tr threshpts in file included from users work software katydid source executables validation testlineardensityprobe cc in file included from users work software katydid source eventanalysis ktlineardensityprobefit hh in file included from users work software katydid source data spectrumanalysis ktgainvariationdata hh in file included from users work software katydid nymph library data ktdata hh users work software katydid nymph library utility ktextensiblestruct hh error const katydid ktpscollectiondata is an incomplete type if dynamic cast this users work software katydid source eventanalysis ktlineardensityprobefit hh note in instantiation of function template specialization nymph ktextensiblestructcore has requested here if data has users work software katydid source eventanalysis ktlineardensityprobefit hh note forward declaration of katydid ktpscollectiondata class ktpscollectiondata in file included from users work software katydid source executables validation testlineardensityprobe cc in file included from users work software katydid source eventanalysis ktlineardensityprobefit hh in file included from users work software katydid source data spectrumanalysis ktgainvariationdata hh in file included from users work software katydid nymph library data ktdata hh users work software katydid nymph library utility ktextensiblestruct hh error katydid ktpscollectiondata is an incomplete type xstructtype target dynamic cast this users work software katydid source eventanalysis ktlineardensityprobefit hh note in instantiation of function template specialization nymph ktextensiblestructcore of requested here if choosealgorithm data of data of data of users work software katydid source eventanalysis ktlineardensityprobefit hh note forward declaration of katydid ktpscollectiondata class ktpscollectiondata in file included from users work software katydid source executables validation testlineardensityprobe cc in file included from users work software katydid source eventanalysis ktlineardensityprobefit hh in file included from users work software katydid source data spectrumanalysis ktgainvariationdata hh in file included from users work software katydid nymph library data ktdata hh users work software katydid nymph library utility ktextensiblestruct hh error allocation of incomplete type katydid ktpscollectiondata fnext new xstructtype users work software katydid source eventanalysis ktlineardensityprobefit hh note forward declaration of katydid ktpscollectiondata class ktpscollectiondata errors generated make error make error make error | 0 |
439,437 | 30,696,482,900 | IssuesEvent | 2023-07-26 19:01:43 | aeon-toolkit/aeon | https://api.github.com/repos/aeon-toolkit/aeon | closed | [DOC] Examples notebooks | documentation | ### Describe the issue linked to the documentation
our docs
https://www.aeon-toolkit.org/en/latest/examples.html
link into the example notebooks. Restructure is to create a sub directory of notebooks for each module, then link them from the examples. This is happening in #506
To do:
- [ ] Restructure forecasting
- [x] Remove redundant notebooks
- [ ] Make sure examples.md is correctly and completely linked in
Missing notebooks
- [x] Regression
Missing images
- [x] Early classification
I'll just update this list as I find missing things
### Suggest a potential alternative/fix
_No response_ | 1.0 | [DOC] Examples notebooks - ### Describe the issue linked to the documentation
our docs
https://www.aeon-toolkit.org/en/latest/examples.html
link into the example notebooks. Restructure is to create a sub directory of notebooks for each module, then link them from the examples. This is happening in #506
To do:
- [ ] Restructure forecasting
- [x] Remove redundant notebooks
- [ ] Make sure examples.md is correctly and completely linked in
Missing notebooks
- [x] Regression
Missing images
- [x] Early classification
I'll just update this list as I find missing things
### Suggest a potential alternative/fix
_No response_ | non_test | examples notebooks describe the issue linked to the documentation our docs link into the example notebooks restructure is to create a sub directory of notebooks for each module then link them from the examples this is happening in to do restructure forecasting remove redundant notebooks make sure examples md is correctly and completely linked in missing notebooks regression missing images early classification i ll just update this list as i find missing things suggest a potential alternative fix no response | 0 |
154,734 | 12,226,892,480 | IssuesEvent | 2020-05-03 13:01:37 | RomanKondratev90/Test-project | https://api.github.com/repos/RomanKondratev90/Test-project | opened | Тест - кейс №3 (ручка). Проверка размера стержня. | test case (Pass) | Приоритет - средний
Шаги
1 Взять стержень в руку.
2 Измерить от одного конца до другого.
Ожидаемы результат
Длина - 10 см.
Статус
Положительный результат | 1.0 | Тест - кейс №3 (ручка). Проверка размера стержня. - Приоритет - средний
Шаги
1 Взять стержень в руку.
2 Измерить от одного конца до другого.
Ожидаемы результат
Длина - 10 см.
Статус
Положительный результат | test | тест кейс № ручка проверка размера стержня приоритет средний шаги взять стержень в руку измерить от одного конца до другого ожидаемы результат длина см статус положительный результат | 1 |
243,780 | 26,288,020,983 | IssuesEvent | 2023-01-08 03:14:09 | ilan-WS/m3 | https://api.github.com/repos/ilan-WS/m3 | reopened | CVE-2021-43565 (High) detected in github.com/golang/crypto/ssh-c07d793c2f9aacf728fe68cbd7acd73adbd04159 | security vulnerability | ## CVE-2021-43565 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/crypto/ssh-c07d793c2f9aacf728fe68cbd7acd73adbd04159</b></p></summary>
<p>[mirror] Go supplementary cryptography libraries</p>
<p>
Dependency Hierarchy:
- github.com/fossas/fossa-cli/cmd/fossa-4fe7d838f3a61541af13233ee03519d3a20e5ef8 (Root Library)
- github.com/fossas/fossa-cli/api/fossa-4fe7d838f3a61541af13233ee03519d3a20e5ef8
- github.com/fossas/fossa-cli/config-v1.1.8
- github.com/fossas/fossa-cli/vcs-4fe7d838f3a61541af13233ee03519d3a20e5ef8
- gopkg.in/src-d/go-git.v4-v4.13.1
- github.com/src-d/go-git/plumbing/protocol/packp-v4.13.1
- github.com/src-d/go-git/plumbing-v4.13.1
- github.com/src-d/go-git/plumbing/transport-v4.13.1
- github.com/src-d/go-git/plumbing/transport/ssh-v4.13.1
- :x: **github.com/golang/crypto/ssh-c07d793c2f9aacf728fe68cbd7acd73adbd04159** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The x/crypto/ssh package before 0.0.0-20211202192323-5770296d904e of golang.org/x/crypto allows an attacker to panic an SSH server.
<p>Publish Date: 2022-09-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43565>CVE-2021-43565</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43565">https://nvd.nist.gov/vuln/detail/CVE-2021-43565</a></p>
<p>Release Date: 2021-11-10</p>
<p>Fix Resolution: golang-golang-x-crypto-dev - 1:0.0~git20211202.5770296-1;golang-go.crypto-dev - 1:0.0~git20211202.5770296-1</p>
</p>
</details>
<p></p>
| True | CVE-2021-43565 (High) detected in github.com/golang/crypto/ssh-c07d793c2f9aacf728fe68cbd7acd73adbd04159 - ## CVE-2021-43565 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/crypto/ssh-c07d793c2f9aacf728fe68cbd7acd73adbd04159</b></p></summary>
<p>[mirror] Go supplementary cryptography libraries</p>
<p>
Dependency Hierarchy:
- github.com/fossas/fossa-cli/cmd/fossa-4fe7d838f3a61541af13233ee03519d3a20e5ef8 (Root Library)
- github.com/fossas/fossa-cli/api/fossa-4fe7d838f3a61541af13233ee03519d3a20e5ef8
- github.com/fossas/fossa-cli/config-v1.1.8
- github.com/fossas/fossa-cli/vcs-4fe7d838f3a61541af13233ee03519d3a20e5ef8
- gopkg.in/src-d/go-git.v4-v4.13.1
- github.com/src-d/go-git/plumbing/protocol/packp-v4.13.1
- github.com/src-d/go-git/plumbing-v4.13.1
- github.com/src-d/go-git/plumbing/transport-v4.13.1
- github.com/src-d/go-git/plumbing/transport/ssh-v4.13.1
- :x: **github.com/golang/crypto/ssh-c07d793c2f9aacf728fe68cbd7acd73adbd04159** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The x/crypto/ssh package before 0.0.0-20211202192323-5770296d904e of golang.org/x/crypto allows an attacker to panic an SSH server.
<p>Publish Date: 2022-09-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43565>CVE-2021-43565</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43565">https://nvd.nist.gov/vuln/detail/CVE-2021-43565</a></p>
<p>Release Date: 2021-11-10</p>
<p>Fix Resolution: golang-golang-x-crypto-dev - 1:0.0~git20211202.5770296-1;golang-go.crypto-dev - 1:0.0~git20211202.5770296-1</p>
</p>
</details>
<p></p>
| non_test | cve high detected in github com golang crypto ssh cve high severity vulnerability vulnerable library github com golang crypto ssh go supplementary cryptography libraries dependency hierarchy github com fossas fossa cli cmd fossa root library github com fossas fossa cli api fossa github com fossas fossa cli config github com fossas fossa cli vcs gopkg in src d go git github com src d go git plumbing protocol packp github com src d go git plumbing github com src d go git plumbing transport github com src d go git plumbing transport ssh x github com golang crypto ssh vulnerable library found in head commit a href found in base branch master vulnerability details the x crypto ssh package before of golang org x crypto allows an attacker to panic an ssh server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution golang golang x crypto dev golang go crypto dev | 0 |
198,271 | 14,970,776,775 | IssuesEvent | 2021-01-27 20:07:35 | rook/rook | https://api.github.com/repos/rook/rook | closed | Cassandra Tests are unstable | bug cassandra test wontfix | Cassandra integration tests are unstable (mostly observed on `aws_1.15.x`, but also on others).
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6081/1/pipeline/
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6081/3/pipeline/
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6082/2/pipeline/
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6082/3/pipeline/ | 1.0 | Cassandra Tests are unstable - Cassandra integration tests are unstable (mostly observed on `aws_1.15.x`, but also on others).
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6081/1/pipeline/
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6081/3/pipeline/
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6082/2/pipeline/
* https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-6082/3/pipeline/ | test | cassandra tests are unstable cassandra integration tests are unstable mostly observed on aws x but also on others | 1 |
317,592 | 27,246,137,878 | IssuesEvent | 2023-02-22 02:16:18 | datafuselabs/databend | https://api.github.com/repos/datafuselabs/databend | closed | Make databend-test work in docker-compose node | C-testing | **Summary**
Description for this feature.
Currently `databend-test` depends on
1. python env
2. 1-N `databend-query` processors
3. 1-N `databend-meta` processors
4. Mocked s3/azure env like [minio](https://github.com/minio/minio) to simulate cloud services
5. aws/azure cmd cli ...
5. .... in the future
So it's better to migrate the testing tools into `docker-compose` files to prepare the env.
`dev_setup.sh` is good to set up development env, but for testing, I think it may be not enough.
| 1.0 | Make databend-test work in docker-compose node - **Summary**
Description for this feature.
Currently `databend-test` depends on
1. python env
2. 1-N `databend-query` processors
3. 1-N `databend-meta` processors
4. Mocked s3/azure env like [minio](https://github.com/minio/minio) to simulate cloud services
5. aws/azure cmd cli ...
5. .... in the future
So it's better to migrate the testing tools into `docker-compose` files to prepare the env.
`dev_setup.sh` is good to set up development env, but for testing, I think it may be not enough.
| test | make databend test work in docker compose node summary description for this feature currently databend test depends on python env n databend query processors n databend meta processors mocked azure env like to simulate cloud services aws azure cmd cli in the future so it s better to migrate the testing tools into docker compose files to prepare the env dev setup sh is good to set up development env but for testing i think it may be not enough | 1 |
507,244 | 14,679,942,515 | IssuesEvent | 2020-12-31 08:33:41 | k8smeetup/website-tasks | https://api.github.com/repos/k8smeetup/website-tasks | opened | /docs/reference/glossary/downstream.md | lang/zh priority/P0 sync/update version/master welcome | Source File: [/docs/reference/glossary/downstream.md](https://github.com/kubernetes/website/blob/master/content/en/docs/reference/glossary/downstream.md)
Diff 命令参考:
```bash
# 查看原始文档与翻译文档更新差异
git diff --no-index -- content/en/docs/reference/glossary/downstream.md content/zh/docs/reference/glossary/downstream.md
# 跨分支持查看原始文档更新差异
git diff release-1.19 master -- content/en/docs/reference/glossary/downstream.md
``` | 1.0 | /docs/reference/glossary/downstream.md - Source File: [/docs/reference/glossary/downstream.md](https://github.com/kubernetes/website/blob/master/content/en/docs/reference/glossary/downstream.md)
Diff 命令参考:
```bash
# 查看原始文档与翻译文档更新差异
git diff --no-index -- content/en/docs/reference/glossary/downstream.md content/zh/docs/reference/glossary/downstream.md
# 跨分支持查看原始文档更新差异
git diff release-1.19 master -- content/en/docs/reference/glossary/downstream.md
``` | non_test | docs reference glossary downstream md source file diff 命令参考 bash 查看原始文档与翻译文档更新差异 git diff no index content en docs reference glossary downstream md content zh docs reference glossary downstream md 跨分支持查看原始文档更新差异 git diff release master content en docs reference glossary downstream md | 0 |
548,209 | 16,060,246,197 | IssuesEvent | 2021-04-23 11:29:30 | kubernetes/test-infra | https://api.github.com/repos/kubernetes/test-infra | closed | 1.21 release branch is missing presubmits | area/jobs kind/bug priority/critical-urgent sig/release | https://github.com/kubernetes/test-infra/issues/21606#issuecomment-812094062
sorry I don't have time to dig into this, I need to wrap up some other things and then I'm OOO for a week, but I want to make sure this is tracked.
/sig release
/area jobs
/priority critical-urgent | 1.0 | 1.21 release branch is missing presubmits - https://github.com/kubernetes/test-infra/issues/21606#issuecomment-812094062
sorry I don't have time to dig into this, I need to wrap up some other things and then I'm OOO for a week, but I want to make sure this is tracked.
/sig release
/area jobs
/priority critical-urgent | non_test | release branch is missing presubmits sorry i don t have time to dig into this i need to wrap up some other things and then i m ooo for a week but i want to make sure this is tracked sig release area jobs priority critical urgent | 0 |
173,330 | 13,398,057,588 | IssuesEvent | 2020-09-03 12:36:52 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: restore2TB/nodes=10 failed | C-test-failure O-roachtest O-robot branch-provisional_202007081918_v20.2.0-alpha.2 release-blocker | [(roachtest).restore2TB/nodes=10 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2076004&tab=buildLog) on [provisional_202007081918_v20.2.0-alpha.2@cf051c8c91d5e1d5196e6d2fdc9382364e1d44e8](https://github.com/cockroachdb/cockroach/commits/cf051c8c91d5e1d5196e6d2fdc9382364e1d44e8):
```
| WITH into_db = 'restore2tb'" returned
| stderr:
| ERROR: job 570827708717006849: could not mark as reverting: importing 43777 ranges: importing span /Table/51/1/2502{1438/0-4710}: addsstable [/Table/53/1/25023074/0,/Table/53/1/25024709/0/NULL): remote wall time is too far ahead (802.405667ms) to be trustworthy: log-job: remote wall time is too far ahead (800.201619ms) to be trustworthy
| Failed running "sql"
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 1. Command with error:
| | ```
| | ./cockroach sql --insecure -e "
| | RESTORE csv.bank FROM
| | 'gs://cockroach-fixtures/workload/bank/version=1.0.0,payload-bytes=10240,ranges=0,rows=65104166,seed=1/bank'
| | WITH into_db = 'restore2tb'"
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2471,restore.go:262,test_runner.go:757: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2459
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2467
| main.registerRestore.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/restore.go:262
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:757
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2515
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2429
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5420
| runtime.main
| /usr/local/go/src/runtime/proc.go:190
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1373
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
Failed to find issue assignee:
couldn't find GitHub commits for user email david@cockroachlabs.com
```
<details><summary>More</summary><p>
Artifacts: [/restore2TB/nodes=10](https://teamcity.cockroachdb.com/viewLog.html?buildId=2076004&tab=artifacts#/restore2TB/nodes=10)
Related:
- #51127 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202007071743_v20.2.0-alpha.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202007071743_v20.2.0-alpha.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50886 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49503 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49480 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49440 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Arestore2TB%2Fnodes%3D10.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: restore2TB/nodes=10 failed - [(roachtest).restore2TB/nodes=10 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2076004&tab=buildLog) on [provisional_202007081918_v20.2.0-alpha.2@cf051c8c91d5e1d5196e6d2fdc9382364e1d44e8](https://github.com/cockroachdb/cockroach/commits/cf051c8c91d5e1d5196e6d2fdc9382364e1d44e8):
```
| WITH into_db = 'restore2tb'" returned
| stderr:
| ERROR: job 570827708717006849: could not mark as reverting: importing 43777 ranges: importing span /Table/51/1/2502{1438/0-4710}: addsstable [/Table/53/1/25023074/0,/Table/53/1/25024709/0/NULL): remote wall time is too far ahead (802.405667ms) to be trustworthy: log-job: remote wall time is too far ahead (800.201619ms) to be trustworthy
| Failed running "sql"
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 1. Command with error:
| | ```
| | ./cockroach sql --insecure -e "
| | RESTORE csv.bank FROM
| | 'gs://cockroach-fixtures/workload/bank/version=1.0.0,payload-bytes=10240,ranges=0,rows=65104166,seed=1/bank'
| | WITH into_db = 'restore2tb'"
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2471,restore.go:262,test_runner.go:757: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2459
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2467
| main.registerRestore.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/restore.go:262
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:757
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2515
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2429
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5420
| runtime.main
| /usr/local/go/src/runtime/proc.go:190
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1373
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
Failed to find issue assignee:
couldn't find GitHub commits for user email david@cockroachlabs.com
```
<details><summary>More</summary><p>
Artifacts: [/restore2TB/nodes=10](https://teamcity.cockroachdb.com/viewLog.html?buildId=2076004&tab=artifacts#/restore2TB/nodes=10)
Related:
- #51127 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202007071743_v20.2.0-alpha.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202007071743_v20.2.0-alpha.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50886 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49503 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49480 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49440 roachtest: restore2TB/nodes=10 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Arestore2TB%2Fnodes%3D10.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest nodes failed on with into db returned stderr error job could not mark as reverting importing ranges importing span table addsstable table table null remote wall time is too far ahead to be trustworthy log job remote wall time is too far ahead to be trustworthy failed running sql error command problem exit status command problem wraps node command with error cockroach sql insecure e restore csv bank from gs cockroach fixtures workload bank version payload bytes ranges rows seed bank with into db wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack safedetails withsafedetails errutil withmessage main withcommanddetails exec exiterror cluster go restore go test runner go monitor failure monitor task failed t fatal was called attached stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerrestore home agent work go src github com cockroachdb cockroach pkg cmd roachtest restore go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withmessage withstack withstack errutil withmessage withstack withstack errors errorstring failed to find issue assignee couldn t find github commits for user email david cockroachlabs com more artifacts related roachtest nodes failed roachtest nodes failed roachtest nodes failed roachtest nodes failed roachtest nodes failed powered by | 1 |
170,554 | 13,192,743,202 | IssuesEvent | 2020-08-13 14:13:43 | physiopy/phys2bids | https://api.github.com/repos/physiopy/phys2bids | opened | Trigger plots won't save when on automatic testing mode | Bug Testing | <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
<!--- NECESSARY -->
<!--- Describe what one would expect from the buggy code -->
When running pytest on CircleCI, Travis and Azure, the trigger plots should get saved.
## Actual Behavior
<!--- NECESSARY -->
<!--- Describe what the buggy code is actually doing/returning -->
<!--- Do not hesitate and share screenshots and code snippets that could help understand the issue -->
Trigger plots don't get saved when running pytest on CircleCI, Travis and Azure. The `plt.savefig()` function gets stuck.
## Specifications
<!--- Point out the version of phys2bids you are running and your OS version -->
- Python version: 3.6 and 3.7
- phys2bids version: v2.1.0+29.g2007d8b
- Platform: CircleCI, Travis and Azure Pipelines
| 1.0 | Trigger plots won't save when on automatic testing mode - <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
<!--- NECESSARY -->
<!--- Describe what one would expect from the buggy code -->
When running pytest on CircleCI, Travis and Azure, the trigger plots should get saved.
## Actual Behavior
<!--- NECESSARY -->
<!--- Describe what the buggy code is actually doing/returning -->
<!--- Do not hesitate and share screenshots and code snippets that could help understand the issue -->
Trigger plots don't get saved when running pytest on CircleCI, Travis and Azure. The `plt.savefig()` function gets stuck.
## Specifications
<!--- Point out the version of phys2bids you are running and your OS version -->
- Python version: 3.6 and 3.7
- phys2bids version: v2.1.0+29.g2007d8b
- Platform: CircleCI, Travis and Azure Pipelines
| test | trigger plots won t save when on automatic testing mode expected behavior when running pytest on circleci travis and azure the trigger plots should get saved actual behavior trigger plots don t get saved when running pytest on circleci travis and azure the plt savefig function gets stuck specifications python version and version platform circleci travis and azure pipelines | 1 |
157,687 | 12,380,076,021 | IssuesEvent | 2020-05-19 13:30:00 | mozilla-mobile/firefox-ios | https://api.github.com/repos/mozilla-mobile/firefox-ios | closed | [XCUITest] Fix testCustomEngineFromIncorrectTemplate | Test Automation :robot: | This test is failing on BB due to timing issues waiting for the Paste button to appear. | 1.0 | [XCUITest] Fix testCustomEngineFromIncorrectTemplate - This test is failing on BB due to timing issues waiting for the Paste button to appear. | test | fix testcustomenginefromincorrecttemplate this test is failing on bb due to timing issues waiting for the paste button to appear | 1 |
133,372 | 18,297,373,229 | IssuesEvent | 2021-10-05 21:54:12 | vipinsun/blockchain-carbon-accounting | https://api.github.com/repos/vipinsun/blockchain-carbon-accounting | closed | WS-2016-0075 (Medium) detected in github.com/smartystreets/goconvey-v1.6.4 - autoclosed | security vulnerability | ## WS-2016-0075 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/smartystreets/goconvey-v1.6.4</b></p></summary>
<p>Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.</p>
<p>
Dependency Hierarchy:
- github.com/hyperledger/fabric-v1.4.1 (Root Library)
- github.com/spf13/viper-v1.7.1
- github.com/go-ini/ini-v1.51.0
- :x: **github.com/smartystreets/goconvey-v1.6.4** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vipinsun/blockchain-carbon-accounting/commit/d388e16464e00b9ce84df0d247029f534a429b90">d388e16464e00b9ce84df0d247029f534a429b90</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular expression denial of service vulnerability in the moment package, by using a specific 40 characters long string in the "format" method.
<p>Publish Date: 2016-10-24
<p>URL: <a href=https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9>WS-2016-0075</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/pull/3525">https://github.com/moment/moment/pull/3525</a></p>
<p>Release Date: 2016-10-24</p>
<p>Fix Resolution: 2.15.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2016-0075 (Medium) detected in github.com/smartystreets/goconvey-v1.6.4 - autoclosed - ## WS-2016-0075 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/smartystreets/goconvey-v1.6.4</b></p></summary>
<p>Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.</p>
<p>
Dependency Hierarchy:
- github.com/hyperledger/fabric-v1.4.1 (Root Library)
- github.com/spf13/viper-v1.7.1
- github.com/go-ini/ini-v1.51.0
- :x: **github.com/smartystreets/goconvey-v1.6.4** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vipinsun/blockchain-carbon-accounting/commit/d388e16464e00b9ce84df0d247029f534a429b90">d388e16464e00b9ce84df0d247029f534a429b90</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular expression denial of service vulnerability in the moment package, by using a specific 40 characters long string in the "format" method.
<p>Publish Date: 2016-10-24
<p>URL: <a href=https://github.com/moment/moment/commit/663f33e333212b3800b63592cd8e237ac8fabdb9>WS-2016-0075</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/pull/3525">https://github.com/moment/moment/pull/3525</a></p>
<p>Release Date: 2016-10-24</p>
<p>Fix Resolution: 2.15.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | ws medium detected in github com smartystreets goconvey autoclosed ws medium severity vulnerability vulnerable library github com smartystreets goconvey go testing in the browser integrates with go test write behavioral tests in go dependency hierarchy github com hyperledger fabric root library github com viper github com go ini ini x github com smartystreets goconvey vulnerable library found in head commit a href found in base branch main vulnerability details regular expression denial of service vulnerability in the moment package by using a specific characters long string in the format method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
374,249 | 26,108,282,205 | IssuesEvent | 2022-12-27 15:57:55 | TrixiEther/SmileBot | https://api.github.com/repos/TrixiEther/SmileBot | closed | Command processing mechanism | documentation enhancement | **Develop a bot management mechanism on the server**
Comand template:
![bot control word] [main commands] [flags]
Implement(for now):
1. Bot initialization on the server
2. Removing server-related information
3. Bot reinitialization on the server
4. Emoji usage statistics(type, used in message, used as reaction, summary)
| 1.0 | Command processing mechanism - **Develop a bot management mechanism on the server**
Comand template:
![bot control word] [main commands] [flags]
Implement(for now):
1. Bot initialization on the server
2. Removing server-related information
3. Bot reinitialization on the server
4. Emoji usage statistics(type, used in message, used as reaction, summary)
| non_test | command processing mechanism develop a bot management mechanism on the server comand template implement for now bot initialization on the server removing server related information bot reinitialization on the server emoji usage statistics type used in message used as reaction summary | 0 |
339,096 | 10,241,918,660 | IssuesEvent | 2019-08-20 02:34:41 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | API/SDK configure CUSTOM_LOGO doesn't take affect | component:api priority:low state:needs_devel type:bug | ##### ISSUE TYPE
- Bug Report
##### SUMMARY
When utilizing the UI, one can head to Settings/UI and upload an image file. This "replaces" the Image file on the login screen. One can also view this image in a "data:url" format at api/v2/settings/ui CUSTOM_LOGO: "data:image/png;base64,Base64EncodedStringhere="
Attempting the same via the API or tower-cli (example below) produces no "error" and updates the CUSTOM_LOGO attribute properly when viewed in the API, but doesn't actually "take effect" in that the logo is not actually changed at login.
##### ENVIRONMENT
* AWX version: 6.1.0
* AWX install method: openshift
* Ansible version: Whatever comes built-in with 6.1.0's ansible virtualenv
* Operating System: Openshift 3.11 running on RHEL 7.6
* Web Browser: Firefox
##### STEPS TO REPRODUCE
Reproduce using the tower-cli:
```
with open("/path/to/a/png/file.png, 'rb') as logofile:
logo_64_encode = base64.b64encode(logofile.read()).decode()
created_data_url = 'data:image/png;base64,{}'.format(logo_64_encode)
# Placeholder: Setup AWX auth - however you do it since there's 10,000 ways
# Set a bunch of other settings as examples - these all WORK FINE
# Assuming you have some valid data in login_message and custom_venv_paths variables
settings_resource.modify('CUSTOM_LOGIN_INFO', login_message)
settings_resource.modify('CUSTOM_VENV_PATHS', json.dumps(custom_venv_paths))
# Setup AWX Custom Logo using above data:url - this shows a successful return and updates CUSTOM_LOGO - but never displays
settings_resource.modify('CUSTOM_LOGO', created_data_url)
```
##### EXPECTED RESULTS
if the tower-cli call succeeded (no errors) an i can see the encoded data:url in CUSTOM_LOGO - i would expect a Custom Image to be displayed at the login prompt.
##### ACTUAL RESULTS
No image is displayed. Uploading the same image via the UI's "browse" option causes it to display immediately, same apparent data:url object written to CUSTOM_LOGO
##### ADDITIONAL INFORMATION
N/A
| 1.0 | API/SDK configure CUSTOM_LOGO doesn't take affect - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
When utilizing the UI, one can head to Settings/UI and upload an image file. This "replaces" the Image file on the login screen. One can also view this image in a "data:url" format at api/v2/settings/ui CUSTOM_LOGO: "data:image/png;base64,Base64EncodedStringhere="
Attempting the same via the API or tower-cli (example below) produces no "error" and updates the CUSTOM_LOGO attribute properly when viewed in the API, but doesn't actually "take effect" in that the logo is not actually changed at login.
##### ENVIRONMENT
* AWX version: 6.1.0
* AWX install method: openshift
* Ansible version: Whatever comes built-in with 6.1.0's ansible virtualenv
* Operating System: Openshift 3.11 running on RHEL 7.6
* Web Browser: Firefox
##### STEPS TO REPRODUCE
Reproduce using the tower-cli:
```
with open("/path/to/a/png/file.png, 'rb') as logofile:
logo_64_encode = base64.b64encode(logofile.read()).decode()
created_data_url = 'data:image/png;base64,{}'.format(logo_64_encode)
# Placeholder: Setup AWX auth - however you do it since there's 10,000 ways
# Set a bunch of other settings as examples - these all WORK FINE
# Assuming you have some valid data in login_message and custom_venv_paths variables
settings_resource.modify('CUSTOM_LOGIN_INFO', login_message)
settings_resource.modify('CUSTOM_VENV_PATHS', json.dumps(custom_venv_paths))
# Setup AWX Custom Logo using above data:url - this shows a successful return and updates CUSTOM_LOGO - but never displays
settings_resource.modify('CUSTOM_LOGO', created_data_url)
```
##### EXPECTED RESULTS
if the tower-cli call succeeded (no errors) an i can see the encoded data:url in CUSTOM_LOGO - i would expect a Custom Image to be displayed at the login prompt.
##### ACTUAL RESULTS
No image is displayed. Uploading the same image via the UI's "browse" option causes it to display immediately, same apparent data:url object written to CUSTOM_LOGO
##### ADDITIONAL INFORMATION
N/A
| non_test | api sdk configure custom logo doesn t take affect issue type bug report summary when utilizing the ui one can head to settings ui and upload an image file this replaces the image file on the login screen one can also view this image in a data url format at api settings ui custom logo data image png attempting the same via the api or tower cli example below produces no error and updates the custom logo attribute properly when viewed in the api but doesn t actually take effect in that the logo is not actually changed at login environment awx version awx install method openshift ansible version whatever comes built in with s ansible virtualenv operating system openshift running on rhel web browser firefox steps to reproduce reproduce using the tower cli with open path to a png file png rb as logofile logo encode logofile read decode created data url data image png format logo encode placeholder setup awx auth however you do it since there s ways set a bunch of other settings as examples these all work fine assuming you have some valid data in login message and custom venv paths variables settings resource modify custom login info login message settings resource modify custom venv paths json dumps custom venv paths setup awx custom logo using above data url this shows a successful return and updates custom logo but never displays settings resource modify custom logo created data url expected results if the tower cli call succeeded no errors an i can see the encoded data url in custom logo i would expect a custom image to be displayed at the login prompt actual results no image is displayed uploading the same image via the ui s browse option causes it to display immediately same apparent data url object written to custom logo additional information n a | 0 |
564,815 | 16,742,176,557 | IssuesEvent | 2021-06-11 11:13:36 | ICSM/ampere | https://api.github.com/repos/ICSM/ampere | closed | Implement priors | high priority likelihood model improvements optimsers | Implement a range of generic priors for all models (e.g. flat, flat within some bounds, etc). Not all possible optimisers will be able to understand this, so it may additional code in optimiser objects to maintain uniformity. | 1.0 | Implement priors - Implement a range of generic priors for all models (e.g. flat, flat within some bounds, etc). Not all possible optimisers will be able to understand this, so it may additional code in optimiser objects to maintain uniformity. | non_test | implement priors implement a range of generic priors for all models e g flat flat within some bounds etc not all possible optimisers will be able to understand this so it may additional code in optimiser objects to maintain uniformity | 0 |
244,644 | 20,682,278,918 | IssuesEvent | 2022-03-10 14:55:25 | cernanalysispreservation/analysispreservation.cern.ch | https://api.github.com/repos/cernanalysispreservation/analysispreservation.cern.ch | closed | ui: Test for the AsyncSelect TextWidget | Topic: UI Need: software tests | - [ ] Fill in the cadi_id and on success check that the information is updated to the `information from cadi_id database` tab
- [ ] Fill in a random cadi_id and check that the informations is updated to undefined value to the `information from cadi_id database` tab
- [ ] The information to the `Infotmation from cadi database` should be always undefined unless there is success to the autofill axios call | 1.0 | ui: Test for the AsyncSelect TextWidget - - [ ] Fill in the cadi_id and on success check that the information is updated to the `information from cadi_id database` tab
- [ ] Fill in a random cadi_id and check that the informations is updated to undefined value to the `information from cadi_id database` tab
- [ ] The information to the `Infotmation from cadi database` should be always undefined unless there is success to the autofill axios call | test | ui test for the asyncselect textwidget fill in the cadi id and on success check that the information is updated to the information from cadi id database tab fill in a random cadi id and check that the informations is updated to undefined value to the information from cadi id database tab the information to the infotmation from cadi database should be always undefined unless there is success to the autofill axios call | 1 |
324,421 | 27,808,339,720 | IssuesEvent | 2023-03-17 22:46:55 | microsoft/playwright | https://api.github.com/repos/microsoft/playwright | closed | [BUG]Process.env variables lost from project dependencies | feature-test-runner v1.32 | <!-- ⚠️⚠️ Do not delete this template ⚠️⚠️ -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Playwright release to see if your issue has already been fixed -->
<!-- 💡 Provide enough information for us to be able to reproduce your issue locally -->
### System info
- Playwright Version: [v1.31]
- Operating System: [Windows 11]
- Browser: [All]
- Other info: Code in TS
### Source code
- [X] I provided exact source code that allows reproducing the issue locally.
<!-- For simple cases, please provide a self-contained test file along with the config file -->
<!-- For larger cases, you can provide a GitHub repo you created for this issue -->
<!-- If we can not reproduce the problem locally, we won't be able to act on it -->
<!-- You can still file without the exact code and we will try to help, but if we can't repro, it will be closed -->
**Config file**
```js
// playwright.config.ts
const defaultConfiguration: any = {
actionTimeout: process.env.DEBUG ? 30_000 : 60_000,
navigationTimeout: process.env.DEBUG ? 30_000 : 60_000,
headless: process.env.CI === 'true' ? true : false,
locale: 'en-US',
viewport: { width: 1920, height: 1080 },
ignoreHTTPSErrors: true,
baseURL: Global.getBaseUrl(),
launchOptions: {
args: ['--incognito'],
slowMo: 100,
},
screenshot: {
mode: 'only-on-failure',
fullPage: true,
},
trace: 'retain-on-failure',
video: 'retain-on-failure',
};
const config: PlaywrightTestConfig = {
projects: [
{
name: 'UI_TESTS_SETUP',
testDir: './tests/ui',
testMatch: '*global.ts',
use: defaultConfiguration,
expect: {
timeout: process.env.DEBUG ? 30_000 : 60_000,
},
},
{
name: 'UI_SESSIONS',
testDir: './tests/ui',
testMatch: '*envcheck.test.ts',
use: defaultConfiguration,
expect: {
timeout: process.env.DEBUG ? 30_000 : 60_000,
},
dependencies: ['UI_TESTS_SETUP'],
},
```
**Test file (self-contained)**
2 files: setup & test
setup:
import test from '@playwright/test';
import { logger } from '../../logger';
test.describe('Setup', async () => {
test('SM-SETUP/STEP-1 venue setup', async () => {
logger.info('UI setup started');
process.env['A'] = 'TEST';
logger.info(`Setup->reading process.env["A"]=${process.env['A']}`);
});
});
test:
import test from '@playwright/test';
import { logger } from '../../logger';
test.describe.only('test', async () => {
test(`test`, async () => {
logger.info(`Project Y->reading process.env["A"]=${process.env['A']}`);
});
});
**Steps**
Run the tests
**Expected**
Process.env vars propagated between projects if there is dependency
**Actual**
Process.env vars not propagated between projects if there is dependency
INFO
**Setup->reading process.env["A"]=TEST**
INFO
**Project Y->reading process.env["A"]=undefined**
| 1.0 | [BUG]Process.env variables lost from project dependencies - <!-- ⚠️⚠️ Do not delete this template ⚠️⚠️ -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Playwright release to see if your issue has already been fixed -->
<!-- 💡 Provide enough information for us to be able to reproduce your issue locally -->
### System info
- Playwright Version: [v1.31]
- Operating System: [Windows 11]
- Browser: [All]
- Other info: Code in TS
### Source code
- [X] I provided exact source code that allows reproducing the issue locally.
<!-- For simple cases, please provide a self-contained test file along with the config file -->
<!-- For larger cases, you can provide a GitHub repo you created for this issue -->
<!-- If we can not reproduce the problem locally, we won't be able to act on it -->
<!-- You can still file without the exact code and we will try to help, but if we can't repro, it will be closed -->
**Config file**
```js
// playwright.config.ts
const defaultConfiguration: any = {
actionTimeout: process.env.DEBUG ? 30_000 : 60_000,
navigationTimeout: process.env.DEBUG ? 30_000 : 60_000,
headless: process.env.CI === 'true' ? true : false,
locale: 'en-US',
viewport: { width: 1920, height: 1080 },
ignoreHTTPSErrors: true,
baseURL: Global.getBaseUrl(),
launchOptions: {
args: ['--incognito'],
slowMo: 100,
},
screenshot: {
mode: 'only-on-failure',
fullPage: true,
},
trace: 'retain-on-failure',
video: 'retain-on-failure',
};
const config: PlaywrightTestConfig = {
projects: [
{
name: 'UI_TESTS_SETUP',
testDir: './tests/ui',
testMatch: '*global.ts',
use: defaultConfiguration,
expect: {
timeout: process.env.DEBUG ? 30_000 : 60_000,
},
},
{
name: 'UI_SESSIONS',
testDir: './tests/ui',
testMatch: '*envcheck.test.ts',
use: defaultConfiguration,
expect: {
timeout: process.env.DEBUG ? 30_000 : 60_000,
},
dependencies: ['UI_TESTS_SETUP'],
},
```
**Test file (self-contained)**
2 files: setup & test
setup:
import test from '@playwright/test';
import { logger } from '../../logger';
test.describe('Setup', async () => {
test('SM-SETUP/STEP-1 venue setup', async () => {
logger.info('UI setup started');
process.env['A'] = 'TEST';
logger.info(`Setup->reading process.env["A"]=${process.env['A']}`);
});
});
test:
import test from '@playwright/test';
import { logger } from '../../logger';
test.describe.only('test', async () => {
test(`test`, async () => {
logger.info(`Project Y->reading process.env["A"]=${process.env['A']}`);
});
});
**Steps**
Run the tests
**Expected**
Process.env vars propagated between projects if there is dependency
**Actual**
Process.env vars not propagated between projects if there is dependency
INFO
**Setup->reading process.env["A"]=TEST**
INFO
**Project Y->reading process.env["A"]=undefined**
| test | process env variables lost from project dependencies system info playwright version operating system browser other info code in ts source code i provided exact source code that allows reproducing the issue locally config file js playwright config ts const defaultconfiguration any actiontimeout process env debug navigationtimeout process env debug headless process env ci true true false locale en us viewport width height ignorehttpserrors true baseurl global getbaseurl launchoptions args slowmo screenshot mode only on failure fullpage true trace retain on failure video retain on failure const config playwrighttestconfig projects name ui tests setup testdir tests ui testmatch global ts use defaultconfiguration expect timeout process env debug name ui sessions testdir tests ui testmatch envcheck test ts use defaultconfiguration expect timeout process env debug dependencies test file self contained files setup test setup import test from playwright test import logger from logger test describe setup async test sm setup step venue setup async logger info ui setup started process env test logger info setup reading process env process env test import test from playwright test import logger from logger test describe only test async test test async logger info project y reading process env process env steps run the tests expected process env vars propagated between projects if there is dependency actual process env vars not propagated between projects if there is dependency info setup reading process env test info project y reading process env undefined | 1 |
258,187 | 27,563,865,338 | IssuesEvent | 2023-03-08 01:12:00 | billmcchesney1/flow | https://api.github.com/repos/billmcchesney1/flow | opened | CVE-2020-9492 (High) detected in hadoop-core-1.2.1.jar | security vulnerability | ## CVE-2020-9492 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hadoop-core-1.2.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar</p>
<p>
Dependency Hierarchy:
- lzo-hadoop-1.0.6.jar (Root Library)
- :x: **hadoop-core-1.2.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Hadoop 3.2.0 to 3.2.1, 3.0.0-alpha1 to 3.1.3, and 2.0.0-alpha to 2.10.0, WebHDFS client might send SPNEGO authorization header to remote URL without proper verification.
<p>Publish Date: 2021-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9492>CVE-2020-9492</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rca4516b00b55b347905df45e5d0432186248223f30497db87aba8710@%3Cannounce.apache.org%3E">https://lists.apache.org/thread.html/rca4516b00b55b347905df45e5d0432186248223f30497db87aba8710@%3Cannounce.apache.org%3E</a></p>
<p>Release Date: 2021-01-26</p>
<p>Fix Resolution: org.apache.hadoop:hadoop-hdfs-client:2.10.1,org.apache.hadoop:hadoop-hdfs-client:3.1.4,org.apache.hadoop:hadoop-hdfs-client:3.2.2</p>
</p>
</details>
<p></p>
| True | CVE-2020-9492 (High) detected in hadoop-core-1.2.1.jar - ## CVE-2020-9492 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hadoop-core-1.2.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar</p>
<p>
Dependency Hierarchy:
- lzo-hadoop-1.0.6.jar (Root Library)
- :x: **hadoop-core-1.2.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Hadoop 3.2.0 to 3.2.1, 3.0.0-alpha1 to 3.1.3, and 2.0.0-alpha to 2.10.0, WebHDFS client might send SPNEGO authorization header to remote URL without proper verification.
<p>Publish Date: 2021-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9492>CVE-2020-9492</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rca4516b00b55b347905df45e5d0432186248223f30497db87aba8710@%3Cannounce.apache.org%3E">https://lists.apache.org/thread.html/rca4516b00b55b347905df45e5d0432186248223f30497db87aba8710@%3Cannounce.apache.org%3E</a></p>
<p>Release Date: 2021-01-26</p>
<p>Fix Resolution: org.apache.hadoop:hadoop-hdfs-client:2.10.1,org.apache.hadoop:hadoop-hdfs-client:3.1.4,org.apache.hadoop:hadoop-hdfs-client:3.2.2</p>
</p>
</details>
<p></p>
| non_test | cve high detected in hadoop core jar cve high severity vulnerability vulnerable library hadoop core jar path to dependency file pom xml path to vulnerable library home wss scanner repository org apache hadoop hadoop core hadoop core jar dependency hierarchy lzo hadoop jar root library x hadoop core jar vulnerable library found in base branch master vulnerability details in apache hadoop to to and alpha to webhdfs client might send spnego authorization header to remote url without proper verification publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache hadoop hadoop hdfs client org apache hadoop hadoop hdfs client org apache hadoop hadoop hdfs client | 0 |
223,287 | 17,579,510,220 | IssuesEvent | 2021-08-16 04:30:14 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Make windows tests resilient to multiple cloud/installer scenarios w taints, selectors, rc's | area/test kind/feature sig/windows lifecycle/rotten needs-triage | <!-- Please only use this template for submitting enhancement requests -->
#### What would you like to be added:
- potentially use runtimeClass as the common mechanism for all e2es
- add tolerations for windows nodes to all windows e2es
- add node selectors for all windows nodes to all windows e2es
#### Why is this needed:
Right now, tests have variable results on EKS/AKS/TKG... different clouds, because some clouds may taint, some may label, and some may require runtime classes
cc @ravisantoshgudimetla | 1.0 | Make windows tests resilient to multiple cloud/installer scenarios w taints, selectors, rc's - <!-- Please only use this template for submitting enhancement requests -->
#### What would you like to be added:
- potentially use runtimeClass as the common mechanism for all e2es
- add tolerations for windows nodes to all windows e2es
- add node selectors for all windows nodes to all windows e2es
#### Why is this needed:
Right now, tests have variable results on EKS/AKS/TKG... different clouds, because some clouds may taint, some may label, and some may require runtime classes
cc @ravisantoshgudimetla | test | make windows tests resilient to multiple cloud installer scenarios w taints selectors rc s what would you like to be added potentially use runtimeclass as the common mechanism for all add tolerations for windows nodes to all windows add node selectors for all windows nodes to all windows why is this needed right now tests have variable results on eks aks tkg different clouds because some clouds may taint some may label and some may require runtime classes cc ravisantoshgudimetla | 1 |
404,214 | 27,454,251,593 | IssuesEvent | 2023-03-02 19:55:11 | notaryproject/notaryproject.dev | https://api.github.com/repos/notaryproject/notaryproject.dev | closed | Information Architecture Proposal | p1-high documentation | Please find an editable version of the IA Proposal here:
[notaryproject.dev Information Architecture Proposal](https://hackmd.io/@nate-double-u/H1JkSDYWF)
(Once the draft is complete, I'll bring it back here to be used for issue tracking) | 1.0 | Information Architecture Proposal - Please find an editable version of the IA Proposal here:
[notaryproject.dev Information Architecture Proposal](https://hackmd.io/@nate-double-u/H1JkSDYWF)
(Once the draft is complete, I'll bring it back here to be used for issue tracking) | non_test | information architecture proposal please find an editable version of the ia proposal here once the draft is complete i ll bring it back here to be used for issue tracking | 0 |
133,107 | 10,790,092,050 | IssuesEvent | 2019-11-05 13:34:05 | ICIJ/datashare | https://api.github.com/repos/ICIJ/datashare | closed | Stars disappear on page reload | bug front need testing | Way to reproduce :
- Open a page on your favourite project
- Star as many documents as wanted
- Refresh your page
=> All the stars disappeared ! | 1.0 | Stars disappear on page reload - Way to reproduce :
- Open a page on your favourite project
- Star as many documents as wanted
- Refresh your page
=> All the stars disappeared ! | test | stars disappear on page reload way to reproduce open a page on your favourite project star as many documents as wanted refresh your page all the stars disappeared | 1 |
357,559 | 25,176,404,904 | IssuesEvent | 2022-11-11 09:39:03 | comicalromance/pe | https://api.github.com/repos/comicalromance/pe | opened | FAQ section Could be more Fleshed Out | type.DocumentationBug severity.Low | 
The FAQ only contains one answered question, which can leave users with other questions without an answer.
<!--session: 1668157838279-3acbd46c-7df4-45b6-ba4f-71cbf0ea8058-->
<!--Version: Web v3.4.4--> | 1.0 | FAQ section Could be more Fleshed Out - 
The FAQ only contains one answered question, which can leave users with other questions without an answer.
<!--session: 1668157838279-3acbd46c-7df4-45b6-ba4f-71cbf0ea8058-->
<!--Version: Web v3.4.4--> | non_test | faq section could be more fleshed out the faq only contains one answered question which can leave users with other questions without an answer | 0 |
6,261 | 2,830,869,153 | IssuesEvent | 2015-05-24 04:41:37 | Neefay/BromA-A3-Framework-Mark3 | https://api.github.com/repos/Neefay/BromA-A3-Framework-Mark3 | closed | Camps are spawning wrong loadouts on headless client | bug needs testing | Box ammo boxes and units who are spawned to support are fucked up. | 1.0 | Camps are spawning wrong loadouts on headless client - Box ammo boxes and units who are spawned to support are fucked up. | test | camps are spawning wrong loadouts on headless client box ammo boxes and units who are spawned to support are fucked up | 1 |
263,803 | 23,083,518,747 | IssuesEvent | 2022-07-26 09:20:17 | stargate/stargate | https://api.github.com/repos/stargate/stargate | opened | `StargateV1ConfigurationSourceProviderTest` failing on v2 branch | test | ```
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.dwconfig.StargateV1ConfigurationSourceProviderTest
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No value for configuration override System Property 'stargate.configurationFile.testapi'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration override file '/workspace/metrics-jersey/testapi-config.yaml' found
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration overrides found, will use the default config resource 'testapi-config.yaml'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=nosuchmodule): No value for configuration override System Property 'stargate.configurationFile.nosuchmodule'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=nosuchmodule): No configuration override file '/workspace/metrics-jersey/noconfig.yaml' found
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=nosuchmodule): No configuration overrides found, will use the default config resource 'noconfig.yaml'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): Found configuration override System Property 'stargate.configurationFile.testapi'; will use config file 'src/test/resources/alt-config.yaml'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No value for configuration override System Property 'stargate.configurationFile.testapi'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration override file '/workspace/metrics-jersey/testapi-config.yaml' found
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration overrides found, will use the default config resource 'testapi-config.yaml'
Step #4 - "build": [ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.139 s <<< FAILURE! - in io.stargate.metrics.jersey.dwconfig.StargateV1ConfigurationSourceProviderTest
Step #4 - "build": [ERROR] testNamingConventionBased Time elapsed: 0.016 s <<< FAILURE!
Step #4 - "build": org.opentest4j.AssertionFailedError:
Step #4 - "build":
Step #4 - "build": expected: "config: custom"
Step #4 - "build": but was: "config: default"
Step #4 - "build": at io.stargate.metrics.jersey.dwconfig.StargateV1ConfigurationSourceProviderTest.testNamingConventionBased(StargateV1ConfigurationSourceProviderTest.java:57)
Step #4 - "build":
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.listener.CounterRequestEventListenerTest
Step #4 - "build": [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.65 s - in io.stargate.metrics.jersey.listener.CounterRequestEventListenerTest
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.listener.CounterApplicationEventListenerTest
Step #4 - "build": [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.766 s - in io.stargate.metrics.jersey.listener.CounterApplicationEventListenerTest
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.config.SystemPropsMetricsListenerConfigTest
Step #4 - "build": [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 s - in io.stargate.metrics.jersey.config.SystemPropsMetricsListenerConfigTest
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.MetricsBinderTest
Step #4 - "build": [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.794 s - in io.stargate.metrics.jersey.MetricsBinderTest
Step #4 - "build": [INFO]
Step #4 - "build": [INFO] Results:
Step #4 - "build": [INFO]
Step #4 - "build": [ERROR] Failures:
Step #4 - "build": [ERROR] StargateV1ConfigurationSourceProviderTest.testNamingConventionBased:57
Step #4 - "build": expected: "config: custom"
Step #4 - "build": but was: "config: default"
Step #4 - "build": [INFO]
Step #4 - "build": [ERROR] Tests run: 50, Failures: 1, Errors: 0, Skipped: 0
``` | 1.0 | `StargateV1ConfigurationSourceProviderTest` failing on v2 branch - ```
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.dwconfig.StargateV1ConfigurationSourceProviderTest
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No value for configuration override System Property 'stargate.configurationFile.testapi'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration override file '/workspace/metrics-jersey/testapi-config.yaml' found
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration overrides found, will use the default config resource 'testapi-config.yaml'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=nosuchmodule): No value for configuration override System Property 'stargate.configurationFile.nosuchmodule'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=nosuchmodule): No configuration override file '/workspace/metrics-jersey/noconfig.yaml' found
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=nosuchmodule): No configuration overrides found, will use the default config resource 'noconfig.yaml'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): Found configuration override System Property 'stargate.configurationFile.testapi'; will use config file 'src/test/resources/alt-config.yaml'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No value for configuration override System Property 'stargate.configurationFile.testapi'
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration override file '/workspace/metrics-jersey/testapi-config.yaml' found
Step #4 - "build": INFO [StargateV1ConfigurationSourceProvider](module=testapi): No configuration overrides found, will use the default config resource 'testapi-config.yaml'
Step #4 - "build": [ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.139 s <<< FAILURE! - in io.stargate.metrics.jersey.dwconfig.StargateV1ConfigurationSourceProviderTest
Step #4 - "build": [ERROR] testNamingConventionBased Time elapsed: 0.016 s <<< FAILURE!
Step #4 - "build": org.opentest4j.AssertionFailedError:
Step #4 - "build":
Step #4 - "build": expected: "config: custom"
Step #4 - "build": but was: "config: default"
Step #4 - "build": at io.stargate.metrics.jersey.dwconfig.StargateV1ConfigurationSourceProviderTest.testNamingConventionBased(StargateV1ConfigurationSourceProviderTest.java:57)
Step #4 - "build":
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.listener.CounterRequestEventListenerTest
Step #4 - "build": [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.65 s - in io.stargate.metrics.jersey.listener.CounterRequestEventListenerTest
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.listener.CounterApplicationEventListenerTest
Step #4 - "build": [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.766 s - in io.stargate.metrics.jersey.listener.CounterApplicationEventListenerTest
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.config.SystemPropsMetricsListenerConfigTest
Step #4 - "build": [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 s - in io.stargate.metrics.jersey.config.SystemPropsMetricsListenerConfigTest
Step #4 - "build": [INFO] Running io.stargate.metrics.jersey.MetricsBinderTest
Step #4 - "build": [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.794 s - in io.stargate.metrics.jersey.MetricsBinderTest
Step #4 - "build": [INFO]
Step #4 - "build": [INFO] Results:
Step #4 - "build": [INFO]
Step #4 - "build": [ERROR] Failures:
Step #4 - "build": [ERROR] StargateV1ConfigurationSourceProviderTest.testNamingConventionBased:57
Step #4 - "build": expected: "config: custom"
Step #4 - "build": but was: "config: default"
Step #4 - "build": [INFO]
Step #4 - "build": [ERROR] Tests run: 50, Failures: 1, Errors: 0, Skipped: 0
``` | test | failing on branch step build running io stargate metrics jersey dwconfig step build info module testapi no value for configuration override system property stargate configurationfile testapi step build info module testapi no configuration override file workspace metrics jersey testapi config yaml found step build info module testapi no configuration overrides found will use the default config resource testapi config yaml step build info module nosuchmodule no value for configuration override system property stargate configurationfile nosuchmodule step build info module nosuchmodule no configuration override file workspace metrics jersey noconfig yaml found step build info module nosuchmodule no configuration overrides found will use the default config resource noconfig yaml step build info module testapi found configuration override system property stargate configurationfile testapi will use config file src test resources alt config yaml step build info module testapi no value for configuration override system property stargate configurationfile testapi step build info module testapi no configuration override file workspace metrics jersey testapi config yaml found step build info module testapi no configuration overrides found will use the default config resource testapi config yaml step build tests run failures errors skipped time elapsed s failure in io stargate metrics jersey dwconfig step build testnamingconventionbased time elapsed s failure step build org assertionfailederror step build step build expected config custom step build but was config default step build at io stargate metrics jersey dwconfig testnamingconventionbased java step build step build running io stargate metrics jersey listener counterrequesteventlistenertest step build tests run failures errors skipped time elapsed s in io stargate metrics jersey listener counterrequesteventlistenertest step build running io stargate metrics jersey listener counterapplicationeventlistenertest step build tests run failures errors skipped time elapsed s in io stargate metrics jersey listener counterapplicationeventlistenertest step build running io stargate metrics jersey config systempropsmetricslistenerconfigtest step build tests run failures errors skipped time elapsed s in io stargate metrics jersey config systempropsmetricslistenerconfigtest step build running io stargate metrics jersey metricsbindertest step build tests run failures errors skipped time elapsed s in io stargate metrics jersey metricsbindertest step build step build results step build step build failures step build testnamingconventionbased step build expected config custom step build but was config default step build step build tests run failures errors skipped | 1 |
55,865 | 6,494,278,774 | IssuesEvent | 2017-08-21 21:02:04 | Hiryus/screeps-projet | https://api.github.com/repos/Hiryus/screeps-projet | closed | Check if it is possible to code unit tests | test / poc | * Check if unit testing is possible, at least for some parts of the code.
* Check if it is possible to run entire game ticks based on initial situation and assert results. | 1.0 | Check if it is possible to code unit tests - * Check if unit testing is possible, at least for some parts of the code.
* Check if it is possible to run entire game ticks based on initial situation and assert results. | test | check if it is possible to code unit tests check if unit testing is possible at least for some parts of the code check if it is possible to run entire game ticks based on initial situation and assert results | 1 |
138,497 | 30,875,013,509 | IssuesEvent | 2023-08-03 13:47:47 | microsoft/devhome | https://api.github.com/repos/microsoft/devhome | closed | DevHome.Settings projects includes helper folder that does not exist | Issue-Bug Area-Code-Health In-PR | ### Dev Home version
Commit 800e3543df573fc023729c47fd0095127dc03d01
### Windows build number
N/A
### Other software
_No response_
### Steps to reproduce the bug
Open the DevHome.sln and open the DevHome.Settings project. Observe that the helpers folders does not exist but is show because it is included [here](https://github.com/microsoft/devhome/blob/main/settings/DevHome.Settings/DevHome.Settings.csproj#L69).
### Expected result
_No response_
### Actual result
Folder should not be included or actually exist.
### Included System Information
_No response_
### Included Extensions Information
_No response_
### Additional information
I can create a PR for this if that's fine. | 1.0 | DevHome.Settings projects includes helper folder that does not exist - ### Dev Home version
Commit 800e3543df573fc023729c47fd0095127dc03d01
### Windows build number
N/A
### Other software
_No response_
### Steps to reproduce the bug
Open the DevHome.sln and open the DevHome.Settings project. Observe that the helpers folders does not exist but is show because it is included [here](https://github.com/microsoft/devhome/blob/main/settings/DevHome.Settings/DevHome.Settings.csproj#L69).
### Expected result
_No response_
### Actual result
Folder should not be included or actually exist.
### Included System Information
_No response_
### Included Extensions Information
_No response_
### Additional information
I can create a PR for this if that's fine. | non_test | devhome settings projects includes helper folder that does not exist dev home version commit windows build number n a other software no response steps to reproduce the bug open the devhome sln and open the devhome settings project observe that the helpers folders does not exist but is show because it is included expected result no response actual result folder should not be included or actually exist included system information no response included extensions information no response additional information i can create a pr for this if that s fine | 0 |
815,171 | 30,540,354,862 | IssuesEvent | 2023-07-19 20:49:06 | GoogleCloudPlatform/cloud-sql-proxy | https://api.github.com/repos/GoogleCloudPlatform/cloud-sql-proxy | closed | How to use term_timeout | priority: p2 type: docs type: cleanup | ### Description
The help lists -term_timeout is an option that can be added to the command to allow it to self-terminate when all connections are either completed or when it times out in the seconds given.
There are no syntax usage examples given, anywhere, on the internet.
I can get it working locally on my mac, but I cannot get it working in the gce_proxy container provided by Google for use in K8s.
### Potential Solution
A potential solution to this would probably be actually providing example usage in all your documentation rather than leaving us to guess every single time we have to do literally anything in GCP.
### Additional Details
Additionally, it might be ok to just simply tell us why this doesn't work in the container like it does locally on my mac. | 1.0 | How to use term_timeout - ### Description
The help lists -term_timeout is an option that can be added to the command to allow it to self-terminate when all connections are either completed or when it times out in the seconds given.
There are no syntax usage examples given, anywhere, on the internet.
I can get it working locally on my mac, but I cannot get it working in the gce_proxy container provided by Google for use in K8s.
### Potential Solution
A potential solution to this would probably be actually providing example usage in all your documentation rather than leaving us to guess every single time we have to do literally anything in GCP.
### Additional Details
Additionally, it might be ok to just simply tell us why this doesn't work in the container like it does locally on my mac. | non_test | how to use term timeout description the help lists term timeout is an option that can be added to the command to allow it to self terminate when all connections are either completed or when it times out in the seconds given there are no syntax usage examples given anywhere on the internet i can get it working locally on my mac but i cannot get it working in the gce proxy container provided by google for use in potential solution a potential solution to this would probably be actually providing example usage in all your documentation rather than leaving us to guess every single time we have to do literally anything in gcp additional details additionally it might be ok to just simply tell us why this doesn t work in the container like it does locally on my mac | 0 |
224,750 | 17,200,098,587 | IssuesEvent | 2021-07-17 03:36:48 | kubernetes/kubeadm | https://api.github.com/repos/kubernetes/kubeadm | opened | update references to legacy artifacts locations (/bazel, /ci-cross, etc) | help wanted kind/documentation priority/backlog | see this PR:
https://github.com/kubernetes/kubeadm/pull/2528
> e.g. many of the uris in here are already dead, let alone things like
/bazel and /ci-cross not existing in gs://k8s-release-dev
there is a new GCS bucket for artifacts that is missing some of these paths:
https://console.cloud.google.com/storage/browser/k8s-release-dev
- search in this repository
- replace `/ci-cross` or `/bazel` with `/ci`
- use example paths such as `k8s-release-dev/ci/<version>/bin/linux/amd64`
| 1.0 | update references to legacy artifacts locations (/bazel, /ci-cross, etc) - see this PR:
https://github.com/kubernetes/kubeadm/pull/2528
> e.g. many of the uris in here are already dead, let alone things like
/bazel and /ci-cross not existing in gs://k8s-release-dev
there is a new GCS bucket for artifacts that is missing some of these paths:
https://console.cloud.google.com/storage/browser/k8s-release-dev
- search in this repository
- replace `/ci-cross` or `/bazel` with `/ci`
- use example paths such as `k8s-release-dev/ci/<version>/bin/linux/amd64`
| non_test | update references to legacy artifacts locations bazel ci cross etc see this pr e g many of the uris in here are already dead let alone things like bazel and ci cross not existing in gs release dev there is a new gcs bucket for artifacts that is missing some of these paths search in this repository replace ci cross or bazel with ci use example paths such as release dev ci bin linux | 0 |
36,777 | 5,081,996,212 | IssuesEvent | 2016-12-29 13:30:56 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | QueryAdvancedTest.testSecondMemberAfterAddingIndexes | Team: Core Type: Test-Failure | ```
java.lang.AssertionError: expected:<23> but was:<22>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at com.hazelcast.map.impl.query.QueryBasicTest.doFunctionalQueryTest(QueryBasicTest.java:538)
at com.hazelcast.map.impl.query.QueryAdvancedTest.testSecondMemberAfterAddingIndexes(QueryAdvancedTest.java:401)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-IbmJDK1.8/com.hazelcast$hazelcast/377/testReport/junit/com.hazelcast.map.impl.query/QueryAdvancedTest/testSecondMemberAfterAddingIndexes/
| 1.0 | QueryAdvancedTest.testSecondMemberAfterAddingIndexes - ```
java.lang.AssertionError: expected:<23> but was:<22>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at com.hazelcast.map.impl.query.QueryBasicTest.doFunctionalQueryTest(QueryBasicTest.java:538)
at com.hazelcast.map.impl.query.QueryAdvancedTest.testSecondMemberAfterAddingIndexes(QueryAdvancedTest.java:401)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-IbmJDK1.8/com.hazelcast$hazelcast/377/testReport/junit/com.hazelcast.map.impl.query/QueryAdvancedTest/testSecondMemberAfterAddingIndexes/
| test | queryadvancedtest testsecondmemberafteraddingindexes java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com hazelcast map impl query querybasictest dofunctionalquerytest querybasictest java at com hazelcast map impl query queryadvancedtest testsecondmemberafteraddingindexes queryadvancedtest java | 1 |
203,264 | 15,359,634,196 | IssuesEvent | 2021-03-01 16:03:32 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | closed | Port XMI Custom Bindings Session Bean Cache Test | in:EJB Container team:Blizzard test delivery | XMI Custom Bindings Session Bean Cache Test to Open Liberty : suite.r50.base.cache
Note: these exist in ejb2x_fat, but with java colon lookups, we have decided to port them to OL and convert them back to legacy lookups
should go into legacy_fat
2 test classes and 1 EJB.jar | 1.0 | Port XMI Custom Bindings Session Bean Cache Test - XMI Custom Bindings Session Bean Cache Test to Open Liberty : suite.r50.base.cache
Note: these exist in ejb2x_fat, but with java colon lookups, we have decided to port them to OL and convert them back to legacy lookups
should go into legacy_fat
2 test classes and 1 EJB.jar | test | port xmi custom bindings session bean cache test xmi custom bindings session bean cache test to open liberty suite base cache note these exist in fat but with java colon lookups we have decided to port them to ol and convert them back to legacy lookups should go into legacy fat test classes and ejb jar | 1 |
57,215 | 3,081,249,057 | IssuesEvent | 2015-08-22 14:40:06 | bitfighter/bitfighter | https://api.github.com/repos/bitfighter/bitfighter | closed | Variable size testitems | 020 bug duplicate imported Priority-Medium | _From [watusim...@bitfighter.org](https://code.google.com/u/105427273526970468779/) on June 01, 2014 05:37:26_
Do we want to add variable-size testItems? If so... do it!
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=437_ | 1.0 | Variable size testitems - _From [watusim...@bitfighter.org](https://code.google.com/u/105427273526970468779/) on June 01, 2014 05:37:26_
Do we want to add variable-size testItems? If so... do it!
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=437_ | non_test | variable size testitems from on june do we want to add variable size testitems if so do it original issue | 0 |
252,385 | 21,574,434,309 | IssuesEvent | 2022-05-02 12:17:54 | stores-cedcommerce/Karan-Patel-Retail---Internal--May-4- | https://api.github.com/repos/stores-cedcommerce/Karan-Patel-Retail---Internal--May-4- | closed | The the quantity box inside the quick view in search page, the designing of input field is not equal. | fixed Ready to test Search page | **Actual result:**
The the quantity box inside the quick view in search page, the designing of input field is not equal.
When we are clicking on the input field of the quantity then the quantity is increasing.
The video issue: https://www.awesomescreenshot.com/video/8659552?key=b8f9cb439d35a115bb975abd679e383b

**Expected result:**
The quantity input field have to be working properly on clicking, the quantity is increasing on clicking it should not work like this. | 1.0 | The the quantity box inside the quick view in search page, the designing of input field is not equal. - **Actual result:**
The the quantity box inside the quick view in search page, the designing of input field is not equal.
When we are clicking on the input field of the quantity then the quantity is increasing.
The video issue: https://www.awesomescreenshot.com/video/8659552?key=b8f9cb439d35a115bb975abd679e383b

**Expected result:**
The quantity input field have to be working properly on clicking, the quantity is increasing on clicking it should not work like this. | test | the the quantity box inside the quick view in search page the designing of input field is not equal actual result the the quantity box inside the quick view in search page the designing of input field is not equal when we are clicking on the input field of the quantity then the quantity is increasing the video issue expected result the quantity input field have to be working properly on clicking the quantity is increasing on clicking it should not work like this | 1 |
301,104 | 26,016,742,882 | IssuesEvent | 2022-12-21 09:08:58 | dusk-network/rusk | https://api.github.com/repos/dusk-network/rusk | closed | No logs are written during the configuration load | fix:bug module:rusk mark:testnet | **Describe the bug**
If any error happen during the loading of rusk configuration, they are not logged anywhere
**To Reproduce**
`target/release/rusk --generator IwillFAIL`
**Expected behaviour**
It should log this line
`WARN rusk::config::wallet: Failed parsing <generator>. Defaulting to Dusk's key`
**Additional context**
The logger doesn't exists yet because it is instantiated using the configuration itself
| 1.0 | No logs are written during the configuration load - **Describe the bug**
If any error happen during the loading of rusk configuration, they are not logged anywhere
**To Reproduce**
`target/release/rusk --generator IwillFAIL`
**Expected behaviour**
It should log this line
`WARN rusk::config::wallet: Failed parsing <generator>. Defaulting to Dusk's key`
**Additional context**
The logger doesn't exists yet because it is instantiated using the configuration itself
| test | no logs are written during the configuration load describe the bug if any error happen during the loading of rusk configuration they are not logged anywhere to reproduce target release rusk generator iwillfail expected behaviour it should log this line warn rusk config wallet failed parsing defaulting to dusk s key additional context the logger doesn t exists yet because it is instantiated using the configuration itself | 1 |
132,047 | 10,729,644,429 | IssuesEvent | 2019-10-28 15:56:59 | rucio/rucio | https://api.github.com/repos/rucio/rucio | closed | Python 3.* tests for clients | Testing enhancement | Motivation
----------
Enlarge the clients Python 3.* tests
Modification
------------
- Move CLIENTS 3.6 test to *main*
- Add CLIENTS 3.7 test to *allow failures*
- Add CLIENTS 3.8 test to *allow failures* (Will probably fail terribly) | 1.0 | Python 3.* tests for clients - Motivation
----------
Enlarge the clients Python 3.* tests
Modification
------------
- Move CLIENTS 3.6 test to *main*
- Add CLIENTS 3.7 test to *allow failures*
- Add CLIENTS 3.8 test to *allow failures* (Will probably fail terribly) | test | python tests for clients motivation enlarge the clients python tests modification move clients test to main add clients test to allow failures add clients test to allow failures will probably fail terribly | 1 |
281,915 | 24,435,428,917 | IssuesEvent | 2022-10-06 11:07:50 | rancher/cis-operator | https://api.github.com/repos/rancher/cis-operator | closed | STIG Profile requires mounting the `/etc/rancher` directory to the scan pod. | [zube]: To Test team/area3 team/infracloud feature/charts-cis-benchmark team/rke2 | ## Issue
For Rancher Federal's STIG implementation to work on top of CIS operator, the scan job requires having the `/etc/rancher` directory added to its list of mounted volumes so it has the access it needs to perform STIG-level scans.
## Suggested Fix
Add the `/etc/rancher` volume mount in the `job.go` file in addition the existing mounts.
| 1.0 | STIG Profile requires mounting the `/etc/rancher` directory to the scan pod. - ## Issue
For Rancher Federal's STIG implementation to work on top of CIS operator, the scan job requires having the `/etc/rancher` directory added to its list of mounted volumes so it has the access it needs to perform STIG-level scans.
## Suggested Fix
Add the `/etc/rancher` volume mount in the `job.go` file in addition the existing mounts.
| test | stig profile requires mounting the etc rancher directory to the scan pod issue for rancher federal s stig implementation to work on top of cis operator the scan job requires having the etc rancher directory added to its list of mounted volumes so it has the access it needs to perform stig level scans suggested fix add the etc rancher volume mount in the job go file in addition the existing mounts | 1 |
259,501 | 19,601,756,840 | IssuesEvent | 2022-01-06 02:41:48 | aitos-io/BoAT-X-Framework | https://api.github.com/repos/aitos-io/BoAT-X-Framework | closed | Open <YanFei Root>/cmake/toolchain-gcc.cmake Add the following two lines as below: | documentation Priority/P4 | **Describe the bug**
Open <YanFei Root>/cmake/toolchain-gcc.cmake Add the following two lines as below:
Kindly add "code " if applicable:
Open <YanFei Root>/cmake/toolchain-gcc.cmake Add the following two lines of code as below:
| 1.0 | Open <YanFei Root>/cmake/toolchain-gcc.cmake Add the following two lines as below: - **Describe the bug**
Open <YanFei Root>/cmake/toolchain-gcc.cmake Add the following two lines as below:
Kindly add "code " if applicable:
Open <YanFei Root>/cmake/toolchain-gcc.cmake Add the following two lines of code as below:
| non_test | open cmake toolchain gcc cmake add the following two lines as below describe the bug open cmake toolchain gcc cmake add the following two lines as below kindly add code if applicable open cmake toolchain gcc cmake add the following two lines of code as below | 0 |
11,775 | 2,664,696,943 | IssuesEvent | 2015-03-20 15:59:34 | holahmeds/remotedroid | https://api.github.com/repos/holahmeds/remotedroid | closed | Clicking D-Pad on Moto Droid freezes server | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Have a Motorola Droid
2. Click the D-Pad
3. watch as the server freezes
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `gall.bla...@gmail.com` on 6 Jan 2010 at 9:57 | 1.0 | Clicking D-Pad on Moto Droid freezes server - ```
What steps will reproduce the problem?
1. Have a Motorola Droid
2. Click the D-Pad
3. watch as the server freezes
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `gall.bla...@gmail.com` on 6 Jan 2010 at 9:57 | non_test | clicking d pad on moto droid freezes server what steps will reproduce the problem have a motorola droid click the d pad watch as the server freezes what is the expected output what do you see instead please use labels and text to provide additional information original issue reported on code google com by gall bla gmail com on jan at | 0 |
427,375 | 12,394,264,864 | IssuesEvent | 2020-05-20 16:39:29 | nativescript-vue/nativescript-vue | https://api.github.com/repos/nativescript-vue/nativescript-vue | closed | Vue Router and transition | priority:low | Hi guys,
I read a lot of problems with vue routers but I think that obviating the closure of the app with the back button of the smartphone is usable. The problem of the transition remains that I just can not make it work. There's news about that? For the problem of the back button I solved by adding this code in the main.js:
```
import * as application from 'tns-core-modules/application'
import {exit} from 'nativescript-exit';
application.android.on(application.AndroidApplication.activityBackPressedEvent, (args) => {
args.cancel = true; //this cancels the normal backbutton behaviour
if(router.history.current.path =='/signin' || router.history.current.path =='customers/home' || router.history.current.path =='operators/home')
exit()
});
``` | 1.0 | Vue Router and transition - Hi guys,
I read a lot of problems with vue routers but I think that obviating the closure of the app with the back button of the smartphone is usable. The problem of the transition remains that I just can not make it work. There's news about that? For the problem of the back button I solved by adding this code in the main.js:
```
import * as application from 'tns-core-modules/application'
import {exit} from 'nativescript-exit';
application.android.on(application.AndroidApplication.activityBackPressedEvent, (args) => {
args.cancel = true; //this cancels the normal backbutton behaviour
if(router.history.current.path =='/signin' || router.history.current.path =='customers/home' || router.history.current.path =='operators/home')
exit()
});
``` | non_test | vue router and transition hi guys i read a lot of problems with vue routers but i think that obviating the closure of the app with the back button of the smartphone is usable the problem of the transition remains that i just can not make it work there s news about that for the problem of the back button i solved by adding this code in the main js import as application from tns core modules application import exit from nativescript exit application android on application androidapplication activitybackpressedevent args args cancel true this cancels the normal backbutton behaviour if router history current path signin router history current path customers home router history current path operators home exit | 0 |
10,719 | 12,686,831,276 | IssuesEvent | 2020-06-20 13:18:06 | ValveSoftware/Proton | https://api.github.com/repos/ValveSoftware/Proton | closed | Tekken 7 (389730) not detecting controller on Proton newer than 3.16 | Game compatibility Regression | #### Your system information
* Steam client version (build number or date): Jul 17 2019
* Distribution (e.g. Ubuntu): GeckoLinux Rolling (Opensuse Tumbleweed) - but this was also an issue on Ubuntu (16.04 and 18.04)
* Opted into Steam client beta?: [Yes/No] No
* Have you checked for system updates?: [Yes/No] Yes
#### Please describe your issue in as much detail as possible:
Using Proton older than 4.x series, ie. 3.16, Tekken 7 correctly detects my controller as "Mad catz Inc. Mad Catz Fightstick Alpha PS4" and everything works. On Proton 4.2 or 4.11, it can't find it and simply says "No controller detected".
I have tried to check the boxes for Generic and Playstation controllers in Steam Settings, with no difference. Steam itself doesn't seem to detect it at all, as "Guide button chord configuration" also says "No controller attached".
#### Steps for reproducing this issue:
1. Use Proton newer than 3.16 | True | Tekken 7 (389730) not detecting controller on Proton newer than 3.16 - #### Your system information
* Steam client version (build number or date): Jul 17 2019
* Distribution (e.g. Ubuntu): GeckoLinux Rolling (Opensuse Tumbleweed) - but this was also an issue on Ubuntu (16.04 and 18.04)
* Opted into Steam client beta?: [Yes/No] No
* Have you checked for system updates?: [Yes/No] Yes
#### Please describe your issue in as much detail as possible:
Using Proton older than 4.x series, ie. 3.16, Tekken 7 correctly detects my controller as "Mad catz Inc. Mad Catz Fightstick Alpha PS4" and everything works. On Proton 4.2 or 4.11, it can't find it and simply says "No controller detected".
I have tried to check the boxes for Generic and Playstation controllers in Steam Settings, with no difference. Steam itself doesn't seem to detect it at all, as "Guide button chord configuration" also says "No controller attached".
#### Steps for reproducing this issue:
1. Use Proton newer than 3.16 | non_test | tekken not detecting controller on proton newer than your system information steam client version build number or date jul distribution e g ubuntu geckolinux rolling opensuse tumbleweed but this was also an issue on ubuntu and opted into steam client beta no have you checked for system updates yes please describe your issue in as much detail as possible using proton older than x series ie tekken correctly detects my controller as mad catz inc mad catz fightstick alpha and everything works on proton or it can t find it and simply says no controller detected i have tried to check the boxes for generic and playstation controllers in steam settings with no difference steam itself doesn t seem to detect it at all as guide button chord configuration also says no controller attached steps for reproducing this issue use proton newer than | 0 |
194,240 | 14,672,747,387 | IssuesEvent | 2020-12-30 11:22:23 | bfieldtools/bfieldtools | https://api.github.com/repos/bfieldtools/bfieldtools | closed | Github Actions CI migration | testing | **TL;DR: Travis CI is no longer free for open source projects. Instead you get a free trial good for 1k build minutes, and can email them to beg for more when those run out.**
Starting today Travis will apparently change their pricing, making their 'free' plan into a trial-only affair. For open-source projects, one has to apply for "OSS-only credits" "that will be reviewed and allocated on a case by case basis". In other words, every time the OSS-only credits run out, one has to re-apply, there is no monthly top-up or similar.
Read more here: https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing
The most relevant part for this project:
> We love our OSS teams who choose to build using TravisCI and we fully want to support that community. However, recently we have encountered significant abuse of the intention of this offering. Abusers have been tying up our build queues and causing performance reductions for everyone. In order to bring the rules back to fair playing grounds, we are implementing some changes for our public build repositories.
> - For those of you who have been building on public repositories (on travis-ci.com, with no paid subscription), we will upgrade you to our trial (free) plan with a 10K credit allotment (which allows around 1000 minutes in a Linux environment).
>
> - You will not need to change your build definitions when you are pointed to the new plan
> - When your credit allotment runs out - we’d love for you to consider which of our plans will meet your needs.
> - We will be offering an allotment of OSS minutes that will be reviewed and allocated on a case by case basis. Should you want to apply for these credits please open a request with Travis CI support stating that you’d like to be considered for the OSS allotment. Please include:
> - Your account name and VCS provider (like travis-ci.com/github/[your account name] )
> - How many credits (build minutes) you’d like to request (should your run out of credits again you can repeat the process to request more)
> - Usage will be tracked under your account information so that you can better understand how many credits/minutes are being used
>
For the time being, the credit system does not seem to work yet. Both my personal account and the bfieldtools organization has credits available, but builds are not running, I am shown the message below
> "Builds have been temporarily disabled for public repositories due to a negative credit balance. Please go to the Plan page to replenish your credit balance or alter your Consume paid credits for OSS setting."
Now the question is, should we start applying for these OSS-only credits and see how things move forward, or should we move to another CI provider that is friendlier towards open-source, e.g. GitHub Actions?
ping @anttimc @jiivana | 1.0 | Github Actions CI migration - **TL;DR: Travis CI is no longer free for open source projects. Instead you get a free trial good for 1k build minutes, and can email them to beg for more when those run out.**
Starting today Travis will apparently change their pricing, making their 'free' plan into a trial-only affair. For open-source projects, one has to apply for "OSS-only credits" "that will be reviewed and allocated on a case by case basis". In other words, every time the OSS-only credits run out, one has to re-apply, there is no monthly top-up or similar.
Read more here: https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing
The most relevant part for this project:
> We love our OSS teams who choose to build using TravisCI and we fully want to support that community. However, recently we have encountered significant abuse of the intention of this offering. Abusers have been tying up our build queues and causing performance reductions for everyone. In order to bring the rules back to fair playing grounds, we are implementing some changes for our public build repositories.
> - For those of you who have been building on public repositories (on travis-ci.com, with no paid subscription), we will upgrade you to our trial (free) plan with a 10K credit allotment (which allows around 1000 minutes in a Linux environment).
>
> - You will not need to change your build definitions when you are pointed to the new plan
> - When your credit allotment runs out - we’d love for you to consider which of our plans will meet your needs.
> - We will be offering an allotment of OSS minutes that will be reviewed and allocated on a case by case basis. Should you want to apply for these credits please open a request with Travis CI support stating that you’d like to be considered for the OSS allotment. Please include:
> - Your account name and VCS provider (like travis-ci.com/github/[your account name] )
> - How many credits (build minutes) you’d like to request (should your run out of credits again you can repeat the process to request more)
> - Usage will be tracked under your account information so that you can better understand how many credits/minutes are being used
>
For the time being, the credit system does not seem to work yet. Both my personal account and the bfieldtools organization has credits available, but builds are not running, I am shown the message below
> "Builds have been temporarily disabled for public repositories due to a negative credit balance. Please go to the Plan page to replenish your credit balance or alter your Consume paid credits for OSS setting."
Now the question is, should we start applying for these OSS-only credits and see how things move forward, or should we move to another CI provider that is friendlier towards open-source, e.g. GitHub Actions?
ping @anttimc @jiivana | test | github actions ci migration tl dr travis ci is no longer free for open source projects instead you get a free trial good for build minutes and can email them to beg for more when those run out starting today travis will apparently change their pricing making their free plan into a trial only affair for open source projects one has to apply for oss only credits that will be reviewed and allocated on a case by case basis in other words every time the oss only credits run out one has to re apply there is no monthly top up or similar read more here the most relevant part for this project we love our oss teams who choose to build using travisci and we fully want to support that community however recently we have encountered significant abuse of the intention of this offering abusers have been tying up our build queues and causing performance reductions for everyone in order to bring the rules back to fair playing grounds we are implementing some changes for our public build repositories for those of you who have been building on public repositories on travis ci com with no paid subscription we will upgrade you to our trial free plan with a credit allotment which allows around minutes in a linux environment you will not need to change your build definitions when you are pointed to the new plan when your credit allotment runs out we’d love for you to consider which of our plans will meet your needs we will be offering an allotment of oss minutes that will be reviewed and allocated on a case by case basis should you want to apply for these credits please open a request with travis ci support stating that you’d like to be considered for the oss allotment please include your account name and vcs provider like travis ci com github how many credits build minutes you’d like to request should your run out of credits again you can repeat the process to request more usage will be tracked under your account information so that you can better understand how many credits minutes are being used for the time being the credit system does not seem to work yet both my personal account and the bfieldtools organization has credits available but builds are not running i am shown the message below builds have been temporarily disabled for public repositories due to a negative credit balance please go to the plan page to replenish your credit balance or alter your consume paid credits for oss setting now the question is should we start applying for these oss only credits and see how things move forward or should we move to another ci provider that is friendlier towards open source e g github actions ping anttimc jiivana | 1 |
229,533 | 18,362,351,986 | IssuesEvent | 2021-10-09 12:45:59 | nanocurrency/nano-node | https://api.github.com/repos/nanocurrency/nano-node | opened | Unit test election.quorum_minimum_update_weight_before_quorum_checks fails sometimes | bug unit test | This unit test fails in CI and I can also make it fail on my laptop.
It usually fails within the first few hundred tries.
It fails more on a quiet PC, extra load makes it fail less.
The following assert fails:
ASSERT_TIMELY (10s, !node1.rep_crawler.response (channel, vote2));
My suspicion is that node2 is not always ready to respond. | 1.0 | Unit test election.quorum_minimum_update_weight_before_quorum_checks fails sometimes - This unit test fails in CI and I can also make it fail on my laptop.
It usually fails within the first few hundred tries.
It fails more on a quiet PC, extra load makes it fail less.
The following assert fails:
ASSERT_TIMELY (10s, !node1.rep_crawler.response (channel, vote2));
My suspicion is that node2 is not always ready to respond. | test | unit test election quorum minimum update weight before quorum checks fails sometimes this unit test fails in ci and i can also make it fail on my laptop it usually fails within the first few hundred tries it fails more on a quiet pc extra load makes it fail less the following assert fails assert timely rep crawler response channel my suspicion is that is not always ready to respond | 1 |
14,830 | 3,423,738,836 | IssuesEvent | 2015-12-09 08:50:58 | centreon/centreon | https://api.github.com/repos/centreon/centreon | opened | Pagination issue in commands menu | BetaTest Kind/Bug | In the configuration check commands menu, If I search for a non available command, the result is correct with no pagination.
But if I change menu from "check" to "misc" and I go back to "check" the result is empty because command doesn't exist but pagination appear
| 1.0 | Pagination issue in commands menu - In the configuration check commands menu, If I search for a non available command, the result is correct with no pagination.
But if I change menu from "check" to "misc" and I go back to "check" the result is empty because command doesn't exist but pagination appear
| test | pagination issue in commands menu in the configuration check commands menu if i search for a non available command the result is correct with no pagination but if i change menu from check to misc and i go back to check the result is empty because command doesn t exist but pagination appear | 1 |
349,891 | 10,475,288,380 | IssuesEvent | 2019-09-23 16:01:36 | duo-labs/cloudmapper | https://api.github.com/repos/duo-labs/cloudmapper | closed | Fix unexpected admin for sts:AssumeRoleWithWebIdentity | HighPriority audit | The action `sts:AssumeRoleWithWebIdentity` should not be identified as an "unexpected action" at https://github.com/duo-labs/cloudmapper/blob/fda0b674077951be55d6b64b448f7b1985c855f2/shared/iam_audit.py#L336 | 1.0 | Fix unexpected admin for sts:AssumeRoleWithWebIdentity - The action `sts:AssumeRoleWithWebIdentity` should not be identified as an "unexpected action" at https://github.com/duo-labs/cloudmapper/blob/fda0b674077951be55d6b64b448f7b1985c855f2/shared/iam_audit.py#L336 | non_test | fix unexpected admin for sts assumerolewithwebidentity the action sts assumerolewithwebidentity should not be identified as an unexpected action at | 0 |
27,089 | 6,813,538,561 | IssuesEvent | 2017-11-06 09:40:08 | BTDF/DeploymentFramework | https://api.github.com/repos/BTDF/DeploymentFramework | closed | Add <OutputPath> for FilesToXmlPreProcess | bug CodePlexMigrationInitiated Impact: Low | As discussed here (http://biztalkdeployment.codeplex.com/discussions/237559), it would be great to have an <OutputPath> property next to <LocationPath> in <FilesToXmlPreProcess>. This way we can replicate the PortBindingsMaster.xml \ PortBinding.xml separation approach for other files.
For example, I would like an esbMaster.config to create an esb.config file with the tokens replaced.
The justification for this is that I want to mitigate the risk of users checking in to source control a file that has already had the tokens replaced.
#### This work item was migrated from CodePlex
CodePlex work item ID: '9357'
Vote count: '2'
| 1.0 | Add <OutputPath> for FilesToXmlPreProcess - As discussed here (http://biztalkdeployment.codeplex.com/discussions/237559), it would be great to have an <OutputPath> property next to <LocationPath> in <FilesToXmlPreProcess>. This way we can replicate the PortBindingsMaster.xml \ PortBinding.xml separation approach for other files.
For example, I would like an esbMaster.config to create an esb.config file with the tokens replaced.
The justification for this is that I want to mitigate the risk of users checking in to source control a file that has already had the tokens replaced.
#### This work item was migrated from CodePlex
CodePlex work item ID: '9357'
Vote count: '2'
| non_test | add for filestoxmlpreprocess as discussed here it would be great to have an property next to in this way we can replicate the portbindingsmaster xml portbinding xml separation approach for other files for example i would like an esbmaster config to create an esb config file with the tokens replaced the justification for this is that i want to mitigate the risk of users checking in to source control a file that has already had the tokens replaced this work item was migrated from codeplex codeplex work item id vote count | 0 |
73,083 | 13,966,178,781 | IssuesEvent | 2020-10-26 01:34:16 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | Test failure: JIT\\Methodical\\switch\\switch5\\switch5.cmd | area-CodeGen-coreclr | failed in job: [runtime 20201025.11 ](https://dev.azure.com/dnceng/public/_build/results?buildId=865450&view=ms.vss-test-web.build-test-results-tab&runId=27714174&resultId=101078&paneView=debug)
CoreCLR Windows_NT x64 Checked no_tiered_compilation @ Windows.10.Amd64.Open
Error message
~~~
Assert failure(PID 5236 [0x00001474], Thread: 3752 [0x0ea8]): Attempting to take a lock at shutdown that is not CRST_TAKEN_DURING_SHUTDOWN
CORECLR! CrstBase::PreEnter + 0x5C (0x00007ffaa24df3dc)
CORECLR! CrstBase::Enter + 0x202 (0x00007ffaa24de622)
CORECLR! ClrEnterCriticalSection + 0xF4 (0x00007ffaa27a67f0)
CORECLR! StressLog::Enter + 0x46 (0x00007ffaa2b28d52)
CORECLR! StressLog::CreateThreadStressLog + 0x1C6 (0x00007ffaa2b2892e)
CORECLR! StressLog::LogMsg + 0x76 (0x00007ffaa2b2909a)
CORECLR! EEShutDownHelper + 0x78C (0x00007ffaa2c6f6f4)
CORECLR! EEShutDown + 0x24B (0x00007ffaa2c6eee7)
CORECLR! EEDllMain'::5'::__Body::Run + 0x14D (0x00007ffaa2c71ea9)
CORECLR! EEDllMain + 0x1F (0x00007ffaa2c6ec5f)
File: F:\workspace\_work\1\s\src\coreclr\src\vm\crst.cpp Line: 402
Image: C:\h\w\A24308FE\p\CoreRun.exe
Return code: 1
Raw output file: C:\h\w\A24308FE\w\BC0A0A3B\e\JIT\Methodical\Reports\JIT.Methodical\switch\switch5\switch5.output.txt
Raw output:
BEGIN EXECUTION
"C:\h\w\A24308FE\p\corerun.exe" switch5.dll
Test passed
Expected: 100
Actual: -1073740286
END EXECUTION - FAILED
FAILED
Test Harness Exitcode is : 1
To run the test:
> set CORE_ROOT=C:\h\w\A24308FE\p
> C:\h\w\A24308FE\w\BC0A0A3B\e\JIT\Methodical\switch\switch5\switch5.cmd
Expected: True
Actual: False
Stack trace
at JIT_Methodical._switch_switch5_switch5_._switch_switch5_switch5_cmd() in F:\workspace\_work\1\s\artifacts\tests\coreclr\Windows_NT.x64.Checked\TestWrappers\JIT.Methodical\JIT.Methodical.XUnitWrapper.cs:line 65906
~~~ | 1.0 | Test failure: JIT\\Methodical\\switch\\switch5\\switch5.cmd - failed in job: [runtime 20201025.11 ](https://dev.azure.com/dnceng/public/_build/results?buildId=865450&view=ms.vss-test-web.build-test-results-tab&runId=27714174&resultId=101078&paneView=debug)
CoreCLR Windows_NT x64 Checked no_tiered_compilation @ Windows.10.Amd64.Open
Error message
~~~
Assert failure(PID 5236 [0x00001474], Thread: 3752 [0x0ea8]): Attempting to take a lock at shutdown that is not CRST_TAKEN_DURING_SHUTDOWN
CORECLR! CrstBase::PreEnter + 0x5C (0x00007ffaa24df3dc)
CORECLR! CrstBase::Enter + 0x202 (0x00007ffaa24de622)
CORECLR! ClrEnterCriticalSection + 0xF4 (0x00007ffaa27a67f0)
CORECLR! StressLog::Enter + 0x46 (0x00007ffaa2b28d52)
CORECLR! StressLog::CreateThreadStressLog + 0x1C6 (0x00007ffaa2b2892e)
CORECLR! StressLog::LogMsg + 0x76 (0x00007ffaa2b2909a)
CORECLR! EEShutDownHelper + 0x78C (0x00007ffaa2c6f6f4)
CORECLR! EEShutDown + 0x24B (0x00007ffaa2c6eee7)
CORECLR! EEDllMain'::5'::__Body::Run + 0x14D (0x00007ffaa2c71ea9)
CORECLR! EEDllMain + 0x1F (0x00007ffaa2c6ec5f)
File: F:\workspace\_work\1\s\src\coreclr\src\vm\crst.cpp Line: 402
Image: C:\h\w\A24308FE\p\CoreRun.exe
Return code: 1
Raw output file: C:\h\w\A24308FE\w\BC0A0A3B\e\JIT\Methodical\Reports\JIT.Methodical\switch\switch5\switch5.output.txt
Raw output:
BEGIN EXECUTION
"C:\h\w\A24308FE\p\corerun.exe" switch5.dll
Test passed
Expected: 100
Actual: -1073740286
END EXECUTION - FAILED
FAILED
Test Harness Exitcode is : 1
To run the test:
> set CORE_ROOT=C:\h\w\A24308FE\p
> C:\h\w\A24308FE\w\BC0A0A3B\e\JIT\Methodical\switch\switch5\switch5.cmd
Expected: True
Actual: False
Stack trace
at JIT_Methodical._switch_switch5_switch5_._switch_switch5_switch5_cmd() in F:\workspace\_work\1\s\artifacts\tests\coreclr\Windows_NT.x64.Checked\TestWrappers\JIT.Methodical\JIT.Methodical.XUnitWrapper.cs:line 65906
~~~ | non_test | test failure jit methodical switch cmd failed in job coreclr windows nt checked no tiered compilation windows open error message assert failure pid thread attempting to take a lock at shutdown that is not crst taken during shutdown coreclr crstbase preenter coreclr crstbase enter coreclr clrentercriticalsection coreclr stresslog enter coreclr stresslog createthreadstresslog coreclr stresslog logmsg coreclr eeshutdownhelper coreclr eeshutdown coreclr eedllmain body run coreclr eedllmain file f workspace work s src coreclr src vm crst cpp line image c h w p corerun exe return code raw output file c h w w e jit methodical reports jit methodical switch output txt raw output begin execution c h w p corerun exe dll test passed expected actual end execution failed failed test harness exitcode is to run the test set core root c h w p c h w w e jit methodical switch cmd expected true actual false stack trace at jit methodical switch switch cmd in f workspace work s artifacts tests coreclr windows nt checked testwrappers jit methodical jit methodical xunitwrapper cs line | 0 |
73,662 | 7,348,682,182 | IssuesEvent | 2018-03-08 07:46:04 | EugenMayer/docker-sync | https://api.github.com/repos/EugenMayer/docker-sync | closed | Write tests, rubymasters please help! | Tests feature help wanted | Its not like i do not write tests, but this here is pretty special. Containers, remote file sync and so on.
Any rubyist available to explain, how this would be done in a considerable amount of time?
Anyone who feels happy to structure the test-idea for this project, please go on!
| 1.0 | Write tests, rubymasters please help! - Its not like i do not write tests, but this here is pretty special. Containers, remote file sync and so on.
Any rubyist available to explain, how this would be done in a considerable amount of time?
Anyone who feels happy to structure the test-idea for this project, please go on!
| test | write tests rubymasters please help its not like i do not write tests but this here is pretty special containers remote file sync and so on any rubyist available to explain how this would be done in a considerable amount of time anyone who feels happy to structure the test idea for this project please go on | 1 |
314,684 | 27,016,899,586 | IssuesEvent | 2023-02-10 20:19:08 | LLNL/axom | https://api.github.com/repos/LLNL/axom | closed | Implement replacement rules for IntersectionShaper. | Testing Klee | The IntersectionShaper class needs to implement the "replaces" and "does_not_replace" replacement rules that help govern how the material volume fractions are written into the fields produced by the shaper.
* Implement the rules
* Create datasets that show the rules work for revolved contours used by IntersectionShaper.
* Create CI tests | 1.0 | Implement replacement rules for IntersectionShaper. - The IntersectionShaper class needs to implement the "replaces" and "does_not_replace" replacement rules that help govern how the material volume fractions are written into the fields produced by the shaper.
* Implement the rules
* Create datasets that show the rules work for revolved contours used by IntersectionShaper.
* Create CI tests | test | implement replacement rules for intersectionshaper the intersectionshaper class needs to implement the replaces and does not replace replacement rules that help govern how the material volume fractions are written into the fields produced by the shaper implement the rules create datasets that show the rules work for revolved contours used by intersectionshaper create ci tests | 1 |
25,807 | 3,965,570,101 | IssuesEvent | 2016-05-03 08:58:01 | saurabhd/hk_realestate | https://api.github.com/repos/saurabhd/hk_realestate | closed | Mobile Menu | critical Design | **Site Area** : Mobile , **Priority** : High
**Description :**
Find a solution for a mobile menu that contains main and footer menu and is cache able.
| 1.0 | Mobile Menu - **Site Area** : Mobile , **Priority** : High
**Description :**
Find a solution for a mobile menu that contains main and footer menu and is cache able.
| non_test | mobile menu site area mobile priority high description find a solution for a mobile menu that contains main and footer menu and is cache able | 0 |
321,192 | 27,512,142,409 | IssuesEvent | 2023-03-06 09:33:13 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] SmokeTestMultiNodeClientYamlTestSuiteIT test {yaml=indices.get_index_template/10_basic/Add data lifecycle} failing | >test-failure Team:Data Management :Data Management/DLM | **Build scan:**
https://gradle-enterprise.elastic.co/s/uwa5vumrkallu/tests/:qa:smoke-test-multinode:yamlRestTest/org.elasticsearch.smoketest.SmokeTestMultiNodeClientYamlTestSuiteIT/test%20%7Byaml=indices.get_index_template%2F10_basic%2FAdd%20data%20lifecycle%7D
**Reproduction line:**
```
./gradlew ':qa:smoke-test-multinode:yamlRestTest' --tests "org.elasticsearch.smoketest.SmokeTestMultiNodeClientYamlTestSuiteIT" -Dtests.method="test {yaml=indices.get_index_template/10_basic/Add data lifecycle}" -Dtests.seed=714738FE10E5B6A -Dtests.locale=vi -Dtests.timezone=Australia/Darwin -Druntime.java=17 -Dtests.fips.enabled=true
```
**Applicable branches:**
main
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.smoketest.SmokeTestMultiNodeClientYamlTestSuiteIT&tests.test=test%20%7Byaml%3Dindices.get_index_template/10_basic/Add%20data%20lifecycle%7D
**Failure excerpt:**
```
java.lang.AssertionError: Failure at [indices.get_index_template/10_basic:113]: got unexpected warning header [
299 Elasticsearch-8.8.0-SNAPSHOT-2b94dfc01d12b84f6acd34ec084ea37370b8e1fb "index template [test-lifecycle] has index patterns [data-stream-with-lifecycle-*] matching patterns from existing older templates [global] with patterns (global => [*]); this template [test-lifecycle] will take precedence during new index creation"
]
at __randomizedtesting.SeedInfo.seed([714738FE10E5B6A:8F404C554FF23692]:0)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:547)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:499)
at jdk.internal.reflect.GeneratedMethodAccessor11.invoke(null:-1)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:39)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.AssertionError: got unexpected warning header [
299 Elasticsearch-8.8.0-SNAPSHOT-2b94dfc01d12b84f6acd34ec084ea37370b8e1fb "index template [test-lifecycle] has index patterns [data-stream-with-lifecycle-*] matching patterns from existing older templates [global] with patterns (global => [*]); this template [test-lifecycle] will take precedence during new index creation"
]
at org.junit.Assert.fail(Assert.java:88)
at org.elasticsearch.test.rest.yaml.section.DoSection.checkWarningHeaders(DoSection.java:511)
at org.elasticsearch.test.rest.yaml.section.DoSection.execute(DoSection.java:369)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:527)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:499)
at jdk.internal.reflect.GeneratedMethodAccessor11.invoke(null:-1)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:39)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
``` | 1.0 | [CI] SmokeTestMultiNodeClientYamlTestSuiteIT test {yaml=indices.get_index_template/10_basic/Add data lifecycle} failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/uwa5vumrkallu/tests/:qa:smoke-test-multinode:yamlRestTest/org.elasticsearch.smoketest.SmokeTestMultiNodeClientYamlTestSuiteIT/test%20%7Byaml=indices.get_index_template%2F10_basic%2FAdd%20data%20lifecycle%7D
**Reproduction line:**
```
./gradlew ':qa:smoke-test-multinode:yamlRestTest' --tests "org.elasticsearch.smoketest.SmokeTestMultiNodeClientYamlTestSuiteIT" -Dtests.method="test {yaml=indices.get_index_template/10_basic/Add data lifecycle}" -Dtests.seed=714738FE10E5B6A -Dtests.locale=vi -Dtests.timezone=Australia/Darwin -Druntime.java=17 -Dtests.fips.enabled=true
```
**Applicable branches:**
main
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.smoketest.SmokeTestMultiNodeClientYamlTestSuiteIT&tests.test=test%20%7Byaml%3Dindices.get_index_template/10_basic/Add%20data%20lifecycle%7D
**Failure excerpt:**
```
java.lang.AssertionError: Failure at [indices.get_index_template/10_basic:113]: got unexpected warning header [
299 Elasticsearch-8.8.0-SNAPSHOT-2b94dfc01d12b84f6acd34ec084ea37370b8e1fb "index template [test-lifecycle] has index patterns [data-stream-with-lifecycle-*] matching patterns from existing older templates [global] with patterns (global => [*]); this template [test-lifecycle] will take precedence during new index creation"
]
at __randomizedtesting.SeedInfo.seed([714738FE10E5B6A:8F404C554FF23692]:0)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:547)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:499)
at jdk.internal.reflect.GeneratedMethodAccessor11.invoke(null:-1)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:39)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.AssertionError: got unexpected warning header [
299 Elasticsearch-8.8.0-SNAPSHOT-2b94dfc01d12b84f6acd34ec084ea37370b8e1fb "index template [test-lifecycle] has index patterns [data-stream-with-lifecycle-*] matching patterns from existing older templates [global] with patterns (global => [*]); this template [test-lifecycle] will take precedence during new index creation"
]
at org.junit.Assert.fail(Assert.java:88)
at org.elasticsearch.test.rest.yaml.section.DoSection.checkWarningHeaders(DoSection.java:511)
at org.elasticsearch.test.rest.yaml.section.DoSection.execute(DoSection.java:369)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:527)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:499)
at jdk.internal.reflect.GeneratedMethodAccessor11.invoke(null:-1)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:39)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
``` | test | smoketestmultinodeclientyamltestsuiteit test yaml indices get index template basic add data lifecycle failing build scan reproduction line gradlew qa smoke test multinode yamlresttest tests org elasticsearch smoketest smoketestmultinodeclientyamltestsuiteit dtests method test yaml indices get index template basic add data lifecycle dtests seed dtests locale vi dtests timezone australia darwin druntime java dtests fips enabled true applicable branches main reproduces locally didn t try failure history failure excerpt java lang assertionerror failure at got unexpected warning header elasticsearch snapshot index template has index patterns matching patterns from existing older templates with patterns global this template will take precedence during new index creation at randomizedtesting seedinfo seed at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test esclientyamlsuitetestcase java at jdk internal reflect invoke null at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org elasticsearch test cluster local localelasticsearchcluster evaluate localelasticsearchcluster java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java caused by java lang assertionerror got unexpected warning header elasticsearch snapshot index template has index patterns matching patterns from existing older templates with patterns global this template will take precedence during new index creation at org junit assert fail assert java at org elasticsearch test rest yaml section dosection checkwarningheaders dosection java at org elasticsearch test rest yaml section dosection execute dosection java at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test esclientyamlsuitetestcase java at jdk internal reflect invoke null at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org elasticsearch test cluster local localelasticsearchcluster evaluate localelasticsearchcluster java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java | 1 |
3,727 | 3,226,996,696 | IssuesEvent | 2015-10-10 19:53:36 | sstephenson/rbenv | https://api.github.com/repos/sstephenson/rbenv | closed | cannot install 2.1.5 via brew after updates | ruby-build uncofirmed | running OS X 10.1.1, currently installed rbenv 2.1.4
```
> brew update && brew upgrade ruby-build
> rbenv install 2.1.5
Downloading ruby-2.1.5.tar.gz...
-> http://dqw8nmjcqpjn7.cloudfront.net/4305cc6ceb094df55210d83548dcbeb5117d74eea25196a9b14fa268d354b100
Installing ruby-2.1.5...
BUILD FAILED (OS X 10.10.1 using ruby-build 20141113)
Inspect or clean up the working tree at /var/folders/qv/jbvzshbs76n4tqm3w4mfcx680000gn/T/ruby-build.20141127071426.92947
Results logged to /var/folders/qv/jbvzshbs76n4tqm3w4mfcx680000gn/T/ruby-build.20141127071426.92947.log
Last 10 log lines:
make[1]: *** [ext/openssl/all] Error 2
make[1]: *** Waiting for unfinished jobs....
installing default readline libraries
checking ../.././parse.y and ../.././ext/ripper/eventids2.c
linking shared-object readline.bundle
linking shared-object psych.bundle
installing default psych libraries
installing default ripper libraries
linking shared-object ripper.bundle
make: *** [build-ext] Error 2
====== from build log ...
compiling ossl_asn1.c
In file included from openssl_missing.c:22:
./openssl_missing.h:71:6: error: conflicting types for 'HMAC_CTX_copy'
void HMAC_CTX_copy(HMAC_CTX *out, HMAC_CTX *in);
^
/usr/include/openssl/hmac.h:102:5: note: previous declaration is here
int HMAC_CTX_copy(HMAC_CTX *dctx, HMAC_CTX *sctx);
^
In file included from openssl_missing.c:22:
./openssl_missing.h:95:5: error: conflicting types for 'EVP_CIPHER_CTX_copy'
int EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, EVP_CIPHER_CTX *in);
^
/usr/include/openssl/evp.h:502:5: note: previous declaration is here
int EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, const EVP_CIPHER_CTX *in);
^
openssl_missing.c:26:1: error: conflicting types for 'HMAC_CTX_copy'
HMAC_CTX_copy(HMAC_CTX *out, HMAC_CTX *in)
^
/usr/include/openssl/hmac.h:102:5: note: previous declaration is here
int HMAC_CTX_copy(HMAC_CTX *dctx, HMAC_CTX *sctx);
^
openssl_missing.c:121:1: error: conflicting types for 'EVP_CIPHER_CTX_copy'
EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, EVP_CIPHER_CTX *in)
^
/usr/include/openssl/evp.h:502:5: note: previous declaration is here
int EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, const EVP_CIPHER_CTX *in);
^
4 errors generated.
make[2]: *** [openssl_missing.o] Error 1
make[2]: *** Waiting for unfinished jobs....
``` | 1.0 | cannot install 2.1.5 via brew after updates - running OS X 10.1.1, currently installed rbenv 2.1.4
```
> brew update && brew upgrade ruby-build
> rbenv install 2.1.5
Downloading ruby-2.1.5.tar.gz...
-> http://dqw8nmjcqpjn7.cloudfront.net/4305cc6ceb094df55210d83548dcbeb5117d74eea25196a9b14fa268d354b100
Installing ruby-2.1.5...
BUILD FAILED (OS X 10.10.1 using ruby-build 20141113)
Inspect or clean up the working tree at /var/folders/qv/jbvzshbs76n4tqm3w4mfcx680000gn/T/ruby-build.20141127071426.92947
Results logged to /var/folders/qv/jbvzshbs76n4tqm3w4mfcx680000gn/T/ruby-build.20141127071426.92947.log
Last 10 log lines:
make[1]: *** [ext/openssl/all] Error 2
make[1]: *** Waiting for unfinished jobs....
installing default readline libraries
checking ../.././parse.y and ../.././ext/ripper/eventids2.c
linking shared-object readline.bundle
linking shared-object psych.bundle
installing default psych libraries
installing default ripper libraries
linking shared-object ripper.bundle
make: *** [build-ext] Error 2
====== from build log ...
compiling ossl_asn1.c
In file included from openssl_missing.c:22:
./openssl_missing.h:71:6: error: conflicting types for 'HMAC_CTX_copy'
void HMAC_CTX_copy(HMAC_CTX *out, HMAC_CTX *in);
^
/usr/include/openssl/hmac.h:102:5: note: previous declaration is here
int HMAC_CTX_copy(HMAC_CTX *dctx, HMAC_CTX *sctx);
^
In file included from openssl_missing.c:22:
./openssl_missing.h:95:5: error: conflicting types for 'EVP_CIPHER_CTX_copy'
int EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, EVP_CIPHER_CTX *in);
^
/usr/include/openssl/evp.h:502:5: note: previous declaration is here
int EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, const EVP_CIPHER_CTX *in);
^
openssl_missing.c:26:1: error: conflicting types for 'HMAC_CTX_copy'
HMAC_CTX_copy(HMAC_CTX *out, HMAC_CTX *in)
^
/usr/include/openssl/hmac.h:102:5: note: previous declaration is here
int HMAC_CTX_copy(HMAC_CTX *dctx, HMAC_CTX *sctx);
^
openssl_missing.c:121:1: error: conflicting types for 'EVP_CIPHER_CTX_copy'
EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, EVP_CIPHER_CTX *in)
^
/usr/include/openssl/evp.h:502:5: note: previous declaration is here
int EVP_CIPHER_CTX_copy(EVP_CIPHER_CTX *out, const EVP_CIPHER_CTX *in);
^
4 errors generated.
make[2]: *** [openssl_missing.o] Error 1
make[2]: *** Waiting for unfinished jobs....
``` | non_test | cannot install via brew after updates running os x currently installed rbenv brew update brew upgrade ruby build rbenv install downloading ruby tar gz installing ruby build failed os x using ruby build inspect or clean up the working tree at var folders qv t ruby build results logged to var folders qv t ruby build log last log lines make error make waiting for unfinished jobs installing default readline libraries checking parse y and ext ripper c linking shared object readline bundle linking shared object psych bundle installing default psych libraries installing default ripper libraries linking shared object ripper bundle make error from build log compiling ossl c in file included from openssl missing c openssl missing h error conflicting types for hmac ctx copy void hmac ctx copy hmac ctx out hmac ctx in usr include openssl hmac h note previous declaration is here int hmac ctx copy hmac ctx dctx hmac ctx sctx in file included from openssl missing c openssl missing h error conflicting types for evp cipher ctx copy int evp cipher ctx copy evp cipher ctx out evp cipher ctx in usr include openssl evp h note previous declaration is here int evp cipher ctx copy evp cipher ctx out const evp cipher ctx in openssl missing c error conflicting types for hmac ctx copy hmac ctx copy hmac ctx out hmac ctx in usr include openssl hmac h note previous declaration is here int hmac ctx copy hmac ctx dctx hmac ctx sctx openssl missing c error conflicting types for evp cipher ctx copy evp cipher ctx copy evp cipher ctx out evp cipher ctx in usr include openssl evp h note previous declaration is here int evp cipher ctx copy evp cipher ctx out const evp cipher ctx in errors generated make error make waiting for unfinished jobs | 0 |
192,262 | 6,848,106,423 | IssuesEvent | 2017-11-13 17:22:08 | tsgrp/HPI | https://api.github.com/repos/tsgrp/HPI | opened | Flashing Links in View All Documents and Search Results | High Priority issue | It appears that there is some kind of race condition/event wiring bug when generating the links in search results. It doesn't happen all the time, but sometimes the links are "unclickable" and the text just flashes until a user mouses out of the search results and goes back over the link.
I was able to capture the issue in the "Performance" tab of Chrome, and it appears that the below anonymous onMouseEnter function in tableview.js is getting fired multiple times per second when the issue occurs, so there must be an issue with this getting wired up wrong in some conditions:
```js
self.grid.onMouseEnter.subscribe(function(e, args) {
```
- [ ] do we need to be executing all of that logic every time a user mouses over a link in the first place?
- [ ] can we ensure that we aren't firing that listener every time a user mouses over the item?
- [ ] could this be done once for the whole table?

| 1.0 | Flashing Links in View All Documents and Search Results - It appears that there is some kind of race condition/event wiring bug when generating the links in search results. It doesn't happen all the time, but sometimes the links are "unclickable" and the text just flashes until a user mouses out of the search results and goes back over the link.
I was able to capture the issue in the "Performance" tab of Chrome, and it appears that the below anonymous onMouseEnter function in tableview.js is getting fired multiple times per second when the issue occurs, so there must be an issue with this getting wired up wrong in some conditions:
```js
self.grid.onMouseEnter.subscribe(function(e, args) {
```
- [ ] do we need to be executing all of that logic every time a user mouses over a link in the first place?
- [ ] can we ensure that we aren't firing that listener every time a user mouses over the item?
- [ ] could this be done once for the whole table?

| non_test | flashing links in view all documents and search results it appears that there is some kind of race condition event wiring bug when generating the links in search results it doesn t happen all the time but sometimes the links are unclickable and the text just flashes until a user mouses out of the search results and goes back over the link i was able to capture the issue in the performance tab of chrome and it appears that the below anonymous onmouseenter function in tableview js is getting fired multiple times per second when the issue occurs so there must be an issue with this getting wired up wrong in some conditions js self grid onmouseenter subscribe function e args do we need to be executing all of that logic every time a user mouses over a link in the first place can we ensure that we aren t firing that listener every time a user mouses over the item could this be done once for the whole table | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.