Flattened preview of a GitHub issues dataset: one `IssuesEvent` record per row, labeled `test` (binary_label = 1) or `non_test` (binary_label = 0). Column schema:

| Column | Dtype | Range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 4 – 112 |
| repo_url | string | length 33 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 1.02k |
| labels | string | length 4 – 1.54k |
| body | string | length 1 – 262k |
| index | string | 17 classes |
| text_combine | string | length 95 – 262k |
| label | string | 2 classes |
| text | string | length 96 – 252k |
| binary_label | int64 | 0 – 1 |

Each record below lists these fields in order; `text_combine` is the title and body joined, and `text` is its lowercased, punctuation-stripped form.
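The `label`/`binary_label` pairing in the records below suggests a direct derivation. A minimal pandas sketch, using hypothetical inline rows since the source file is unknown:

```python
import pandas as pd

# Two toy rows shaped like the schema above (hypothetical values).
rows = [
    {"type": "IssuesEvent", "action": "closed",
     "repo": "hannesschulze/optimizer", "label": "non_test"},
    {"type": "IssuesEvent", "action": "opened",
     "repo": "elastic/kibana", "label": "test"},
]
df = pd.DataFrame(rows)

# binary_label appears to be 1 for test-related issues, 0 otherwise.
df["binary_label"] = (df["label"] == "test").astype("int64")

print(df[["repo", "label", "binary_label"]])
```

The same one-liner works on the full frame once it is loaded, whatever its on-disk format.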
---
row: 298,639
id: 9,200,620,674
type: IssuesEvent
created_at: 2019-03-07 17:29:32
repo: hannesschulze/optimizer
repo_url: https://api.github.com/repos/hannesschulze/optimizer
action: closed
title: Window is to large after update
labels: Priority: High
body:
I think it's due to French translations. Here is a screenshot:

Maybe you can adjust this a little bit.
index: 1.0
text_combine:
Window is to large after update - I think it's due to French translations. Here is a screenshot:

Maybe you can adjust this a little bit.
label: non_test
text:
window is to large after update i think it s due to french translations here is a screenshot maybe you can adjust this a little bit
binary_label: 0
---
row: 190,478
id: 6,818,869,390
type: IssuesEvent
created_at: 2017-11-07 08:02:20
repo: ballerinalang/composer
repo_url: https://api.github.com/repos/ballerinalang/composer
action: closed
title: Error messages are not meaningful in transform
labels: 0.95 Priority/High Severity/Minor Type/Bug
body:
Error message that is coming as soon as a transform is added is not meaningful
"type conversion is already exists from other to other"

index: 1.0
text_combine:
Error messages are not meaningful in transform - Error message that is coming as soon as a transform is added is not meaningful
"type conversion is already exists from other to other"

label: non_test
text:
error messages are not meaningful in transform error message that is coming as soon as a transform is added is not meaningful type conversion is already exists from other to other
binary_label: 0
---
row: 168,859
id: 6,388,306,447
type: IssuesEvent
created_at: 2017-08-03 15:19:25
repo: javaee/glassfish
repo_url: https://api.github.com/repos/javaee/glassfish
action: closed
title: Add log message in server.log when skipping resource validation
labels: Component: deployment Component: logging Priority: Minor Type: Task
body:
When the jvm property `-Ddeployment.resource.validation=false` is set, we want to log a message in the server.log notifying that resource validation is being skipped.
index: 1.0
text_combine:
Add log message in server.log when skipping resource validation - When the jvm property `-Ddeployment.resource.validation=false` is set, we want to log a message in the server.log notifying that resource validation is being skipped.
label: non_test
text:
add log message in server log when skipping resource validation when the jvm property ddeployment resource validation false is set we want to log a message in the server log notifying that resource validation is being skipped
binary_label: 0
---
row: 71,100
id: 30,815,405,444
type: IssuesEvent
created_at: 2023-08-01 13:11:53
repo: MicrosoftDocs/powerbi-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/powerbi-docs
action: closed
title: Q&A Tooling note
labels: assigned-to-author in-progress doc-enhancement powerbi/svc backlog powerbi-service/subsvc Pri2 Quick
body:
Since the Note regarding Q&A Tooling only currently support Import connections applies to all Tooling, not just the Teach Q&A feature, should the note move from the Teach Q&A Limitations section to the Tooling Limitations section? I chose to leave this feedback rather than change it myself in case there was something I was missing.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c83048-7e4e-0df6-bd4c-16097df348cd
* Version Independent ID: c275fe95-edd1-a8c6-4b00-4857e1e65aab
* Content: [Limitations of Power BI Q&A - Power BI](https://docs.microsoft.com/en-us/power-bi/natural-language/q-and-a-limitations#teach-qa-limitations)
* Content Source: [powerbi-docs/natural-language/q-and-a-limitations.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/live/powerbi-docs/natural-language/q-and-a-limitations.md)
* Service: **powerbi**
* Sub-service: **powerbi-service**
* GitHub Login: @maggiesMSFT
* Microsoft Alias: **maggies**
index: 1.0
text_combine:
Q&A Tooling note - Since the Note regarding Q&A Tooling only currently support Import connections applies to all Tooling, not just the Teach Q&A feature, should the note move from the Teach Q&A Limitations section to the Tooling Limitations section? I chose to leave this feedback rather than change it myself in case there was something I was missing.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c83048-7e4e-0df6-bd4c-16097df348cd
* Version Independent ID: c275fe95-edd1-a8c6-4b00-4857e1e65aab
* Content: [Limitations of Power BI Q&A - Power BI](https://docs.microsoft.com/en-us/power-bi/natural-language/q-and-a-limitations#teach-qa-limitations)
* Content Source: [powerbi-docs/natural-language/q-and-a-limitations.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/live/powerbi-docs/natural-language/q-and-a-limitations.md)
* Service: **powerbi**
* Sub-service: **powerbi-service**
* GitHub Login: @maggiesMSFT
* Microsoft Alias: **maggies**
label: non_test
text:
q a tooling note since the note regarding q a tooling only currently support import connections applies to all tooling not just the teach q a feature should the note move from the teach q a limitations section to the tooling limitations section i chose to leave this feedback rather than change it myself in case there was something i was missing document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service powerbi sub service powerbi service github login maggiesmsft microsoft alias maggies
binary_label: 0
---
row: 405,284
id: 27,511,268,745
type: IssuesEvent
created_at: 2023-03-06 09:00:17
repo: iotaledger/wallet.rs
repo_url: https://api.github.com/repos/iotaledger/wallet.rs
action: closed
title: Fix clone command
labels: dx-documentation
body:
## Description
The git clone command in i.e. [Clone the Repository](https://wiki.iota.org/shimmer/wallet.rs/how_tos/run_how_tos/#clone-the-repository) does not work, it should be `git clone https://github.com/iotaledger/wallet.rs.git`.
## Are you planning to do it yourself in a pull request?
Yes.
index: 1.0
text_combine:
Fix clone command - ## Description
The git clone command in i.e. [Clone the Repository](https://wiki.iota.org/shimmer/wallet.rs/how_tos/run_how_tos/#clone-the-repository) does not work, it should be `git clone https://github.com/iotaledger/wallet.rs.git`.
## Are you planning to do it yourself in a pull request?
Yes.
label: non_test
text:
fix clone command description the git clone command in i e does not work it should be git clone are you planning to do it yourself in a pull request yes
binary_label: 0
---
row: 448,125
id: 31,768,918,750
type: IssuesEvent
created_at: 2023-09-12 10:28:43
repo: ArtalkJS/Artalk
repo_url: https://api.github.com/repos/ArtalkJS/Artalk
action: closed
title: How to use environment variables with Docker 2.6.0
labels: documentation
body:
I saw that the 2.6.0 update adds support for environment variables, but I searched the documentation and found no explanation of which variables are supported.
# Example
The `artalk.example.zh-CN.yml` file contains:
```
# server address
host: "0.0.0.0"
# server port
port: 23366
# encryption key
app_key: ""
# debug mode
debug: false
# language ["en", "zh-CN", "zh-TW", "jp"]
locale: "zh-CN"
# timezone
timezone: "Asia/Shanghai"
# default site name
site_default: "默认站点"
```
What are the corresponding environment variables?
`docker-compose.yml`
```
version: '3.4'
services:
  artalk:
    image: artalk/artalk-go
    restart: always
    container_name: artalk
    ports:
      - 23366:23366
    environment:
      HOST: "0.0.0.0"
      TIMEZONE: "Asia/Shanghai"
```
Is the `environment` section correct?
index: 1.0
text_combine:
How to use environment variables with Docker 2.6.0 - I saw that the 2.6.0 update adds support for environment variables, but I searched the documentation and found no explanation of which variables are supported.
# Example
The `artalk.example.zh-CN.yml` file contains:
```
# server address
host: "0.0.0.0"
# server port
port: 23366
# encryption key
app_key: ""
# debug mode
debug: false
# language ["en", "zh-CN", "zh-TW", "jp"]
locale: "zh-CN"
# timezone
timezone: "Asia/Shanghai"
# default site name
site_default: "默认站点"
```
What are the corresponding environment variables?
`docker-compose.yml`
```
version: '3.4'
services:
  artalk:
    image: artalk/artalk-go
    restart: always
    container_name: artalk
    ports:
      - 23366:23366
    environment:
      HOST: "0.0.0.0"
      TIMEZONE: "Asia/Shanghai"
```
Is the `environment` section correct?
label: non_test
text:
docker ,看到支持环境变量,我找了文档,没有发现支持环境变量说明。 示例 artalk example zh cn yml 文件有 服务器地址 host 服务器端口 port 加密密钥 app key 调试模式 debug false 语言 locale zh cn 时间区域 timezone asia shanghai 默认站点名 site default 默认站点 环境变量是什么? docker compose yml version services artalk image artalk artalk go restart always container name artalk ports environment host timezone asia shanghai environment 部分对吗?
binary_label: 0
---
row: 278,951
id: 24,187,463,894
type: IssuesEvent
created_at: 2022-09-23 14:27:19
repo: elastic/kibana
repo_url: https://api.github.com/repos/elastic/kibana
action: opened
title: Failing test: Jest Tests.x-pack/plugins/index_lifecycle_management/__jest__/client_integration/edit_policy/form_validation - <EditPolicy /> policy name validation doesn't allow policy name starting with underscore
labels: failed-test
body:
A test failed on a tracked branch
```
Error: expect(received).toEqual(expected) // deep equality
- Expected - 3
+ Received + 1
- Array [
- "A policy name cannot start with an underscore.",
- ]
+ Array []
at Object.expectMessages (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/x-pack/plugins/index_lifecycle_management/__jest__/client_integration/helpers/actions/errors_actions.ts:26:40)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/x-pack/plugins/index_lifecycle_management/__jest__/client_integration/edit_policy/form_validation/policy_name_validation.test.ts:88:20)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:212:5)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:149:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:63:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/21398#01836a91-7753-41ec-b4ef-89bdfd1944a5)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/index_lifecycle_management/__jest__/client_integration/edit_policy/form_validation","test.name":"<EditPolicy /> policy name validation doesn't allow policy name starting with underscore","test.failCount":1}} -->
index: 1.0
text_combine:
Failing test: Jest Tests.x-pack/plugins/index_lifecycle_management/__jest__/client_integration/edit_policy/form_validation - <EditPolicy /> policy name validation doesn't allow policy name starting with underscore - A test failed on a tracked branch
```
Error: expect(received).toEqual(expected) // deep equality
- Expected - 3
+ Received + 1
- Array [
- "A policy name cannot start with an underscore.",
- ]
+ Array []
at Object.expectMessages (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/x-pack/plugins/index_lifecycle_management/__jest__/client_integration/helpers/actions/errors_actions.ts:26:40)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/x-pack/plugins/index_lifecycle_management/__jest__/client_integration/edit_policy/form_validation/policy_name_validation.test.ts:88:20)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:212:5)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:149:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:63:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-0891049732a0e254/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/21398#01836a91-7753-41ec-b4ef-89bdfd1944a5)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/index_lifecycle_management/__jest__/client_integration/edit_policy/form_validation","test.name":"<EditPolicy /> policy name validation doesn't allow policy name starting with underscore","test.failCount":1}} -->
label: test
text:
failing test jest tests x pack plugins index lifecycle management jest client integration edit policy form validation policy name validation doesn t allow policy name starting with underscore a test failed on a tracked branch error expect received toequal expected deep equality expected received array a policy name cannot start with an underscore array at object expectmessages var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins index lifecycle management jest client integration helpers actions errors actions ts at object var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins index lifecycle management jest client integration edit policy form validation policy name validation test ts at callcircustest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at run var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runandtransformresultstojestformat var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapterinit js at jestadapter var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js first failure 
policy name validation doesn t allow policy name starting with underscore test failcount
binary_label: 1
---
row: 248,400
id: 21,016,580,010
type: IssuesEvent
created_at: 2022-03-30 11:36:12
repo: stores-cedcommerce/Dezymart-Store-ReDesign
repo_url: https://api.github.com/repos/stores-cedcommerce/Dezymart-Store-ReDesign
action: closed
title: Product page
labels: Product page Mobile Issue Inprogress Ready to test Fixed
body:
**The actual result:**
1: When we click on the thumbnail images then the border is not coming over the images of thumbnail.
**The url :** https://8jul4942fz5ea8w7-56773443766.shopifypreview.com/products/knee-heating-pads-wrap-for-men-and-women-hot-compress-therapy-thermal-pads-for-cramps-and-arthritis-pain-relief
**The issue :**

2: The sliders arrows are not visible properly in the thumbnail images.
Expected result:
1: The border have to come when we click on the thumbnails images.
2: The sliders arrows have to be visible properly in the product page.
-----------------------------------------------------------------------------------------------------------------------------------
**Retesting:**
**Actual result:**
1: The border have to be visible properly, and slider have to be in a fixed position it is moving up and down when we are clicking on the thumbnail images.
**The issue video:**
https://www.awesomescreenshot.com/video/8117913?key=0d0f8057b9894b1f7e7dfb58681f2533
**Expected result:**
1: The sliders arrow have to be visible properly and the border have to be visible properly, and some time when we are clicking on the sliders then slider is moving down and extra spacing is coming.
index: 1.0
text_combine:
Product page - **The actual result:**
1: When we click on the thumbnail images then the border is not coming over the images of thumbnail.
**The url :** https://8jul4942fz5ea8w7-56773443766.shopifypreview.com/products/knee-heating-pads-wrap-for-men-and-women-hot-compress-therapy-thermal-pads-for-cramps-and-arthritis-pain-relief
**The issue :**

2: The sliders arrows are not visible properly in the thumbnail images.
Expected result:
1: The border have to come when we click on the thumbnails images.
2: The sliders arrows have to be visible properly in the product page.
-----------------------------------------------------------------------------------------------------------------------------------
**Retesting:**
**Actual result:**
1: The border have to be visible properly, and slider have to be in a fixed position it is moving up and down when we are clicking on the thumbnail images.
**The issue video:**
https://www.awesomescreenshot.com/video/8117913?key=0d0f8057b9894b1f7e7dfb58681f2533
**Expected result:**
1: The sliders arrow have to be visible properly and the border have to be visible properly, and some time when we are clicking on the sliders then slider is moving down and extra spacing is coming.
label: test
text:
product page the actual result when we click on the thumbnail images then the border is not coming over the images of thumbnail the url the issue the sliders arrows are not visible properly in the thumbnail images expected result the border have to come when we click on the thumbnails images the sliders arrows have to be visible properly in the product page retesting actual result the border have to be visible properly and slider have to be in a fixed position it is moving up and down when we are clicking on the thumbnail images the issue video expected result the sliders arrow have to be visible properly and the border have to be visible properly and some time when we are clicking on the sliders then slider is moving down and extra spacing is coming
binary_label: 1
---
row: 38,911
id: 5,204,519,853
type: IssuesEvent
created_at: 2017-01-24 15:45:41
repo: publiclab/plots2
repo_url: https://api.github.com/repos/publiclab/plots2
action: closed
title: Add code coverage to plots2
labels: in progress testing
body:
Its time that we did some analytics on our code. Add code coverage and test coverage to plots2. Trying in two different services - [codeclimate](https://codeclimate.com) and [coveralls](http://coveralls.io/)
index: 1.0
text_combine:
Add code coverage to plots2 - Its time that we did some analytics on our code. Add code coverage and test coverage to plots2. Trying in two different services - [codeclimate](https://codeclimate.com) and [coveralls](http://coveralls.io/)
label: test
text:
add code coverage to its time that we did some analytics on our code add code coverage and test coverage to trying in two different services and
binary_label: 1
---
row: 312,470
id: 26,866,372,672
type: IssuesEvent
created_at: 2023-02-04 00:34:30
repo: devssa/onde-codar-em-salvador
repo_url: https://api.github.com/repos/devssa/onde-codar-em-salvador
action: closed
title: [QA] [SENIOR] [SALVADOR] [HOME OFFICE] Senior Quality Analyst | home office at [CAPGEMINI]
labels: SALVADOR HOME OFFICE JAVA SENIOR GIT MOBILE SELENIUM APPIUM JENKINS QUALIDADE ECLIPSE QUALIDADE DE SOFTWARE ANDROID STUDIO HELP WANTED PIPELINE TESTES DE API Stale
body:
<!--
==================================================
PLEASE ONLY POST IF THE JOB IS FOR SALVADOR AND NEIGHBORING CITIES!
Use: "Desenvolvedor Front-end" instead of
"Front-End Developer" \o/
Example: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Job description
- Senior Quality Analyst | home office
## Location
- Salvador
## Benefits
- Health insurance, dental insurance, meal vouchers, life insurance, private pension, English course and corporate university, Coursera, Gympass, Veloe, benefits club, recognition program.
## Requirements
**Mandatory:**
- Completed degree in Information Technology;
- Experience as a Quality Analyst focused on the development process, as well as on process validation and certification;
- Knowledge of Java, Appium, and Selenium for mobile, Jenkins pipelines, Git, versioning, and branching best practices;
- Experience with API testing, Eclipse, Android Studio, SDK.
- 100% remote position.
## Hiring terms
- To be agreed
## Our company
- Capgemini is a place where different people form a single team, develop their creativity, and deliver results. We like learning from each other and having fun along the way.
- Together with its clients, Capgemini creates and delivers business, technology, and digital solutions that meet their needs, enabling them to achieve innovation and competitiveness. As an essentially multicultural company, Capgemini has developed its own way of working, the Collaborative Business ExperienceTM, based on Rightshore®, its worldwide delivery model.
- We offer a range of integrated services that combine cutting-edge technology with deep expertise across many industries and strong control of our four main businesses.
## How to apply
- [Click here to apply](https://www.capgemini.com/br-pt/jobs/analista-de-qualidade-sr-home-office/)
index: 1.0
text_combine:
[QA] [SENIOR] [SALVADOR] [HOME OFFICE] Senior Quality Analyst | home office at [CAPGEMINI] - <!--
==================================================
PLEASE ONLY POST IF THE JOB IS FOR SALVADOR AND NEIGHBORING CITIES!
Use: "Desenvolvedor Front-end" instead of
"Front-End Developer" \o/
Example: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Job description
- Senior Quality Analyst | home office
## Location
- Salvador
## Benefits
- Health insurance, dental insurance, meal vouchers, life insurance, private pension, English course and corporate university, Coursera, Gympass, Veloe, benefits club, recognition program.
## Requirements
**Mandatory:**
- Completed degree in Information Technology;
- Experience as a Quality Analyst focused on the development process, as well as on process validation and certification;
- Knowledge of Java, Appium, and Selenium for mobile, Jenkins pipelines, Git, versioning, and branching best practices;
- Experience with API testing, Eclipse, Android Studio, SDK.
- 100% remote position.
## Hiring terms
- To be agreed
## Our company
- Capgemini is a place where different people form a single team, develop their creativity, and deliver results. We like learning from each other and having fun along the way.
- Together with its clients, Capgemini creates and delivers business, technology, and digital solutions that meet their needs, enabling them to achieve innovation and competitiveness. As an essentially multicultural company, Capgemini has developed its own way of working, the Collaborative Business ExperienceTM, based on Rightshore®, its worldwide delivery model.
- We offer a range of integrated services that combine cutting-edge technology with deep expertise across many industries and strong control of our four main businesses.
## How to apply
- [Click here to apply](https://www.capgemini.com/br-pt/jobs/analista-de-qualidade-sr-home-office/)
label: test
text:
analista de qualidade sr home office na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga analista de qualidade sr home office local salvador benefícios assistência médica assistência odontológica vale refeição seguro de vida previdência privada curso de inglês e universidade corporativa coursera gympass veloe clube de benefícios programa de reconhecimento requisitos obrigatórios superior completo na área de tecnologia da informação experiência como analista de qualidade focado em processo de desenvolvimento bem como na validação e certificação de processos conhecimento em java appium selenium pra mobile pipeline jenkings git versionamento e melhores práticas de criação de branchs experiência em testes de apis eclipse android studio sdk vaga para atuação remota contratação a combinar nossa empresa a capgemini é o lugar onde diferentes pessoas formam um time único desenvolvem a criatividade e entregam resultados gostamos de aprender uns com os outros e de nos divertir durante o processo em conjunto com seus clientes a capgemini cria e entrega soluções de negócios de tecnologia e digitais que atendem às suas necessidades permitindo que conquistem inovação e competitividade como uma empresa essencialmente multicultural a capgemini desenvolveu seu modo próprio de trabalhar o collaborative business experiencetm com base no rightshore® seu modelo de entrega mundial oferecemos uma série de serviços integrados que combinam a tecnologia de ponta com conhecimentos profundos em diversos setores e um forte controle de nossos quatro principais negócios como se candidatar
binary_label: 1
---
row: 76,963
id: 7,550,124,254
type: IssuesEvent
created_at: 2018-04-18 15:58:08
repo: EyeSeeTea/QAApp
repo_url: https://api.github.com/repos/EyeSeeTea/QAApp
action: closed
title: 1.2.9: Improve, incorrect calculation of date of next assessment
labels: buddybug question testing type - bug type - maintenance
body:
Feedback from clussiana@psi.org : Incorrect calculation of date of next assessment. This is related to #255, too.

| Item | Value |
| --- | --- |
| Created | Fri Feb 16 2018 11:03:38 GMT+0000 (UTC) |
| App uptime | undefined |
| Build | 69 |
| Device type | SM-T531 |
| Device name | undefined |
| Screen size | 800 |
| Screen size | 800px by 1280px |
| Battery | 51% Unplugged |
| Memory free | 222 MB / 1376 MB |
| Network IP | undefined |
[Link to buddybuild feedback from build 69](https://dashboard.buddybuild.com/apps/56b408ad65c5670100adf4df/feedback?fid=5a86ba8ae4770400011b3e82&bnum=69)
index: 1.0
text_combine:
1.2.9: Improve, incorrect calculation of date of next assessment - Feedback from clussiana@psi.org : Incorrect calculation of date of next assessment. This is related to #255, too.

| Item | Value |
| --- | --- |
| Created | Fri Feb 16 2018 11:03:38 GMT+0000 (UTC) |
| App uptime | undefined |
| Build | 69 |
| Device type | SM-T531 |
| Device name | undefined |
| Screen size | 800 |
| Screen size | 800px by 1280px |
| Battery | 51% Unplugged |
| Memory free | 222 MB / 1376 MB |
| Network IP | undefined |
[Link to buddybuild feedback from build 69](https://dashboard.buddybuild.com/apps/56b408ad65c5670100adf4df/feedback?fid=5a86ba8ae4770400011b3e82&bnum=69)
label: test
text:
improve incorrect calculation of date of next assessment feedback from clussiana psi org incorrect calculation of date of next assessment this is related to too item value created fri feb gmt utc app uptime undefined build device type sm device name undefined screen size screen size by battery unplugged memory free mb mb network ip undefined
binary_label: 1
---
row: 26,980
id: 4,266,123,921
type: IssuesEvent
created_at: 2016-07-12 13:41:04
repo: TerraME/terrame
repo_url: https://api.github.com/repos/TerraME/terrame
action: opened
title: comments with -- are not recognised in the verification of source code lines
labels: bug Test
body:
Run tests with `lines=true` to see the problem.
index: 1.0
text_combine:
comments with -- are not recognised in the verification of source code lines - Run tests with `lines=true` to see the problem.
label: test
text:
comments with are not recognised in the verification of source code lines run tests with lines true to see the problem
binary_label: 1
---
row: 3,897
id: 2,694,981,703
type: IssuesEvent
created_at: 2015-04-01 23:58:46
repo: printdotio/printio-android-sdk
repo_url: https://api.github.com/repos/printdotio/printio-android-sdk
action: closed
title: High: When choosing a new shipping-to country, defaults to Afghanistan
labels: bug enhancement Ready to Test
body:
Newest SDK version.
Tap on the Shipping To dropdown, pop-up appears asking user if they want to ship to Afghanistan. Also, along the same lines of the currency screen, could we add the United States to the top of the country list?

index: 1.0
text_combine:
High: When choosing a new shipping-to country, defaults to Afghanistan - Newest SDK version.
Tap on the Shipping To dropdown, pop-up appears asking user if they want to ship to Afghanistan. Also, along the same lines of the currency screen, could we add the United States to the top of the country list?

label: test
text:
high when choosing a new shipping to country defaults to afghanistan newest sdk version tap on the shipping to dropdown pop up appears asking user if they want to ship to afghanistan also along the same lines of the currency screen could we add the united states to the top of the country list
binary_label: 1
---
row: 196,782
id: 6,948,941,140
type: IssuesEvent
created_at: 2017-12-06 03:10:11
repo: Daniel-Hoerauf/isa
repo_url: https://api.github.com/repos/Daniel-Hoerauf/isa
action: opened
title: Students will get recommendations on what groups they could join
labels: Priority - high
body:
The web layer will pull information gained from the api (which took info from the model) about which groups had co-views higher than 3 of the group the student is currently looking at
index: 1.0
text_combine:
Students will get recommendations on what groups they could join - The web layer will pull information gained from the api (which took info from the model) about which groups had co-views higher than 3 of the group the student is currently looking at
label: non_test
text:
students will get recommendations on what groups they could join the web layer will pull information gained from the api which took info from the model about which groups had co views higher than of the group the student is currently looking at
binary_label: 0
---
row: 206,258
id: 16,023,862,435
type: IssuesEvent
created_at: 2021-04-21 06:19:49
repo: caos/zitadel
repo_url: https://api.github.com/repos/caos/zitadel
action: closed
title: [Admin UX] Explain Management Roles
labels: documentation enhancement help wanted
body:
**Describe the bug**
There's no description of the management roles either in the Management Console or the documentation.
**Expected behavior**
Explain what each role does. Best in UI & Docs.
index: 1.0
text_combine:
[Admin UX] Explain Management Roles - **Describe the bug**
There's no description of the management roles either in the Management Console or the documentation.
**Expected behavior**
Explain what each role does. Best in UI & Docs.
|
non_test
|
explain management roles describe the bug there s no description of the management roles either in the management console or the documentation expected behavior explain what each role does best in ui docs
| 0
|
63,736
| 6,883,697,767
|
IssuesEvent
|
2017-11-21 10:18:24
|
DEIB-GECO/GMQL
|
https://api.github.com/repos/DEIB-GECO/GMQL
|
closed
|
MAP wrong number of output samples
|
test Urgent
|
I run the following query:
INPUT_1 = SELECT(region: chr== chr2) dataset_1;
INPUT_2 = SELECT(region: chr== chr2) dataset_2;
RES = MAP(avg_score AS AVG(score)) INPUT_1 INPUT_2 ;
MATERIALIZE RES INTO MAP_EXAMPLE_1;
The input_1 has 1 sample, the input_2 has 3 but the res of the map is 6 samples (instead of 3!). I enclosed all the usefull material.
[map.zip](https://github.com/DEIB-GECO/GMQL/files/1453112/map.zip)
|
1.0
|
MAP wrong number of output samples - I run the following query:
INPUT_1 = SELECT(region: chr== chr2) dataset_1;
INPUT_2 = SELECT(region: chr== chr2) dataset_2;
RES = MAP(avg_score AS AVG(score)) INPUT_1 INPUT_2 ;
MATERIALIZE RES INTO MAP_EXAMPLE_1;
The input_1 has 1 sample, the input_2 has 3 but the res of the map is 6 samples (instead of 3!). I enclosed all the usefull material.
[map.zip](https://github.com/DEIB-GECO/GMQL/files/1453112/map.zip)
|
test
|
map wrong number of output samples i run the following query input select region chr dataset input select region chr dataset res map avg score as avg score input input materialize res into map example the input has sample the input has but the res of the map is samples instead of i enclosed all the usefull material
| 1
|
494,269
| 14,247,613,240
|
IssuesEvent
|
2020-11-19 11:42:56
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.amazon.com - see bug description
|
browser-focus-geckoview engine-gecko ml-needsdiagnosis-false priority-critical
|
<!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62099 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.amazon.com/ap/register
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android 7.0
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: when I am creating an account it is showing an internal error
**Steps to Reproduce**:
I want to create a new account but it's showing internal error
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.amazon.com - see bug description - <!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62099 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.amazon.com/ap/register
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android 7.0
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: when I am creating an account it is showing an internal error
**Steps to Reproduce**:
I want to create a new account but it's showing internal error
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description when i am creating an account it is showing an internal error steps to reproduce i want to create a new account but it s showing internal error browser configuration none from with ❤️
| 0
|
205,359
| 15,610,412,999
|
IssuesEvent
|
2021-03-19 13:10:00
|
Azure/azure-sdk-for-python
|
https://api.github.com/repos/Azure/azure-sdk-for-python
|
closed
|
URL encodings for recordings can make a request unfindable by recording infrastructure
|
EngSys feature-request test enhancement
|
If a key has a `:`, `/`, `?`, `=`, etc. within the key, the scrubber needs to look for the key in both an encoded and non-encoded format. It is probably best to change the scrubber.register_name_pair to register an additional pairing with a url encoded version of the key.
|
1.0
|
URL encodings for recordings can make a request unfindable by recording infrastructure - If a key has a `:`, `/`, `?`, `=`, etc. within the key, the scrubber needs to look for the key in both an encoded and non-encoded format. It is probably best to change the scrubber.register_name_pair to register an additional pairing with a url encoded version of the key.
|
test
|
url encodings for recordings can make a request unfindable by recording infrastructure if a key has a etc within the key the scrubber needs to look for the key in both an encoded and non encoded format it is probably best to change the scrubber register name pair to register an additional pairing with a url encoded version of the key
| 1
|
102,336
| 4,154,325,169
|
IssuesEvent
|
2016-06-16 11:09:45
|
alexhultman/uWebSockets
|
https://api.github.com/repos/alexhultman/uWebSockets
|
closed
|
Implement permessage-deflate
|
enhancement high priority
|
This is a popular extension and is needed to pass Autobahn fully.
|
1.0
|
Implement permessage-deflate - This is a popular extension and is needed to pass Autobahn fully.
|
non_test
|
implement permessage deflate this is a popular extension and is needed to pass autobahn fully
| 0
|
300,083
| 25,944,726,261
|
IssuesEvent
|
2022-12-16 22:40:46
|
hashicorp/terraform-provider-google
|
https://api.github.com/repos/hashicorp/terraform-provider-google
|
opened
|
Failing test(s): TestAccDNSRecordSet_routingPolicy
|
test failure
|
<!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. --->
<!-- i.e. "Consistently since X date" or "X% failure in MONTH" -->
Failure rate: 100% since 2022-12-06
<!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. -->
Impacted tests:
- TestAccDNSRecordSet_routingPolicy
<!-- Link to the nightly build(s), ideally with one impacted test opened -->
Nightly builds:
- https://ci-oss.hashicorp.engineering/buildConfiguration/GoogleCloudBeta_ProviderGoogleCloudBetaGoogleProject/360401?buildTab=tests&expandedTest=6933794115364398004
<!-- The error message that displays in the tests tab, for reference -->
Message:
```
Error: Error creating DNS RecordSet: googleapi: Error 400: Routing policies referencing internal load balancers cannot be added to public zones, internalLoadBalancerDisallowedInPublicZone
```
|
1.0
|
Failing test(s): TestAccDNSRecordSet_routingPolicy - <!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. --->
<!-- i.e. "Consistently since X date" or "X% failure in MONTH" -->
Failure rate: 100% since 2022-12-06
<!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. -->
Impacted tests:
- TestAccDNSRecordSet_routingPolicy
<!-- Link to the nightly build(s), ideally with one impacted test opened -->
Nightly builds:
- https://ci-oss.hashicorp.engineering/buildConfiguration/GoogleCloudBeta_ProviderGoogleCloudBetaGoogleProject/360401?buildTab=tests&expandedTest=6933794115364398004
<!-- The error message that displays in the tests tab, for reference -->
Message:
```
Error: Error creating DNS RecordSet: googleapi: Error 400: Routing policies referencing internal load balancers cannot be added to public zones, internalLoadBalancerDisallowedInPublicZone
```
|
test
|
failing test s testaccdnsrecordset routingpolicy failure rate since impacted tests testaccdnsrecordset routingpolicy nightly builds message error error creating dns recordset googleapi error routing policies referencing internal load balancers cannot be added to public zones internalloadbalancerdisallowedinpubliczone
| 1
|
287,858
| 24,868,543,618
|
IssuesEvent
|
2022-10-27 13:43:32
|
openrocket/openrocket
|
https://api.github.com/repos/openrocket/openrocket
|
closed
|
Write unit tests for software updater
|
good first issue Unit testing
|
These unit tests should verify all the different scenarios for the software updater checker: if there's a newer release available, if you have a newer release than the official release, if you have the same release, if you have a bogus release etc.
|
1.0
|
Write unit tests for software updater - These unit tests should verify all the different scenarios for the software updater checker: if there's a newer release available, if you have a newer release than the official release, if you have the same release, if you have a bogus release etc.
|
test
|
write unit tests for software updater these unit tests should verify all the different scenarios for the software updater checker if there s a newer release available if you have a newer release than the official release if you have the same release if you have a bogus release etc
| 1
|
312,873
| 26,882,967,903
|
IssuesEvent
|
2023-02-05 21:16:48
|
IntellectualSites/PlotSquared
|
https://api.github.com/repos/IntellectualSites/PlotSquared
|
opened
|
PlayerQuitEvent
|
Requires Testing
|
### Server Implementation
Paper
### Server Version
1.18.2
### Describe the bug
When you kick / ban / disconnect a player, it will show this error in console.
https://pastebin.com/QLPPVP3L
https://pastebin.com/W1t1yNY5
### To Reproduce
By kicking / banning / disconnecting, the error happens
https://pastebin.com/W1t1yNY5
### Expected behaviour
Not getting errors
### Screenshots / Videos
_No response_
### Error log (if applicable)
_No response_
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/ce9e7089de1843fa9ba026cb9ec94c14
### PlotSquared Version
6.10.9-premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
_No response_
|
1.0
|
PlayerQuitEvent - ### Server Implementation
Paper
### Server Version
1.18.2
### Describe the bug
When you kick / ban / disconnect a player, it will show this error in console.
https://pastebin.com/QLPPVP3L
https://pastebin.com/W1t1yNY5
### To Reproduce
By kicking / banning / disconnecting, the error happens
https://pastebin.com/W1t1yNY5
### Expected behaviour
Not getting errors
### Screenshots / Videos
_No response_
### Error log (if applicable)
_No response_
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/ce9e7089de1843fa9ba026cb9ec94c14
### PlotSquared Version
6.10.9-premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
_No response_
|
test
|
playerquitevent server implementation paper server version describe the bug when you kick ban disconnect a player it will show this error in console to reproduce by kicking banning disconnecting the error happens expected behaviour not getting errors screenshots videos no response error log if applicable no response plot debugpaste plotsquared version premium checklist i have included a plot debugpaste i am using the newest build from and the issue still persists anything else no response
| 1
|
159,395
| 12,474,935,618
|
IssuesEvent
|
2020-05-29 10:34:33
|
redhat-developer/service-binding-operator
|
https://api.github.com/repos/redhat-developer/service-binding-operator
|
opened
|
Add tests for Empty Service Selector scenario
|
unit-test
|
## Motivation
Unit tests for [empty service selector scenario](https://github.com/redhat-developer/service-binding-operator/blob/master/pkg/controller/servicebindingrequest/reconciler.go#L131) are missing which is needed.
|
1.0
|
Add tests for Empty Service Selector scenario - ## Motivation
Unit tests for [empty service selector scenario](https://github.com/redhat-developer/service-binding-operator/blob/master/pkg/controller/servicebindingrequest/reconciler.go#L131) are missing which is needed.
|
test
|
add tests for empty service selector scenario motivation unit tests for are missing which is needed
| 1
|
743,122
| 25,888,000,322
|
IssuesEvent
|
2022-12-14 15:50:14
|
SETI/pds-oops
|
https://api.github.com/repos/SETI/pds-oops
|
opened
|
Remove / deprecate PolynimialFOV and RadialFOV classes
|
A-Cleanup Effort 3 Easy B-OOPS Priority 5 Minor
|
These classes will be superseded by PolyFOV and BarrelFOV (see issue #26). Once they are adequately validated, these current classes can be removed.
|
1.0
|
Remove / deprecate PolynimialFOV and RadialFOV classes - These classes will be superseded by PolyFOV and BarrelFOV (see issue #26). Once they are adequately validated, these current classes can be removed.
|
non_test
|
remove deprecate polynimialfov and radialfov classes these classes will be superseded by polyfov and barrelfov see issue once they are adequately validated these current classes can be removed
| 0
|
303,448
| 26,208,021,678
|
IssuesEvent
|
2023-01-04 01:44:25
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/discover/search_source_alert·ts - Discover alerting Search source Alert should show time field validation error
|
failed-test Team:DataDiscovery
|
A test failed on a tracked branch
```
TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="esQueryAlertExpressionError"])
Wait timed out after 10281ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-67b254da5c2dae6f/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:907:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
remoteStacktrace: ''
}
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/25109#01853690-39e3-4a23-8257-b69e0a46361e)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/discover/search_source_alert·ts","test.name":"Discover alerting Search source Alert should show time field validation error","test.failCount":1}} -->
|
1.0
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/discover/search_source_alert·ts - Discover alerting Search source Alert should show time field validation error - A test failed on a tracked branch
```
TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="esQueryAlertExpressionError"])
Wait timed out after 10281ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-67b254da5c2dae6f/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:907:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
remoteStacktrace: ''
}
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/25109#01853690-39e3-4a23-8257-b69e0a46361e)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/discover/search_source_alert·ts","test.name":"Discover alerting Search source Alert should show time field validation error","test.failCount":1}} -->
|
test
|
failing test chrome x pack ui functional tests x pack test functional with es ssl apps discover search source alert·ts discover alerting search source alert should show time field validation error a test failed on a tracked branch timeouterror waiting for element to be located by css selector wait timed out after at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules selenium webdriver lib webdriver js at runmicrotasks at processticksandrejections node internal process task queues remotestacktrace first failure
| 1
|
585,770
| 17,533,554,005
|
IssuesEvent
|
2021-08-12 02:20:29
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Unable to install grpcio on Alpine Linux distro
|
kind/bug priority/P2
|
<!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
Python
### What operating system (Linux, Windows,...) and version?
Alpine Linux, 3.14
### What runtime / compiler are you using (e.g. python version or version of gcc)
grpcio v1.39.0
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there’s not enough information to debug the problem, gRPC team may close the issue at their discretion. You’re welcome to re-open the issue once you have a reproduction.
1. Using docker desktop on windows 10, run a container with image `python:3.9-alpine`
2. Install `grpcio v1.39.0`
```
docker container run -it --rm --name python-alpine python:3.9-alpine sh
/ # pip install grpcio==1.39.0
```
### What did you expect to see?
`grpcio` package installed successfully
```
pip install grpcio
Collecting grpcio
Downloading grpcio-1.39.0-cp39-cp39-manylinux2014_x86_64.whl (4.3 MB)
|████████████████████████████████| 4.3 MB 7.7 MB/s
Collecting six>=1.5.2
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Installing collected packages: six, grpcio
Successfully installed grpcio-1.39.0 six-1.16.0
```
### What did you see instead?
```
/ # pip install grpcio==1.39.0
Collecting grpcio==1.39.0
Downloading grpcio-1.39.0.tar.gz (21.3 MB)
|████████████████████████████████| 21.3 MB 10.0 MB/s
ERROR: Command errored out with exit status 1: python setup.py
command: /usr/local/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py'"'"'; __file__='"'"'/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-e7s1nsjk
cwd: /tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/
Complete output (11 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py", line 257, in <module>
if check_linker_need_libatomic():
File "/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py", line 204, in check_linker_need_libatomic
cpp_test = subprocess.Popen([cxx, '-x', 'c++', '-std=c++11', '-'],
File "/usr/local/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'c++'
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/07/ea/398472e896f529d23fb58e33f01298dfc554a341d58f87c1ea5ad817208e/grpcio-1.39.0.tar.gz#sha256=57974361a459d6fe04c9ae0af1845974606612249f467bbd2062d963cb90f407 (from https://pypi.org/simple/grpcio/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement grpcio==1.39.0 (from versions: 0.4.0a0, 0.4.0a1, 0.4.0a2, 0.4.0a3, 0.4.0a4, 0.4.0a5, 0.4.0a6, 0.4.0a7, 0.4.0a8, 0.4.0a13, 0.4.0a14, 0.5.0a0, 0.5.0a1, 0.5.0a2, 0.9.0a0, 0.9.0a1, 0.10.0a0, 0.11.0b0, 0.11.0b1, 0.12.0b0, 0.13.0, 0.13.1rc1,
0.13.1, 0.14.0rc1, 0.14.0, 0.15.0, 1.0.0rc1, 1.0.0rc2, 1.0.0, 1.0.1rc1, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.1.3, 1.2.0, 1.2.1, 1.3.0, 1.3.5, 1.4.0, 1.6.0, 1.6.3, 1.7.0, 1.7.3, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.6, 1.9.0rc1, 1.9.0rc2, 1.9.0rc3, 1.9.0, 1.9.1, 1.10.0rc1, 1.10.0rc2, 1.10.0, 1.10.1rc1, 1.10.1rc2, 1.10.1, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1rc1, 1.11.1, 1.12.0rc1, 1.12.0, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0rc3, 1.13.0, 1.14.0rc1, 1.14.0rc2, 1.14.0, 1.14.1, 1.14.2rc1, 1.14.2, 1.15.0rc1, 1.15.0, 1.16.0rc1, 1.16.0, 1.16.1rc1, 1.16.1, 1.17.0rc1, 1.17.0, 1.17.1rc1, 1.17.1, 1.18.0rc1, 1.18.0, 1.19.0rc1, 1.19.0, 1.20.0rc1, 1.20.0rc2, 1.20.0rc3, 1.20.0, 1.20.1, 1.21.0rc1, 1.21.1rc1, 1.21.1, 1.22.0rc1, 1.22.0, 1.22.1, 1.23.0rc1, 1.23.0, 1.23.1, 1.24.0rc1, 1.24.0, 1.24.1, 1.24.3, 1.25.0rc1, 1.25.0, 1.26.0rc1, 1.26.0, 1.27.0rc1, 1.27.0rc2, 1.27.1, 1.27.2, 1.28.0.dev0, 1.28.0rc1, 1.28.0rc2, 1.28.0rc3, 1.28.1, 1.29.0, 1.30.0rc1, 1.30.0, 1.31.0rc1, 1.31.0rc2, 1.31.0, 1.32.0rc1, 1.32.0, 1.33.0rc1, 1.33.0rc2, 1.33.1, 1.33.2, 1.34.0rc1, 1.34.0, 1.34.1, 1.35.0rc1, 1.35.0, 1.36.0rc1, 1.36.0, 1.36.1, 1.37.0rc1, 1.37.0, 1.37.1, 1.38.0rc1, 1.38.0, 1.38.1, 1.39.0rc1, 1.39.0)
ERROR: No matching distribution found for grpcio==1.39.0
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
- Just in case the issue may be temporary, here is the SHA of the python image which im using in docker
`python@sha256:853365cd7245aec1580879933f2c5ea1a45c1ceb868c05480a58cba443ffb1e5`
- It started happening this Monday (9 August 2021)
|
1.0
|
Unable to install grpcio on Alpine Linux distro - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
Python
### What operating system (Linux, Windows,...) and version?
Alpine Linux, 3.14
### What runtime / compiler are you using (e.g. python version or version of gcc)
grpcio v1.39.0
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there’s not enough information to debug the problem, gRPC team may close the issue at their discretion. You’re welcome to re-open the issue once you have a reproduction.
1. Using docker desktop on windows 10, run a container with image `python:3.9-alpine`
2. Install `grpcio v1.39.0`
```
docker container run -it --rm --name python-alpine python:3.9-alpine sh
/ # pip install grpcio==1.39.0
```
### What did you expect to see?
`grpcio` package installed successfully
```
pip install grpcio
Collecting grpcio
Downloading grpcio-1.39.0-cp39-cp39-manylinux2014_x86_64.whl (4.3 MB)
|████████████████████████████████| 4.3 MB 7.7 MB/s
Collecting six>=1.5.2
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Installing collected packages: six, grpcio
Successfully installed grpcio-1.39.0 six-1.16.0
```
### What did you see instead?
```
/ # pip install grpcio==1.39.0
Collecting grpcio==1.39.0
Downloading grpcio-1.39.0.tar.gz (21.3 MB)
|████████████████████████████████| 21.3 MB 10.0 MB/s
ERROR: Command errored out with exit status 1: python setup.py
command: /usr/local/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py'"'"'; __file__='"'"'/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-e7s1nsjk
cwd: /tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/
Complete output (11 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py", line 257, in <module>
if check_linker_need_libatomic():
File "/tmp/pip-install-127ja51a/grpcio_6784ae116c334fd698b10432521236db/setup.py", line 204, in check_linker_need_libatomic
cpp_test = subprocess.Popen([cxx, '-x', 'c++', '-std=c++11', '-'],
File "/usr/local/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'c++'
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/07/ea/398472e896f529d23fb58e33f01298dfc554a341d58f87c1ea5ad817208e/grpcio-1.39.0.tar.gz#sha256=57974361a459d6fe04c9ae0af1845974606612249f467bbd2062d963cb90f407 (from https://pypi.org/simple/grpcio/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement grpcio==1.39.0 (from versions: 0.4.0a0, 0.4.0a1, 0.4.0a2, 0.4.0a3, 0.4.0a4, 0.4.0a5, 0.4.0a6, 0.4.0a7, 0.4.0a8, 0.4.0a13, 0.4.0a14, 0.5.0a0, 0.5.0a1, 0.5.0a2, 0.9.0a0, 0.9.0a1, 0.10.0a0, 0.11.0b0, 0.11.0b1, 0.12.0b0, 0.13.0, 0.13.1rc1,
0.13.1, 0.14.0rc1, 0.14.0, 0.15.0, 1.0.0rc1, 1.0.0rc2, 1.0.0, 1.0.1rc1, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.1.3, 1.2.0, 1.2.1, 1.3.0, 1.3.5, 1.4.0, 1.6.0, 1.6.3, 1.7.0, 1.7.3, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.6, 1.9.0rc1, 1.9.0rc2, 1.9.0rc3, 1.9.0, 1.9.1, 1.10.0rc1, 1.10.0rc2, 1.10.0, 1.10.1rc1, 1.10.1rc2, 1.10.1, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1rc1, 1.11.1, 1.12.0rc1, 1.12.0, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0rc3, 1.13.0, 1.14.0rc1, 1.14.0rc2, 1.14.0, 1.14.1, 1.14.2rc1, 1.14.2, 1.15.0rc1, 1.15.0, 1.16.0rc1, 1.16.0, 1.16.1rc1, 1.16.1, 1.17.0rc1, 1.17.0, 1.17.1rc1, 1.17.1, 1.18.0rc1, 1.18.0, 1.19.0rc1, 1.19.0, 1.20.0rc1, 1.20.0rc2, 1.20.0rc3, 1.20.0, 1.20.1, 1.21.0rc1, 1.21.1rc1, 1.21.1, 1.22.0rc1, 1.22.0, 1.22.1, 1.23.0rc1, 1.23.0, 1.23.1, 1.24.0rc1, 1.24.0, 1.24.1, 1.24.3, 1.25.0rc1, 1.25.0, 1.26.0rc1, 1.26.0, 1.27.0rc1, 1.27.0rc2, 1.27.1, 1.27.2, 1.28.0.dev0, 1.28.0rc1, 1.28.0rc2, 1.28.0rc3, 1.28.1, 1.29.0, 1.30.0rc1, 1.30.0, 1.31.0rc1, 1.31.0rc2, 1.31.0, 1.32.0rc1, 1.32.0, 1.33.0rc1, 1.33.0rc2, 1.33.1, 1.33.2, 1.34.0rc1, 1.34.0, 1.34.1, 1.35.0rc1, 1.35.0, 1.36.0rc1, 1.36.0, 1.36.1, 1.37.0rc1, 1.37.0, 1.37.1, 1.38.0rc1, 1.38.0, 1.38.1, 1.39.0rc1, 1.39.0)
ERROR: No matching distribution found for grpcio==1.39.0
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
- Just in case the issue may be temporary, here is the SHA of the python image which im using in docker
`python@sha256:853365cd7245aec1580879933f2c5ea1a45c1ceb868c05480a58cba443ffb1e5`
- It started happening this Monday (9 August 2021)
|
non_test
|
unable to install grpcio on alpine linux distro please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using python what operating system linux windows and version alpine linux what runtime compiler are you using e g python version or version of gcc grpcio what did you do please provide either a unit test for reproducing the bug or specific steps for us to follow to reproduce the bug if there’s not enough information to debug the problem grpc team may close the issue at their discretion you’re welcome to re open the issue once you have a reproduction using docker desktop on windows run a container with image python alpine install grpcio docker container run it rm name python alpine python alpine sh pip install grpcio what did you expect to see grpcio package installed successfully pip install grpcio collecting grpcio downloading grpcio whl mb ████████████████████████████████ mb mb s collecting six downloading six none any whl kb installing collected packages six grpcio successfully installed grpcio six what did you see instead pip install grpcio collecting grpcio downloading grpcio tar gz mb ████████████████████████████████ mb mb s error command errored out with exit status python setup py command usr local bin python c import io os sys setuptools tokenize sys argv tmp pip install grpcio setup py file tmp pip install grpcio setup py f getattr tokenize open open file if os path exists file else io stringio from setuptools import setup setup code f read replace r n n f close exec compile code file exec egg info egg base tmp pip pip egg info cwd 
tmp pip install grpcio complete output lines traceback most recent call last file line in file tmp pip install grpcio setup py line in if check linker need libatomic file tmp pip install grpcio setup py line in check linker need libatomic cpp test subprocess popen file usr local lib subprocess py line in init self execute child args executable preexec fn close fds file usr local lib subprocess py line in execute child raise child exception type errno num err msg err filename filenotfounderror no such file or directory c warning discarding from command errored out with exit status python setup py egg info check the logs for full command output error could not find a version that satisfies the requirement grpcio from versions error no matching distribution found for grpcio make sure you include information that can help us debug full error message exception listing stack trace logs see for how to diagnose problems better anything else we should know about your project environment just in case the issue may be temporary here is the sha of the python image which im using in docker python it started happening this monday august
| 0
|
59,446
| 24,769,314,750
|
IssuesEvent
|
2022-10-22 23:51:06
|
covid-projections/act-now-links-service
|
https://api.github.com/repos/covid-projections/act-now-links-service
|
closed
|
`/getShareLinkUrl` requests being blocked due to (an assumed) CORS issue
|
bug links-service
|
Requests to the links service began failing unexpectedly (without any changes as far as I can tell) due to a CORS issue.
Requests succeed in [CORS testers](https://cors-test.codehappy.dev/?url=https%3A%2F%2Fus-central1-act-now-links-dev.cloudfunctions.net%2Fapi%2FgetShareLinkUrl%2Fhttp%3A%2F%2Fhackathon-september-2022-4dve.vercel.app%2F%2F%2Fus%2Farkansas-ar-pope-county&method=get).
Example of blocked/failed request:

Note the `301: Permanently Moved` response; AFAIK this request shouldn't be making any redirects. It's possible that the request is being redirected (though I don't think it should be) and the `access-control-allow-origin: *` header is being lost/not forwarded along the way, resulting in the request being blocked.
|
1.0
|
`/getShareLinkUrl` requests being blocked due to (an assumed) CORS issue - Requests to the links service began failing unexpectedly (without any changes as far as I can tell) due to a CORS issue.
Requests succeed in [CORS testers](https://cors-test.codehappy.dev/?url=https%3A%2F%2Fus-central1-act-now-links-dev.cloudfunctions.net%2Fapi%2FgetShareLinkUrl%2Fhttp%3A%2F%2Fhackathon-september-2022-4dve.vercel.app%2F%2F%2Fus%2Farkansas-ar-pope-county&method=get).
Example of blocked/failed request:

Note the `301: Permanently Moved` response; AFAIK this request shouldn't be making any redirects. It's possible that the request is being redirected (though I don't think it should be) and the `access-control-allow-origin: *` header is being lost/not forwarded along the way, resulting in the request being blocked.
|
non_test
|
getsharelinkurl requests being blocked due to an assumed cors issue requests to the links service began failing unexpectedly without any changes as far as i can d due to a cors issue requests succeed in example of blocked failed request note the permenantly moved response afaik this request shouldn t be making any redirects it s possible that if the request is being redirected though i don t think it should be and the access control allow origin header is being lost not forwarded resulting in the request being blocked
| 0
|
546,159
| 16,005,290,590
|
IssuesEvent
|
2021-04-20 01:33:23
|
membermatters/MemberMatters
|
https://api.github.com/repos/membermatters/MemberMatters
|
closed
|
Long login sessions
|
bug high priority
|
It would be great to have long, sticky login sessions so I don't have to authenticate every time I open the portal.
|
1.0
|
Long login sessions - It would be great to have long, sticky login sessions so I don't have to authenticate every time I open the portal.
|
non_test
|
long login sessions it would be great to have long sticky login sessions so i dont have to auth every time i open the portal
| 0
|
36,698
| 5,077,609,370
|
IssuesEvent
|
2016-12-28 10:55:54
|
cyphar/umoci
|
https://api.github.com/repos/cyphar/umoci
|
opened
|
test: increase coverage
|
test/unit
|
I'd prefer that we hit a unit test coverage of ~80%; the current testing isn't enough, IMO. The problem is that there are a lot of error paths that we will not be able to hit.
|
1.0
|
test: increase coverage - I'd prefer that we hit a unit test coverage of ~80%; the current testing isn't enough, IMO. The problem is that there are a lot of error paths that we will not be able to hit.
|
test
|
test increase coverage i d prefer if we can hit a unit test coverage of currently it s not enough testing imo the problem is that there s a lot of error paths that we will not be able to hit
| 1
|
112,231
| 9,558,240,545
|
IssuesEvent
|
2019-05-03 13:46:07
|
saltstack/salt
|
https://api.github.com/repos/saltstack/salt
|
closed
|
unit.utils.test_vmware.DisconnectTestCase.test_disconnect_raise_vim_fault
|
2019.2.1 Test Failure
|
2019.2.1 failed [salt-fedora-29-py3](https://jenkinsci.saltstack.com/job/2019.2.1/job/salt-fedora-29-py3/11/testReport/junit/unit.utils.test_vmware/DisconnectTestCase/test_disconnect_raise_vim_fault)
---
<module 'salt.utils.vmware' from '/tmp/kitchen/testing/salt/utils/vmware.py'> does not have the attribute 'Disconnect'
```
Traceback (most recent call last):
File "/tmp/kitchen/testing/tests/unit/utils/test_vmware.py", line 2005, in test_disconnect_raise_vim_fault
with patch('salt.utils.vmware.Disconnect', MagicMock(side_effect=exc)):
File "/usr/local/lib/python3.7/site-packages/mock/mock.py", line 1393, in __enter__
original, local = self.get_original()
File "/usr/local/lib/python3.7/site-packages/mock/mock.py", line 1367, in get_original
"{} does not have the attribute {!r}".format(target, name)
AttributeError: <module 'salt.utils.vmware' from '/tmp/kitchen/testing/salt/utils/vmware.py'> does not have the attribute 'Disconnect'
```
|
1.0
|
unit.utils.test_vmware.DisconnectTestCase.test_disconnect_raise_vim_fault - 2019.2.1 failed [salt-fedora-29-py3](https://jenkinsci.saltstack.com/job/2019.2.1/job/salt-fedora-29-py3/11/testReport/junit/unit.utils.test_vmware/DisconnectTestCase/test_disconnect_raise_vim_fault)
---
<module 'salt.utils.vmware' from '/tmp/kitchen/testing/salt/utils/vmware.py'> does not have the attribute 'Disconnect'
```
Traceback (most recent call last):
File "/tmp/kitchen/testing/tests/unit/utils/test_vmware.py", line 2005, in test_disconnect_raise_vim_fault
with patch('salt.utils.vmware.Disconnect', MagicMock(side_effect=exc)):
File "/usr/local/lib/python3.7/site-packages/mock/mock.py", line 1393, in __enter__
original, local = self.get_original()
File "/usr/local/lib/python3.7/site-packages/mock/mock.py", line 1367, in get_original
"{} does not have the attribute {!r}".format(target, name)
AttributeError: <module 'salt.utils.vmware' from '/tmp/kitchen/testing/salt/utils/vmware.py'> does not have the attribute 'Disconnect'
```
|
test
|
unit utils test vmware disconnecttestcase test disconnect raise vim fault failed does not have the attribute disconnect traceback most recent call last file tmp kitchen testing tests unit utils test vmware py line in test disconnect raise vim fault with patch salt utils vmware disconnect magicmock side effect exc file usr local lib site packages mock mock py line in enter original local self get original file usr local lib site packages mock mock py line in get original does not have the attribute r format target name attributeerror does not have the attribute disconnect
| 1
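The `AttributeError` in the record above is `unittest.mock`'s standard behaviour when a patch target no longer exists on the module. A minimal sketch, using a throwaway module as a stand-in for `salt.utils.vmware` (the module name here is purely illustrative):

```python
# Reproduce "module ... does not have the attribute 'Disconnect'":
# mock refuses to patch a name that does not exist unless create=True.
import types
from unittest import mock

mymod = types.ModuleType("mymod")  # a module with no Disconnect attribute

failed_without_create = False
try:
    with mock.patch.object(mymod, "Disconnect", mock.MagicMock()):
        pass
except AttributeError:
    failed_without_create = True  # same failure mode as the traceback above

# create=True tells mock to create (and later remove) the attribute for the
# duration of the patch -- the usual fix when a patched API has been removed.
with mock.patch.object(mymod, "Disconnect", mock.MagicMock(), create=True):
    mymod.Disconnect()
    patched_ok = callable(mymod.Disconnect)

print(failed_without_create, patched_ok)
```

Whether `create=True` is the right fix depends on whether the test should still exist once the real API is gone; it only silences the lookup, it does not restore the removed function.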
|
23,450
| 11,966,016,681
|
IssuesEvent
|
2020-04-06 01:50:19
|
rapidsai/cudf
|
https://api.github.com/repos/rapidsai/cudf
|
closed
|
[BUG] Reading huge csv by chunks is too slow
|
Performance bug cuIO
|
**Bug description**
I want to read an 8.4G CSV file with 100 million lines. To do that, I'm reading a million lines per step using the _nrows_ and _skiprows_ arguments of _cudf.read_csv_. But each iteration takes at least 42 seconds.
I'm doing a batching process:
* read a chunk of the csv file and create a dataframe with _cudf.read_csv_,
* execute some operations over the dataframe, and
* write the result dataframe to a file.
I do that because the whole file can't be read with my nVidia card (4G).
**To reproduce bug**
Basically, I execute the following code:
```python
start = 0
step = 10**6
while True:
df = cudf.read_csv('file.csv',
delimiter='|',
names=names,
dtype=dtype,
usecols=usecols,
nrows=step,
skiprows=start)
start += step
```
where _file.csv_ is a csv file with 16 columns (6 str, 4 int64, and 6 int8).
**Expected behavior**
I expect each iteration to take less time.
**Environment overview (please complete the following information)**
- Environment location: bare-metal
- nVidia driver version: 396.37
- nVidia gpu: GeForce GTX 1050 Ti with 4G
- Method for cuDF installation: conda
|
True
|
[BUG] Reading huge csv by chunks is too slow - **Bug description**
I want to read an 8.4G CSV file with 100 million lines. To do that, I'm reading a million lines per step using the _nrows_ and _skiprows_ arguments of _cudf.read_csv_. But each iteration takes at least 42 seconds.
I'm doing a batching process:
* read a chunk of the csv file and create a dataframe with _cudf.read_csv_,
* execute some operations over the dataframe, and
* write the result dataframe to a file.
I do that because the whole file can't be read with my nVidia card (4G).
**To reproduce bug**
Basically, I execute the following code:
```python
start = 0
step = 10**6
while True:
df = cudf.read_csv('file.csv',
delimiter='|',
names=names,
dtype=dtype,
usecols=usecols,
nrows=step,
skiprows=start)
start += step
```
where _file.csv_ is a csv file with 16 columns (6 str, 4 int64, and 6 int8).
**Expected behavior**
I expect each iteration to take less time.
**Environment overview (please complete the following information)**
- Environment location: bare-metal
- nVidia driver version: 396.37
- nVidia gpu: GeForce GTX 1050 Ti with 4G
- Method for cuDF installation: conda
|
non_test
|
reading huge csv by chunks is too slow bug description i want to read an csv file with millions lines to do that i m reading a million lines by step using the nrows and skiprows argument of cudf read csv but each iteration take at least seconds i m doing a batching process read a chunk of the csv file and create a dataframe with cudf read csv execute some operations over the dataframe and write the result dataframe to a file i do that because the whole file can t be read with my nvidia card to reproduce bug basically i execute the following code python start step while true df cudf read csv file csv delimiter names names dtype dtype usecols usecols nrows step skiprows start start step where file csv is a csv file with columns str and expected behavior i expect to wait less time by iteration environment overview please complete the following information environment location bare metal nvidia driver version nvidia gpu geforce gtx ti with method for cudf installation conda
| 0
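The snippet quoted in the record above loops forever (`while True` with no break) and re-scans the skipped rows on every pass, which is why each chunk gets slower. A hedged sketch of the same nrows/skiprows chunking pattern with a termination condition, using pandas and an in-memory buffer so it runs anywhere (`cudf.read_csv` accepts the same arguments; the data and column names are made up):

```python
# Chunked CSV reading with nrows/skiprows, stopping when a chunk comes
# back short instead of looping forever.
import io
import pandas as pd

csv_data = io.StringIO("a|b\n" + "\n".join(f"{i}|{i * 2}" for i in range(10)))

step = 4      # rows per chunk (a million in the original report)
start = 0
chunks = []
while True:
    csv_data.seek(0)  # rewind the in-memory buffer before each read
    df = pd.read_csv(
        csv_data,
        delimiter="|",
        nrows=step,
        skiprows=range(1, start + 1),  # skip already-read data rows, keep header
    )
    if df.empty:
        break
    chunks.append(df)
    if len(df) < step:  # short chunk => end of file
        break
    start += len(df)

total = sum(len(c) for c in chunks)
print(total)
```

Note that skiprows-based chunking still rescans the file from the top each iteration, so it stays quadratic; for real files, `pandas.read_csv(..., chunksize=...)` (or splitting the file beforehand for cudf) avoids the rescan entirely.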
|
189,620
| 14,516,718,556
|
IssuesEvent
|
2020-12-13 16:56:45
|
kalexmills/github-vet-tests-dec2020
|
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
|
closed
|
iwind/GoIM: src/github.com/iwind/TeaMQ/nets/server_test.go; 9 LoC
|
fresh test tiny
|
Found a possible issue in [iwind/GoIM](https://www.github.com/iwind/GoIM) at [src/github.com/iwind/TeaMQ/nets/server_test.go](https://github.com/iwind/GoIM/blob/4644e1d7bc38a64e43d0ce0d311729ea0c2e975b/src/github.com/iwind/TeaMQ/nets/server_test.go#L29-L37)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable i used in defer or goroutine at line 32
[Click here to see the code in its original context.](https://github.com/iwind/GoIM/blob/4644e1d7bc38a64e43d0ce0d311729ea0c2e975b/src/github.com/iwind/TeaMQ/nets/server_test.go#L29-L37)
<details>
<summary>Click here to show the 9 line(s) of Go which triggered the analyzer.</summary>
```go
for i, c := range clients {
if i != client.id {
go func(c *Client) {
log.Println("write message to ", i)
now := time.Now()
c.Write(message + "|" + fmt.Sprintf("%d", now.Nanosecond()) + "\n")
}(c)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 4644e1d7bc38a64e43d0ce0d311729ea0c2e975b
|
1.0
|
iwind/GoIM: src/github.com/iwind/TeaMQ/nets/server_test.go; 9 LoC -
Found a possible issue in [iwind/GoIM](https://www.github.com/iwind/GoIM) at [src/github.com/iwind/TeaMQ/nets/server_test.go](https://github.com/iwind/GoIM/blob/4644e1d7bc38a64e43d0ce0d311729ea0c2e975b/src/github.com/iwind/TeaMQ/nets/server_test.go#L29-L37)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable i used in defer or goroutine at line 32
[Click here to see the code in its original context.](https://github.com/iwind/GoIM/blob/4644e1d7bc38a64e43d0ce0d311729ea0c2e975b/src/github.com/iwind/TeaMQ/nets/server_test.go#L29-L37)
<details>
<summary>Click here to show the 9 line(s) of Go which triggered the analyzer.</summary>
```go
for i, c := range clients {
if i != client.id {
go func(c *Client) {
log.Println("write message to ", i)
now := time.Now()
c.Write(message + "|" + fmt.Sprintf("%d", now.Nanosecond()) + "\n")
}(c)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 4644e1d7bc38a64e43d0ce0d311729ea0c2e975b
|
test
|
iwind goim src github com iwind teamq nets server test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable i used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for i c range clients if i client id go func c client log println write message to i now time now c write message fmt sprintf d now nanosecond n c leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 1
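The range-loop capture the analyzer flags in the record above (goroutine closing over the loop variable `i`) has a direct Python analogue; a minimal illustration of the same late-binding bug and its fix:

```python
# Closures created in a loop all share the loop variable, so they see its
# final value -- the Python counterpart of Go's range-loop capture finding.
late_bound = [f() for f in [lambda: i for i in range(3)]]

# Binding the value at definition time (a default argument, analogous to
# passing it as the goroutine's func parameter) gives the expected result.
early_bound = [f() for f in [lambda i=i: i for i in range(3)]]

print(late_bound, early_bound)
```

In the Go snippet, `c` is passed as a parameter and is therefore safe; only the captured `i` inside `log.Println` is subject to this bug.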
|
48,586
| 20,195,418,594
|
IssuesEvent
|
2022-02-11 10:12:24
|
BBVA-Openweb/uptime-services
|
https://api.github.com/repos/BBVA-Openweb/uptime-services
|
closed
|
🛑 Openweb Service (PLAY) - Analytics is down
|
status openweb-service-play-analytics
|
In [`58f785a`](https://github.com/BBVA-Openweb/uptime-services/commit/58f785a5bbbaff4e2c1759d27f1da0069ebe0fd4
), Openweb Service (PLAY) - Analytics ($OPENWEB_HEALTH_PLAY/analytics) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 Openweb Service (PLAY) - Analytics is down - In [`58f785a`](https://github.com/BBVA-Openweb/uptime-services/commit/58f785a5bbbaff4e2c1759d27f1da0069ebe0fd4
), Openweb Service (PLAY) - Analytics ($OPENWEB_HEALTH_PLAY/analytics) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
non_test
|
🛑 openweb service play analytics is down in openweb service play analytics openweb health play analytics was down http code response time ms
| 0
|
265,745
| 23,194,734,646
|
IssuesEvent
|
2022-08-01 15:23:28
|
ChainSafe/lodestar
|
https://api.github.com/repos/ChainSafe/lodestar
|
closed
|
Lodestar produced block with invalid attestation during Sepolia instability
|
scope-testnet-debugging
|
Post merge Sepolia has 2 unstable epochs
- the first merge slot was https://sepolia.beaconcha.in/block/115193
- Up until slot 115232, we don't see any of the config issue related validators missing a single slot. 115233 however is missed by lodestar and subsequently 115234&115235 are also missed by Besu and Nimbus. These are providers where no config issue is known, hence it could be a shuffling issue or something else
- The trigger point could be slot 115233, see slot dump from Nimbus node [invalid.tar.gz](https://github.com/ChainSafe/lodestar/files/9150536/invalid.tar.gz)
- Teku also logged that invalid slot. They failed validating the signature but the proposer_index matches the expected one (709) according to beaconcha.in [115233.zip](https://github.com/ChainSafe/lodestar/files/9150573/115233.zip)
- Seems like the block contains an attestation with invalid signature [state_at_115232.ssz.zip](https://github.com/ChainSafe/lodestar/files/9150582/state_at_115232.ssz.zip)
- The offending attestation is
```
IndexedAttestation{attesting_indices=SszList{size=17: 50, 198, 311, 389, 556, 639, 804, 1121, 1238, 1324, 1435, 1448, 1487, 1497, 1612, 1640, 1779}, data=AttestationData{slot=115232, index=0, beacon_block_root=0xacd63ca2b12641cb281fe40a479a5bf37debbba9ae324e4c9089742fdbbe1403, source=Checkpoint{epoch=3599, root=0x5a4b89da91bfe310be27b6cccaf4ead3ec62280d1ed9f4356e6df525a1f2d19b}, target=Checkpoint{epoch=3601, root=0xacd63ca2b12641cb281fe40a479a5bf37debbba9ae324e4c9089742fdbbe1403}}, signature=SszByteVector{0x86330d22c822f54c36456a82420ec64150024016a9ca1dd59218cec47d4d0be8182dbad6eacfff6debecd9ab2c7390840321416e0cb7ce01e6428ddef73f2777ca69e7d7fe448fc1febd86c6bff4d8a509bc29a3ec32e19e3f79aee8661c30ff}}
```
- source is correct, the target is weird. And gap between source and target is a couple of epochs. Source pre-merge and target post-merge.
**Hypothesis**
- attestation is valid wrt side chain and makes it into pool
- beacon node produces block on mainchain (with divergent shuffling) and pulls this attestation from the pool (because it was pre-validated)
- block is invalid because bitfield represents side-chain shuffling and is invalid to the assumed mainchain shuffling in the block
_continue_
- the gap 3599-3601 is present in other attestations https://sepolia.beaconcha.in/epoch/3601 - the target root is wrong
- But it might be correct wrt to the side chain, which the node had blocks available at the time
- The gap here at the point merge implies that there was still agreement in latest finalized and thus the source correct and thus lodestar still importing side-chain blocks
|
1.0
|
Lodestar produced block with invalid attestation during Sepolia instability - Post merge Sepolia has 2 unstable epochs
- the first merge slot was https://sepolia.beaconcha.in/block/115193
- Up until slot 115232, we don't see any of the config issue related validators missing a single slot. 115233 however is missed by lodestar and subsequently 115234&115235 are also missed by Besu and Nimbus. These are providers where no config issue is known, hence it could be a shuffling issue or something else
- The trigger point could be slot 115233, see slot dump from Nimbus node [invalid.tar.gz](https://github.com/ChainSafe/lodestar/files/9150536/invalid.tar.gz)
- Teku also logged that invalid slot. They failed validating the signature but the proposer_index matches the expected one (709) according to beaconcha.in [115233.zip](https://github.com/ChainSafe/lodestar/files/9150573/115233.zip)
- Seems like the block contains an attestation with invalid signature [state_at_115232.ssz.zip](https://github.com/ChainSafe/lodestar/files/9150582/state_at_115232.ssz.zip)
- The offending attestation is
```
IndexedAttestation{attesting_indices=SszList{size=17: 50, 198, 311, 389, 556, 639, 804, 1121, 1238, 1324, 1435, 1448, 1487, 1497, 1612, 1640, 1779}, data=AttestationData{slot=115232, index=0, beacon_block_root=0xacd63ca2b12641cb281fe40a479a5bf37debbba9ae324e4c9089742fdbbe1403, source=Checkpoint{epoch=3599, root=0x5a4b89da91bfe310be27b6cccaf4ead3ec62280d1ed9f4356e6df525a1f2d19b}, target=Checkpoint{epoch=3601, root=0xacd63ca2b12641cb281fe40a479a5bf37debbba9ae324e4c9089742fdbbe1403}}, signature=SszByteVector{0x86330d22c822f54c36456a82420ec64150024016a9ca1dd59218cec47d4d0be8182dbad6eacfff6debecd9ab2c7390840321416e0cb7ce01e6428ddef73f2777ca69e7d7fe448fc1febd86c6bff4d8a509bc29a3ec32e19e3f79aee8661c30ff}}
```
- source is correct, the target is weird. And gap between source and target is a couple of epochs. Source pre-merge and target post-merge.
**Hypothesis**
- attestation is valid wrt side chain and makes it into pool
- beacon node produces block on mainchain (with divergent shuffling) and pulls this attestation from the pool (because it was pre-validated)
- block is invalid because bitfield represents side-chain shuffling and is invalid to the assumed mainchain shuffling in the block
_continue_
- the gap 3599-3601 is present in other attestations https://sepolia.beaconcha.in/epoch/3601 - the target root is wrong
- But it might be correct wrt to the side chain, which the node had blocks available at the time
- The gap here at the point merge implies that there was still agreement in latest finalized and thus the source correct and thus lodestar still importing side-chain blocks
|
test
|
lodestar produced block with invalid attestation during sepolia instability post merge sepolia has unstable epochs the first merge slot was up until slot we don t see any of the config issue related validators missing a single slot however is missed by lodestar and subsequently are also missed by besu and nimbus these are providers where no config issue is known hence it could be a shuffling issue or something else the trigger point could be slot see slot dump from nimbus node teku also logged that invalid slot they failed validating the signature but the proposer index matches the expected one according to beaconcha in seems like the block contains an attestation with invalid signature the offending attestation is indexedattestation attesting indices sszlist size data attestationdata slot index beacon block root source checkpoint epoch root target checkpoint epoch root signature sszbytevector source is correct the target is weird and gap between source and target is a couple of epochs source pre merge and target post merge hypothesis attestation is valid wrt side chain and makes it into pool beacon node produces block on mainchain with divergent shuffling and pulls thsi attestation from the pool because it was pre validated block is invalid because bitfield represents side chain shuffling and is invalid to the assumed mainchain shuffling in the block continue the gap is present in other attestations the target root is wrong but it might be correct wrt to the side chain which the node had blocks available at the time the gap here at the point merge implies that there was still agreement in latest finalized and thus the source correct and thus lodestar still importing side chain blocks
| 1
|
135,266
| 10,968,030,717
|
IssuesEvent
|
2019-11-28 10:44:18
|
DiscordFederation/Erin
|
https://api.github.com/repos/DiscordFederation/Erin
|
closed
|
Improve test coverage
|
tests
|
This is not related to #8. Some simple tests can be added to de-coupled methods and functions.
**Task list**:
- [x] Verify `glia init` retrieves proper files
- [x] Ensure [`glia.cli`](https://github.com/DiscordFederation/Glia/tree/124eca51f8dc8485b17411a2c33ebf58e51339a3/glia/cli) commands work
- [x] Test [`glia.core.utils`](https://github.com/DiscordFederation/Glia/blob/124eca51f8dc8485b17411a2c33ebf58e51339a3/glia/core/utils.py)
|
1.0
|
Improve test coverage - This is not related to #8. Some simple tests can be added to de-coupled methods and functions.
**Task list**:
- [x] Verify `glia init` retrieves proper files
- [x] Ensure [`glia.cli`](https://github.com/DiscordFederation/Glia/tree/124eca51f8dc8485b17411a2c33ebf58e51339a3/glia/cli) commands work
- [x] Test [`glia.core.utils`](https://github.com/DiscordFederation/Glia/blob/124eca51f8dc8485b17411a2c33ebf58e51339a3/glia/core/utils.py)
|
test
|
improve test coverage this is not related to some simple tests can be added to de coupled methods and functions task list verify glia init retrieves proper files ensure commands work test
| 1
|
268,712
| 23,391,427,840
|
IssuesEvent
|
2022-08-11 18:14:04
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Notebook Editor Event - onDidChangeVisibleNotebookEditors on two editor groups
|
integration-test-failure notebook
|
https://github.com/microsoft/vscode/runs/4541821400?check_suite_focus=true#step:18:365
```
1) Notebook Editor
Notebook Editor Event - onDidChangeVisibleNotebookEditors on two editor groups:
Error: Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (D:\a\vscode\vscode\extensions\vscode-api-tests\out\singlefolder-tests\notebook.editor.test.js)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
```
|
1.0
|
Notebook Editor Event - onDidChangeVisibleNotebookEditors on two editor groups - https://github.com/microsoft/vscode/runs/4541821400?check_suite_focus=true#step:18:365
```
1) Notebook Editor
Notebook Editor Event - onDidChangeVisibleNotebookEditors on two editor groups:
Error: Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (D:\a\vscode\vscode\extensions\vscode-api-tests\out\singlefolder-tests\notebook.editor.test.js)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
```
|
test
|
notebook editor event ondidchangevisiblenotebookeditors on two editor groups notebook editor notebook editor event ondidchangevisiblenotebookeditors on two editor groups error timeout of exceeded for async tests and hooks ensure done is called if returning a promise ensure it resolves d a vscode vscode extensions vscode api tests out singlefolder tests notebook editor test js at listontimeout internal timers js at processtimers internal timers js
| 1
|
292,983
| 25,256,045,079
|
IssuesEvent
|
2022-11-15 18:09:40
|
hashicorp/terraform-provider-google
|
https://api.github.com/repos/hashicorp/terraform-provider-google
|
closed
|
Failing test(s): TestAccApigeeInstance_apigeeInstanceServiceAttachmentBasicTestExample (permadiff causing recreate)
|
size/xs test failure crosslinked
|
<!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. --->
<!-- i.e. "Consistently since X date" or "X% failure in MONTH" -->
Failure rate: 100% since June 11 2022
<!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. -->
Impacted tests:
- TestAccApigeeInstance_apigeeInstanceServiceAttachmentBasicTestExample
<!-- Link to the nightly build(s), ideally with one impacted test opened -->
Nightly builds:
- https://ci-oss.hashicorp.engineering/buildConfiguration/GoogleCloud_ProviderGoogleCloudGoogleProject/335482?buildTab=tests&expandedTest=13181212698358612
<!-- The error message that displays in the tests tab, for reference -->
Message:
```
Terraform will perform the following actions:
# google_apigee_instance.apigee_instance must be replaced
-/+ resource "google_apigee_instance" "apigee_instance" {
~ consumer_accept_list = [
"123456",
- "tf-testfppqz67af1",
+ "278360720793",
]
~ host = "10.44.0.2" -> (known after apply)
~ id = "organizations/tf-testfppqz67af1/instances/tf-testfppqz67af1" -> (known after apply)
name = "tf-testfppqz67af1"
~ peering_cidr_range = "SLASH_22" -> (known after apply)
~ port = "443" -> (known after apply)
~ service_attachment = "projects/zb31bb0e851da69fe-tp/regions/us-central1/serviceAttachments/apigee-us-central1-l5lk" -> (known after apply)
# (2 unchanged attributes hidden)
}
Plan: 1 to add, 0 to change, 1 to destroy.
```
|
1.0
|
Failing test(s): TestAccApigeeInstance_apigeeInstanceServiceAttachmentBasicTestExample (permadiff causing recreate) - <!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. --->
<!-- i.e. "Consistently since X date" or "X% failure in MONTH" -->
Failure rate: 100% since June 11 2022
<!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. -->
Impacted tests:
- TestAccApigeeInstance_apigeeInstanceServiceAttachmentBasicTestExample
<!-- Link to the nightly build(s), ideally with one impacted test opened -->
Nightly builds:
- https://ci-oss.hashicorp.engineering/buildConfiguration/GoogleCloud_ProviderGoogleCloudGoogleProject/335482?buildTab=tests&expandedTest=13181212698358612
<!-- The error message that displays in the tests tab, for reference -->
Message:
```
Terraform will perform the following actions:
# google_apigee_instance.apigee_instance must be replaced
-/+ resource "google_apigee_instance" "apigee_instance" {
~ consumer_accept_list = [
"123456",
- "tf-testfppqz67af1",
+ "278360720793",
]
~ host = "10.44.0.2" -> (known after apply)
~ id = "organizations/tf-testfppqz67af1/instances/tf-testfppqz67af1" -> (known after apply)
name = "tf-testfppqz67af1"
~ peering_cidr_range = "SLASH_22" -> (known after apply)
~ port = "443" -> (known after apply)
~ service_attachment = "projects/zb31bb0e851da69fe-tp/regions/us-central1/serviceAttachments/apigee-us-central1-l5lk" -> (known after apply)
# (2 unchanged attributes hidden)
}
Plan: 1 to add, 0 to change, 1 to destroy.
```
|
test
|
failing test s testaccapigeeinstance apigeeinstanceserviceattachmentbasictestexample permadiff causing recreate failure rate since june impacted tests testaccapigeeinstance apigeeinstanceserviceattachmentbasictestexample nightly builds message terraform will perform the following actions google apigee instance apigee instance must be replaced resource google apigee instance apigee instance consumer accept list tf host known after apply id organizations tf instances tf known after apply name tf peering cidr range slash known after apply port known after apply service attachment projects tp regions us serviceattachments apigee us known after apply unchanged attributes hidden plan to add to change to destroy
| 1
|
210,550
| 7,190,812,049
|
IssuesEvent
|
2018-02-02 18:36:47
|
conveyal/trimet-mod-otp
|
https://api.github.com/repos/conveyal/trimet-mod-otp
|
closed
|
UI: For legs with one stop (like on an interline), don't list the stops on the "Ride 1 min, 1 stops" drop down.
|
high priority
|
If a leg only has 1 stop, don't show that "Ride 1 min, 1 stops" drop down.

NOTE: I don't think we really need the separate Trip Viewer button and view (at minimum will need a way to turn that off as a config option in config.yml).
|
1.0
|
UI: For legs with one stop (like on an interline), don't list the stops on the "Ride 1 min, 1 stops" drop down. - If a leg only has 1 stop, don't show that "Ride 1 min, 1 stops" drop down.

NOTE: I don't think we really need the separate Trip Viewer button and view (at minimum will need a way to turn that off as a config option in config.yml).
|
non_test
|
ui for legs with one stop like on an interline don t list the stops on the ride min stops drop down if a leg only has stop don t show that ride min stops drop down note i don t think we really need the separate trip viewer button and view at minimum will need a way to turn that off as a config option in config yml
| 0
|
134,695
| 10,927,183,003
|
IssuesEvent
|
2019-11-22 16:09:44
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
[Flaky Test] diffResources test failing in ci-kubernetes-e2e-gci-gce-ingress
|
kind/failing-test kind/flake priority/important-soon sig/network
|
<!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
ci-kubernetes-e2e-gci-gce-ingress
**Which test(s) are failing**:
diffResources
**Since when has it been failing**:
Failing since 8/8 at around 3pm PDT.
**Testgrid link**:
https://testgrid.k8s.io/sig-release-master-blocking#gci-gce-ingress
**Reason for failure**:
```
W0808 13:10:21.128] 2019/08/08 13:10:21 main.go:316: Something went wrong: encountered 1 errors: [Error: 22 leaked resources
W0808 13:10:21.129] +default-route-0664c6589a381464 bootstrap-e2e 10.158.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-09a35afb44f61daa bootstrap-e2e 10.166.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-09fc8d89464e0aa7 bootstrap-e2e 10.132.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-13d934f6decdf352 bootstrap-e2e 10.156.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-17f5feddac4a07f1 bootstrap-e2e 10.138.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-1ea2cf96c7f35a5b bootstrap-e2e 10.154.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-22434a99e33d7a0e bootstrap-e2e 10.152.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-3426799f9659ebd0 bootstrap-e2e 10.160.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-4ebbc87e89131421 bootstrap-e2e 10.146.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-51cf7756e6ff45ca bootstrap-e2e 10.142.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-7dd9ffc7b7ffc098 bootstrap-e2e 10.172.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-a2e14e4af1d7cb0e bootstrap-e2e 10.140.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-a7208be7960d3a8f bootstrap-e2e 10.168.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-a801c0f409f309ca bootstrap-e2e 10.164.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-bba7e10001568990 bootstrap-e2e 10.128.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-c947e975f1bb7461 bootstrap-e2e 0.0.0.0/0 default-internet-gateway 1000
W0808 13:10:21.131] +default-route-ca01adfbd6f982d7 bootstrap-e2e 10.170.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-d186332f278768ed bootstrap-e2e 10.162.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-d5fdb4b47ed5090f bootstrap-e2e 10.144.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.132] +default-route-eddc385808141165 bootstrap-e2e 10.150.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.132] +default-route-f4ac8ac48e453189 bootstrap-e2e 10.148.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.132] +default-route-ff16dfda5f195c34 bootstrap-e2e 10.174.0.0/20 bootstrap-e2e 1000]
```
pinging /cc @kubernetes/sig-network-test-failures
The tests performed seem to all be related to ingress resources.
/milestone v1.16
/priority critical-urgent
/kind failing-test
/sig sig-network
/cc @Verolop @jimangel @soggiest @alenkacz
|
1.0
|
[Flaky Test] diffResources test failing in ci-kubernetes-e2e-gci-gce-ingress - <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
ci-kubernetes-e2e-gci-gce-ingress
**Which test(s) are failing**:
diffResources
**Since when has it been failing**:
Failing since 8/8 at around 3pm PDT.
**Testgrid link**:
https://testgrid.k8s.io/sig-release-master-blocking#gci-gce-ingress
**Reason for failure**:
```
W0808 13:10:21.128] 2019/08/08 13:10:21 main.go:316: Something went wrong: encountered 1 errors: [Error: 22 leaked resources
W0808 13:10:21.129] +default-route-0664c6589a381464 bootstrap-e2e 10.158.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-09a35afb44f61daa bootstrap-e2e 10.166.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-09fc8d89464e0aa7 bootstrap-e2e 10.132.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-13d934f6decdf352 bootstrap-e2e 10.156.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.129] +default-route-17f5feddac4a07f1 bootstrap-e2e 10.138.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-1ea2cf96c7f35a5b bootstrap-e2e 10.154.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-22434a99e33d7a0e bootstrap-e2e 10.152.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-3426799f9659ebd0 bootstrap-e2e 10.160.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-4ebbc87e89131421 bootstrap-e2e 10.146.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-51cf7756e6ff45ca bootstrap-e2e 10.142.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-7dd9ffc7b7ffc098 bootstrap-e2e 10.172.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.130] +default-route-a2e14e4af1d7cb0e bootstrap-e2e 10.140.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-a7208be7960d3a8f bootstrap-e2e 10.168.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-a801c0f409f309ca bootstrap-e2e 10.164.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-bba7e10001568990 bootstrap-e2e 10.128.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-c947e975f1bb7461 bootstrap-e2e 0.0.0.0/0 default-internet-gateway 1000
W0808 13:10:21.131] +default-route-ca01adfbd6f982d7 bootstrap-e2e 10.170.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-d186332f278768ed bootstrap-e2e 10.162.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.131] +default-route-d5fdb4b47ed5090f bootstrap-e2e 10.144.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.132] +default-route-eddc385808141165 bootstrap-e2e 10.150.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.132] +default-route-f4ac8ac48e453189 bootstrap-e2e 10.148.0.0/20 bootstrap-e2e 1000
W0808 13:10:21.132] +default-route-ff16dfda5f195c34 bootstrap-e2e 10.174.0.0/20 bootstrap-e2e 1000]
```
pinging /cc @kubernetes/sig-network-test-failures
The tests performed seem to all be related to ingress resources.
/milestone v1.16
/priority critical-urgent
/kind failing-test
/sig sig-network
/cc @Verolop @jimangel @soggiest @alenkacz
|
test
|
diffresources test failling in ci kubernetes gci gce ingress which jobs are failing ci kubernetes gci gce ingress which test s are failing diffresources since when has it been failing failing since at around pdt testgrid link reason for failure main go something went wrong encountered errors error leaked resources default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap default internet gateway default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap default route bootstrap bootstrap pinging cc kubernetes sig network test failures the tests performed seem to all be related to ingress resources milestone priority critical urgent kind failing test sig sig network cc verolop jimangel soggiest alenkacz
| 1
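The leaked-resources failure in the record above boils down to a before/after diff of cloud resource listings. A minimal sketch of that check (hypothetical helper name; kubetest's real implementation diffs `gcloud` output line by line):

```python
def diff_resources(before, after):
    """Compare resource listings taken before and after a test run.

    Returns (leaked, deleted): resources that appeared during the run
    and resources that disappeared during it.
    """
    before_set, after_set = set(before), set(after)
    return sorted(after_set - before_set), sorted(before_set - after_set)

before = ["default-internet-gateway"]
after = ["default-internet-gateway", "default-route-0664c6589a381464"]
print(diff_resources(before, after))
# (['default-route-0664c6589a381464'], [])
```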
|
88,604
| 17,615,062,698
|
IssuesEvent
|
2021-08-18 08:42:28
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[4] No "real" error page, always a "page not found"
|
No Code Attached Yet
|
### Steps to reproduce the issue
generate a 403 or php fatal issue in Joomla 4
### Expected result
403/Error page
### Actual result
"The requested page can't be found." (which is factually incorrect)
<img width="994" alt="Screenshot 2021-08-17 at 22 49 51" src="https://user-images.githubusercontent.com/400092/129805363-6e470de4-a250-4567-870a-4bff8cd4fa59.png">
### System information (as much as possible)
### Additional comments
|
1.0
|
[4] No "real" error page, always a "page not found" - ### Steps to reproduce the issue
generate a 403 or php fatal issue in Joomla 4
### Expected result
403/Error page
### Actual result
"The requested page can't be found." (which is factually incorrect)
<img width="994" alt="Screenshot 2021-08-17 at 22 49 51" src="https://user-images.githubusercontent.com/400092/129805363-6e470de4-a250-4567-870a-4bff8cd4fa59.png">
### System information (as much as possible)
### Additional comments
|
non_test
|
no real error page always a page not found steps to reproduce the issue generate a or php fatal issue in joomla expected result error page actual result the requested page can t be found which is factually incorrect img width alt screenshot at src system information as much as possible additional comments
| 0
|
498,169
| 14,402,361,842
|
IssuesEvent
|
2020-12-03 14:49:21
|
mlr-org/mlr3tuning
|
https://api.github.com/repos/mlr-org/mlr3tuning
|
opened
|
Avoid storing learner with TuneToken and search space in ObjectiveTuning
|
Priority: Medium
|
If a learner with `TuneToken` is supplied, a search space is generated in `TuningInstanceSingleCrit$initialize()`, `TuningInstanceMultiCrit$initialize()` and `AutoTuner$initialize()`. We decided to remove `TuneToken`s from learners before they are stored in `ObjectiveTuning`. They are not needed anymore because the information is stored in the search space. However, this is not possible if a `TuneToken` is stored in the values field of a parameter with the tag `required`. We should discuss a clean solution. Currently we store the learner with the `TuneToken` and remove them in `ObjectiveTuning`.
|
1.0
|
Avoid storing learner with TuneToken and search space in ObjectiveTuning - If a learner with `TuneToken` is supplied, a search space is generated in `TuningInstanceSingleCrit$initialize()`, `TuningInstanceMultiCrit$initialize()` and `AutoTuner$initialize()`. We decided to remove `TuneToken`s from learners before they are stored in `ObjectiveTuning`. They are not needed anymore because the information is stored in the search space. However, this is not possible if a `TuneToken` is stored in the values field of a parameter with the tag `required`. We should discuss a clean solution. Currently we store the learner with the `TuneToken` and remove them in `ObjectiveTuning`.
|
non_test
|
avoid storing learner with tunetoken and search space in objectivetuning if a learner with tunetoken is supplied a search space is generated in tuninginstancesinglecrit initialize tuninginstancemulticrit initialize and autotuner initialize we decided that we remove tunetoken s from learners before they are stored in objectivetuning they are not needed anymore because the information is stored in the search space however this is not possible if a tunetoken is stored in the values field of a parameter with the tag required we should discus a clean solution currently we store the learner with the tunetoken and remove them in objectivetuning
| 0
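The cleanup described in the record above, stripping tuning sentinels from a learner's parameter values before storage, can be sketched as follows (Python stand-ins for mlr3tuning's R objects; the `TuneToken` class here is hypothetical):

```python
class TuneToken:
    """Sentinel marking a parameter as 'to be tuned' (a stand-in for
    mlr3's to_tune() token)."""

def strip_tune_tokens(param_values):
    """Return the parameter values without TuneToken entries; the
    tuning information is assumed to live in the search space instead."""
    return {k: v for k, v in param_values.items()
            if not isinstance(v, TuneToken)}

values = {"cp": TuneToken(), "minsplit": 20}
print(strip_tune_tokens(values))  # {'minsplit': 20}
```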
|
74,820
| 7,446,189,605
|
IssuesEvent
|
2018-03-28 08:19:07
|
datahq/datahub-qa
|
https://api.github.com/repos/datahq/datahub-qa
|
closed
|
Search works unexpected with connective words (the, in, on, etc)
|
Severity: Minor Tested: Success
|
Connective words and articles make search results invalid :(
This is incorrect. I think we should exclude such words from the filter conditions.
## How to reproduce
#### http://datahub.io/search?q=Mauna+Loa
- 2 have 'Mauna Loa' in the title
- 1 have 'Mauna Loa' in the Readme
This is correct
#### http://datahub.io/search?q=the+Mauna+Loa
* only 1 dataset with 'the Mauna Loa' in the Readme
* NO datasets with 'Mauna Loa' in the title
Incorrect
#### http://datahub.io/search?q=gdp+in+uk
- `0` datasets
- No `gdp-uk` dataset, while it has words `GDP` and `UK` in the title and in the readme
## Expected behavior
- [x] connective words and articles should not be considered, while making search
- [x] search `gdp in uk` - should have at least 2 datasets
- [x] search `the gold` - should have at least 3 datasets
|
1.0
|
Search works unexpectedly with connective words (the, in, on, etc) - Connective words and articles make search results invalid :(
This is incorrect. I think we should exclude such words from the filter conditions.
## How to reproduce
#### http://datahub.io/search?q=Mauna+Loa
- 2 have 'Mauna Loa' in the title
- 1 have 'Mauna Loa' in the Readme
This is correct
#### http://datahub.io/search?q=the+Mauna+Loa
* only 1 dataset with 'the Mauna Loa' in the Readme
* NO datasets with 'Mauna Loa' in the title
Incorrect
#### http://datahub.io/search?q=gdp+in+uk
- `0` datasets
- No `gdp-uk` dataset, while it has words `GDP` and `UK` in the title and in the readme
## Expected behavior
- [x] connective words and articles should not be considered, while making search
- [x] search `gdp in uk` - should have at least 2 datasets
- [x] search `the gold` - should have at least 3 datasets
|
test
|
search works unexpected with connective words the in on etc connective words and articles words make search results invalid this is incorrect i think we should exclude such words from the filter conditions how to reproduce have mauna loa in the title have mauna loa in the readme this is correct only dataset with the mauna loa in the readme no datasets with mauna loa in the title incorrect datasets no gdp uk dataset while it has words gdp and uk in the title and in the readme expected behavior connective words and articles should not be considered while making search search gdp in uk should have at least datasets search the gold shold have at least datasets
| 1
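The fix the record above asks for is standard stopword filtering at query time. A minimal sketch, assuming a small hand-picked stopword list rather than whatever list datahub.io actually uses:

```python
STOPWORDS = {"the", "a", "an", "in", "on", "of", "and", "to"}

def normalize_query(query):
    """Drop connective words so that 'gdp in uk' matches the same
    documents as 'gdp uk'."""
    return [t for t in query.lower().split() if t not in STOPWORDS]

print(normalize_query("the Mauna Loa"))  # ['mauna', 'loa']
print(normalize_query("gdp in uk"))      # ['gdp', 'uk']
```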
|
98,447
| 8,677,510,990
|
IssuesEvent
|
2018-11-30 16:58:54
|
SME-Issues/issues
|
https://api.github.com/repos/SME-Issues/issues
|
closed
|
Test Summary - 30/11/2018 - 5004
|
NLP Api pulse_tests
|
### Intent
- **Intent Errors: 3** (#1479)
### Canonical
- Query Invoice Tests Canonical (250): **91%** pass (212), 20 failed understood (#1477)
### Comprehension
- Query Invoice Tests Comprehension Partial (22): **42%** pass (8), 11 failed understood (#1478)
|
1.0
|
Test Summary - 30/11/2018 - 5004 - ### Intent
- **Intent Errors: 3** (#1479)
### Canonical
- Query Invoice Tests Canonical (250): **91%** pass (212), 20 failed understood (#1477)
### Comprehension
- Query Invoice Tests Comprehension Partial (22): **42%** pass (8), 11 failed understood (#1478)
|
test
|
test summary intent intent errors canonical query invoice tests canonical pass failed understood comprehension query invoice tests comprehension partial pass failed understood
| 1
|
35,300
| 17,019,788,201
|
IssuesEvent
|
2021-07-02 17:01:07
|
LiveSplit/LiveSplitOne
|
https://api.github.com/repos/LiveSplit/LiveSplitOne
|
closed
|
Inline main JS bundle into HTML
|
enhancement performance suitable for contributions
|
We should look into a way to inline the main JavaScript bundle into the HTML. If you look at the request waterfall, you can see that we first request the HTML as usual, then the bundle.js and then that one requests all the other resources in parallel. The first chunk however is very minimal, so the browser shouldn't need to request that manually and instead all that can already be sent as part of the HTML. That should cut a lot of latency.

This will need to be some sort of webpack plugin. A quick search revealed this one so far: https://github.com/facebook/create-react-app/blob/edc671eeea6b7d26ac3f1eb2050e50f75cf9ad5d/packages/react-dev-utils/InlineChunkHtmlPlugin.js#L10
|
True
|
Inline main JS bundle into HTML - We should look into a way to inline the main JavaScript bundle into the HTML. If you look at the request waterfall, you can see that we first request the HTML as usual, then the bundle.js and then that one requests all the other resources in parallel. The first chunk however is very minimal, so the browser shouldn't need to request that manually and instead all that can already be sent as part of the HTML. That should cut a lot of latency.

This will need to be some sort of webpack plugin. A quick search revealed this one so far: https://github.com/facebook/create-react-app/blob/edc671eeea6b7d26ac3f1eb2050e50f75cf9ad5d/packages/react-dev-utils/InlineChunkHtmlPlugin.js#L10
|
non_test
|
inline main js bundle into html we should look into a way to inline the main javascript bundle into the html if you look at the request waterfall you can see that we first request the html as usual then the bundle js and then that one requests all the other resources in parallel the first chunk however is very minimal so the browser shouldn t need to request that manually and instead all that can already be sent as part of the html that should cut a lot of latency this will need to be some sort of webpack plugin a quick search revealed this one so far
| 0
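The inlining that the record above wants from a webpack plugin can be illustrated as a plain HTML transform: find the `<script src>` tag for the bundle and replace it with an inline `<script>` tag carrying the bundle source. A minimal sketch (not the actual InlineChunkHtmlPlugin logic):

```python
import re

def inline_script(html, bundle_name, bundle_source):
    """Replace the <script src="..."> tag referencing `bundle_name`
    with an inline <script> tag containing the bundle source, saving
    the browser one round trip for the first chunk."""
    tag = re.compile(r'<script[^>]*src="%s"[^>]*></script>'
                     % re.escape(bundle_name))
    return tag.sub(lambda _m: "<script>" + bundle_source + "</script>", html)

html = '<html><body><script src="bundle.js"></script></body></html>'
print(inline_script(html, "bundle.js", "console.log(1);"))
```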
|
264,151
| 8,305,870,794
|
IssuesEvent
|
2018-09-22 12:29:25
|
Bro-Time/Bro-Time-Server
|
https://api.github.com/repos/Bro-Time/Bro-Time-Server
|
closed
|
BroBit Dropping in-chat
|
priority: medium
|
Every 4-7 minutes, 1-3 BroBits will drop in the chat and users should be able to pick them up using !pick, or other commands (will be listed below). This system will only be put in #hangout.
# Functionality:
- Randomize # of minutes after a message is sent
15% - 4 minutes after a message is sent
25% - 5 minutes after a message is sent
35% - 6 minutes after a message is sent
25% - 7 minutes after a message is sent
- Randomize # of BroBits dropped
50% - 1 BroBit dropped
30% - 2 BroBits dropped
20% - 3 BroBits dropped
- If a timer is already running, the timer is not reset
- Once a BroBit is claimed, the embed is deleted (along with who claimed it)
- When a message is sent, a timer is set with a random time (times shown above). When the timer runs out, the bot will make sure at least two people are typing and if they are typing, it will put the BroBits out to claim. If two people are not typing at the same time, the bot waits until two people are typing and then posts the BroBits.
# How to prevent abuse of this system
- One user cannot claim BroBits 2x in a row.
- Have many commands for this system. Each command has an equal chance of being picked to be used for this system.
!pick, !grab, !take, !steal, !mine, !snatch, !pull, !select, !choose, !get
- If a user is caught spamming any of these commands, the bot will ignore them for 7 days.
- Admins and mods will be watching the chat
|
1.0
|
BroBit Dropping in-chat - Every 4-7 minutes, 1-3 BroBits will drop in the chat and users should be able to pick them up using !pick, or other commands (will be listed below). This system will only be put in #hangout.
# Functionality:
- Randomize # of minutes after a message is sent
15% - 4 minutes after a message is sent
25% - 5 minutes after a message is sent
35% - 6 minutes after a message is sent
25% - 7 minutes after a message is sent
- Randomize # of BroBits dropped
50% - 1 BroBit dropped
30% - 2 BroBits dropped
20% - 3 BroBits dropped
- If a timer is already running, the timer is not reset
- Once a BroBit is claimed, the embed is deleted (along with who claimed it)
- When a message is sent, a timer is set with a random time (times shown above). When the timer runs out, the bot will make sure at least two people are typing and if they are typing, it will put the BroBits out to claim. If two people are not typing at the same time, the bot waits until two people are typing and then posts the BroBits.
# How to prevent abuse of this system
- One user cannot claim BroBits 2x in a row.
- Have many commands for this system. Each command has an equal chance of being picked to be used for this system.
!pick, !grab, !take, !steal, !mine, !snatch, !pull, !select, !choose, !get
- If a user is caught spamming any of these commands, the bot will ignore them for 7 days.
- Admins and mods will be watching the chat
|
non_test
|
brobit dropping in chat every minutes brobits will drop in the chat and users should be able to pick them up using pick or other commands will be listed below this system will only be put in hangout functionality randomize of minutes after a message is sent minutes after a message is sent minutes after a message is sent minutes after a message is sent minutes after a message is sent randomize of brobits dropped brobit dropped brobits dropped brobits dropped if a timer is already running a timer is not reset once a brobit is claimed the embed is deleted along with who claimed it when a message is sent a timer is set with a random time times showed above when the timer runs out the bot will make sure at least two people are typing and if they are typing it will put the brobits out to claim if two people are not typing at the same time the bot waits until two people are typing and then posts the brobits how to prevent abuse of this system one user cannot claim brobits in a row have many commands for this system each command has an equal chance of being picked to be used for this system pick grab take steal mine snatch pull select choose get if a user is caught spamming any of these commands the bot will ignore them for days admins and mods will be watching the chat
| 0
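The drop timing and drop counts in the record above are weighted discrete distributions, which map directly onto `random.choices`. A minimal sketch, assuming the percentage tables from the issue:

```python
import random

# Probability tables from the issue (percent weights).
DELAY_WEIGHTS = {4: 15, 5: 25, 6: 35, 7: 25}   # minutes after a message
DROP_WEIGHTS = {1: 50, 2: 30, 3: 20}           # BroBits dropped

def weighted_pick(table, rng=random):
    """Pick a key from `table` with probability proportional to its weight."""
    values, weights = zip(*table.items())
    return rng.choices(values, weights=weights, k=1)[0]

delay_minutes = weighted_pick(DELAY_WEIGHTS)
drop_count = weighted_pick(DROP_WEIGHTS)
print(delay_minutes, drop_count)
```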
|
314,576
| 27,012,059,758
|
IssuesEvent
|
2023-02-10 16:08:39
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix raw_ops.test_tensorflow_Sigmoid
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_raw_ops.py::test_tensorflow_Sigmoid[cpu-ivy.functional.backends.torch-False-False]</summary>
2023-01-26T04:48:53.3994259Z E RuntimeError: "sigmoid_cpu" not implemented for 'Half'
2023-01-26T04:48:53.4002329Z E ivy.exceptions.IvyBackendException: torch: sigmoid: "sigmoid_cpu" not implemented for 'Half'
2023-01-26T04:48:53.4003122Z E Falsifying example: test_tensorflow_Sigmoid(
2023-01-26T04:48:53.4003778Z E dtype_and_x=(['float16'], [array([-1.], dtype=float16)]),
2023-01-26T04:48:53.4004332Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-26T04:48:53.4004946Z E fn_tree='ivy.functional.frontends.tensorflow.raw_ops.Sigmoid',
2023-01-26T04:48:53.4005544Z E frontend='tensorflow',
2023-01-26T04:48:53.4005848Z E on_device='cpu',
2023-01-26T04:48:53.4006090Z E )
2023-01-26T04:48:53.4006369Z E
2023-01-26T04:48:53.4007315Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2ZkAAMoBaEBAHoABw==') as a decorator on your test case
</details>
|
1.0
|
Fix raw_ops.test_tensorflow_Sigmoid - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4012329973/jobs/6890687587" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_raw_ops.py::test_tensorflow_Sigmoid[cpu-ivy.functional.backends.torch-False-False]</summary>
2023-01-26T04:48:53.3994259Z E RuntimeError: "sigmoid_cpu" not implemented for 'Half'
2023-01-26T04:48:53.4002329Z E ivy.exceptions.IvyBackendException: torch: sigmoid: "sigmoid_cpu" not implemented for 'Half'
2023-01-26T04:48:53.4003122Z E Falsifying example: test_tensorflow_Sigmoid(
2023-01-26T04:48:53.4003778Z E dtype_and_x=(['float16'], [array([-1.], dtype=float16)]),
2023-01-26T04:48:53.4004332Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-26T04:48:53.4004946Z E fn_tree='ivy.functional.frontends.tensorflow.raw_ops.Sigmoid',
2023-01-26T04:48:53.4005544Z E frontend='tensorflow',
2023-01-26T04:48:53.4005848Z E on_device='cpu',
2023-01-26T04:48:53.4006090Z E )
2023-01-26T04:48:53.4006369Z E
2023-01-26T04:48:53.4007315Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2ZkAAMoBaEBAHoABw==') as a decorator on your test case
</details>
|
test
|
fix raw ops test tensorflow sigmoid tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test tensorflow test raw ops py test tensorflow sigmoid e runtimeerror sigmoid cpu not implemented for half e ivy exceptions ivybackendexception torch sigmoid sigmoid cpu not implemented for half e falsifying example test tensorflow sigmoid e dtype and x dtype e test flags num positional args with out false inplace false native arrays as variable e fn tree ivy functional frontends tensorflow raw ops sigmoid e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case
| 1
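The failure in the record above is a missing float16 kernel, and the conventional frontend workaround is to upcast, compute, and downcast. A pure-Python sketch of the sigmoid being evaluated at full precision (illustrative only; the real fix lives in ivy's torch backend):

```python
import math

def sigmoid(x):
    # Evaluate in double precision; a float16 input would be upcast
    # before this point and the result cast back afterwards -- the
    # standard workaround when a backend kernel has no 'Half' variant.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)  # separate branch avoids overflow for large |x|
    return e / (1.0 + e)

print(sigmoid(-1.0))  # ≈ 0.26894
```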
|
44,735
| 5,642,133,135
|
IssuesEvent
|
2017-04-06 20:24:29
|
camile024/tsam
|
https://api.github.com/repos/camile024/tsam
|
closed
|
File removal - testing
|
Fixed (if closed)/Fixed in next update (if open) needs testing
|
Old file removal needs testing if works properly (the system call in main.c)
|
1.0
|
File removal - testing - Old file removal needs testing if works properly (the system call in main.c)
|
test
|
file removal testing old file removal needs testing if works properly the system call in main c
| 1
|
640,998
| 20,814,537,327
|
IssuesEvent
|
2022-03-18 08:46:51
|
ASE-Projekte-WS-2021/ase-ws-21-unser-horsaal
|
https://api.github.com/repos/ASE-Projekte-WS-2021/ase-ws-21-unser-horsaal
|
closed
|
(PROFILE) Change profile data
|
Medium Priority
|
- [x] As a student, I want to be able to change my email, password, and username in my profile, so that I can adapt the profile to my needs.
|
1.0
|
(PROFILE) Change profile data - - [x] As a student, I want to be able to change my email, password, and username in my profile, so that I can adapt the profile to my needs.
|
non_test
|
profil profildaten ändern als student in möchte ich im profil meine email passwort und nutzername ändern können um das profil meinen bedürfnissen anpassen zu können
| 0
|
93,399
| 15,886,055,909
|
IssuesEvent
|
2021-04-09 21:45:42
|
garymsegal-ws-org/dev-example-places
|
https://api.github.com/repos/garymsegal-ws-org/dev-example-places
|
opened
|
CVE-2021-25122 (High) detected in tomcat-embed-core-9.0.35.jar, tomcat-embed-core-9.0.36.jar
|
security vulnerability
|
## CVE-2021-25122 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-9.0.35.jar</b>, <b>tomcat-embed-core-9.0.36.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-9.0.35.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: dev-example-places/api/r2dbc/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.35/tomcat-embed-core-9.0.35.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.0.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.0.RELEASE.jar
- :x: **tomcat-embed-core-9.0.35.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.36.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: dev-example-places/api/jdbc/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.36/tomcat-embed-core-9.0.36.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.1.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.1.RELEASE.jar
- :x: **tomcat-embed-core-9.0.36.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/garymsegal-ws-org/dev-example-places/commit/14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f">14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A's request.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122>CVE-2021-25122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.35","packageFilePaths":["/api/r2dbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.0.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.0.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.36","packageFilePaths":["/api/jdbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.1.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.1.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.36","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-25122","vulnerabilityDetails":"When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A\u0027s request.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-25122 (High) detected in tomcat-embed-core-9.0.35.jar, tomcat-embed-core-9.0.36.jar - ## CVE-2021-25122 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-9.0.35.jar</b>, <b>tomcat-embed-core-9.0.36.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-9.0.35.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: dev-example-places/api/r2dbc/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.35/tomcat-embed-core-9.0.35.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.0.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.0.RELEASE.jar
- :x: **tomcat-embed-core-9.0.35.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.36.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: dev-example-places/api/jdbc/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.36/tomcat-embed-core-9.0.36.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.1.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.1.RELEASE.jar
- :x: **tomcat-embed-core-9.0.36.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/garymsegal-ws-org/dev-example-places/commit/14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f">14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A's request.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122>CVE-2021-25122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.35","packageFilePaths":["/api/r2dbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.0.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.0.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.36","packageFilePaths":["/api/jdbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.1.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.1.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.36","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-25122","vulnerabilityDetails":"When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A\u0027s request.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in tomcat embed core jar tomcat embed core jar cve high severity vulnerability vulnerable libraries tomcat embed core jar tomcat embed core jar tomcat embed core jar core tomcat implementation library home page a href path to dependency file dev example places api pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file dev example places api jdbc pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details when responding to new connection requests apache tomcat versions to to and to could duplicate request headers and a limited amount of request body from one request to another meaning user a and user b could both see the results of user a s request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote packagetype java groupid org apache tomcat embed packagename tomcat embed core packageversion packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote basebranches vulnerabilityidentifier cve vulnerabilitydetails when responding to new connection requests apache tomcat versions to to and to could duplicate request headers and a limited amount of request body from one request to another meaning user a and user b could both see the results of user a request vulnerabilityurl
| 0
|
63,282
| 15,553,537,784
|
IssuesEvent
|
2021-03-16 01:41:21
|
JacobUsgaard/FlameSymbol
|
https://api.github.com/repos/JacobUsgaard/FlameSymbol
|
opened
|
Add Release github action
|
build
|
When the time comes, there should be a github actions for when a release is created
|
1.0
|
Add Release github action - When the time comes, there should be a github actions for when a release is created
|
non_test
|
add release github action when the time comes there should be a github actions for when a release is created
| 0
|
184,079
| 14,270,127,182
|
IssuesEvent
|
2020-11-21 05:07:26
|
adamconnelly/Thrift.Net
|
https://api.github.com/repos/adamconnelly/Thrift.Net
|
closed
|
Testing and code coverage strategy
|
component/testing
|
Having a well thought out and really solid automated testing strategy is crucial for the project to allow us to grow and accept contributions from more contributors, as well as being able to rapidly release changes as soon as they are made. At the moment we've got unit tests running and code coverage results published to Azure Pipelines, but there's a few things that are worth improving:
- [x] Add documentation about how our tests are organised, and what we're aiming for with code coverage and why it matters.
- [x] Tweak the coverage settings to remove stuff like the Antlr generated code from analysis. We don't control it, and we can't really do much about the coverage since we don't necessarily use all the functionality generated.
- [x] Investigate adding a coverage threshold to our builds so we get a failure if the test coverage drops before a certain level.
|
1.0
|
Testing and code coverage strategy - Having a well thought out and really solid automated testing strategy is crucial for the project to allow us to grow and accept contributions from more contributors, as well as being able to rapidly release changes as soon as they are made. At the moment we've got unit tests running and code coverage results published to Azure Pipelines, but there's a few things that are worth improving:
- [x] Add documentation about how our tests are organised, and what we're aiming for with code coverage and why it matters.
- [x] Tweak the coverage settings to remove stuff like the Antlr generated code from analysis. We don't control it, and we can't really do much about the coverage since we don't necessarily use all the functionality generated.
- [x] Investigate adding a coverage threshold to our builds so we get a failure if the test coverage drops before a certain level.
|
test
|
testing and code coverage strategy having a well thought out and really solid automated testing strategy is crucial for the project to allow us to grow and accept contributions from more contributors as well as being able to rapidly release changes as soon as they are made at the moment we ve got unit tests running and code coverage results published to azure pipelines but there s a few things that are worth improving add documentation about how our tests are organised and what we re aiming for with code coverage and why it matters tweak the coverage settings to remove stuff like the antlr generated code from analysis we don t control it and we can t really do much about the coverage since we don t necessarily use all the functionality generated investigate adding a coverage threshold to our builds so we get a failure if the test coverage drops before a certain level
| 1
|
50,468
| 21,111,618,108
|
IssuesEvent
|
2022-04-05 02:43:12
|
dotnet/fsharp
|
https://api.github.com/repos/dotnet/fsharp
|
closed
|
Something is referring to 3 strings in the FSharpPackage that cannot be found
|
Area-LangService Area-Setup Feature Improvement
|
My ActivityLog states the following:
``` XML
-<entry>
<record>622</record>
<time>2019/02/19 00:11:32.518</time>
<type>Warning</type>
<source>VisualStudio</source>
<description>Performance warning: String load failed. Pkg:{871D2A70-12A2-4E42-9440-425DD92A4116} (FSharpPackage) LANG:0409 ID:6000 </description>
</entry>
-<entry>
<record>623</record>
<time>2019/02/19 00:11:32.520</time>
<type>Warning</type>
<source>VisualStudio</source>
<description>Performance warning: String load failed. Pkg:{871D2A70-12A2-4E42-9440-425DD92A4116} (FSharpPackage) LANG:0409 ID:6001 </description>
</entry>
-<entry>
<record>625</record>
<time>2019/02/19 00:11:32.850</time>
<type>Warning</type>
<source>VisualStudio</source>
<description>Performance warning: String load failed. Pkg:{871D2A70-12A2-4E42-9440-425DD92A4116} (FSharpPackage) LANG:0409 ID:100 </description>
</entry>
```
|
1.0
|
Something is referring to 3 strings in the FSharpPackage that cannot be found - My ActivityLog states the following:
``` XML
-<entry>
<record>622</record>
<time>2019/02/19 00:11:32.518</time>
<type>Warning</type>
<source>VisualStudio</source>
<description>Performance warning: String load failed. Pkg:{871D2A70-12A2-4E42-9440-425DD92A4116} (FSharpPackage) LANG:0409 ID:6000 </description>
</entry>
-<entry>
<record>623</record>
<time>2019/02/19 00:11:32.520</time>
<type>Warning</type>
<source>VisualStudio</source>
<description>Performance warning: String load failed. Pkg:{871D2A70-12A2-4E42-9440-425DD92A4116} (FSharpPackage) LANG:0409 ID:6001 </description>
</entry>
-<entry>
<record>625</record>
<time>2019/02/19 00:11:32.850</time>
<type>Warning</type>
<source>VisualStudio</source>
<description>Performance warning: String load failed. Pkg:{871D2A70-12A2-4E42-9440-425DD92A4116} (FSharpPackage) LANG:0409 ID:100 </description>
</entry>
```
|
non_test
|
something is referring to strings in the fsharppackage that cannot be found my activitylog states the following xml warning visualstudio performance warning string load failed pkg fsharppackage lang id warning visualstudio performance warning string load failed pkg fsharppackage lang id warning visualstudio performance warning string load failed pkg fsharppackage lang id
| 0
|
248,224
| 21,003,413,753
|
IssuesEvent
|
2022-03-29 19:48:29
|
pulp/pulpcore
|
https://api.github.com/repos/pulp/pulpcore
|
closed
|
Test task child/parent tracking
|
Tests Finished?
|
Author: @dralley (dalley)
Redmine Issue: 6431, https://pulp.plan.io/issues/6431
---
Task parentage functionality was added without tests, because it is difficult to test via the standard means. Task parent/child relationships can only be set up through the plugin API, so the only way to test this would be:
* manually set them up in the database
* in the "using plugin" section of the functional tests, run tests against a plugin that uses task groups
* bmbouter suggests:
> One idea I had for the automated tests is that we should have some way for them to load additional viewsets and tasks in the RQ registry if there is a PULP_TEST env var set or something. Something that keeps them unloaded on production systems and the tests automatically skip if they aren't loaded. Just an idea I had.
We need to test that task parent/child relationships show up appropriately in the serializer.
Since task parentage is only exposed through the
|
1.0
|
Test task child/parent tracking - Author: @dralley (dalley)
Redmine Issue: 6431, https://pulp.plan.io/issues/6431
---
Task parentage functionality was added without tests, because it is difficult to test via the standard means. Task parent/child relationships can only be set up through the plugin API, so the only way to test this would be:
* manually set them up in the database
* in the "using plugin" section of the functional tests, run tests against a plugin that uses task groups
* bmbouter suggests:
> One idea I had for the automated tests is that we should have some way for them to load additional viewsets and tasks in the RQ registry if there is a PULP_TEST env var set or something. Something that keeps them unloaded on production systems and the tests automatically skip if they aren't loaded. Just an idea I had.
We need to test that task parent/child relationships show up appropriately in the serializer.
Since task parentage is only exposed through the
|
test
|
test task child parent tracking author dralley dalley redmine issue task parentage functionality was added without tests because it is difficult to test via the standard means task parent child relationships can only be set up through the plugin api so the only way to test this would be manually set them up in the database in the using plugin section of the functional tests run tests against a plugin that uses task groups bmbouter suggests one idea i had for the automated tests is that we should have some way for them to load additional viewsets and tasks in the rq registry if there is a pulp test env var set or something something that keeps them unloaded on production systems and the tests automatically skip if they aren t loaded just an idea i had we need to test that task parent child relationships show up appropriately in the serializer since task parentage is only exposed through the
| 1
|
103,672
| 8,925,772,015
|
IssuesEvent
|
2019-01-22 00:37:53
|
linkerd/linkerd2
|
https://api.github.com/repos/linkerd/linkerd2
|
closed
|
Proxy: Intermittent inbound_tcp test failure
|
area/proxy area/test priority/P2 wontfix
|
```
running 5 tests
test outbound_times_out ... ignored
server h1 error: invalid HTTP version specified
ERROR:conduit_proxy::map_err: turning service error into 500: Inner(Upstream(Inner(Inner(Inner(Error { kind: Inner(Error { kind: Proto(FRAME_SIZE_ERROR) }) })))))
test outbound_uses_orig_dst_if_not_local_svc ... ok
test outbound_reconnects_if_controller_stream_ends ... ok
test outbound_asks_controller_api ... ok
test outbound_updates_newer_services ... ok
test result: ok. 4 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out
Running target/debug/deps/telemetry-aa22a33a1ddc7e5f
running 5 tests
test records_latency_statistics ... ignored
test telemetry_report_errors_are_ignored ... ok
test inbound_aggregates_telemetry_over_several_requests ... ok
test http1_inbound_sends_telemetry ... ok
test inbound_sends_telemetry ... ok
test result: ok. 4 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out
Running target/debug/deps/transparency-ea2372ecb1a50674
running 12 tests
thread 'support server' panicked at 'assertion failed: `(left == right)`
left: `[]`,
right: `[99, 117, 115, 116, 111, 109, 32, 116, 99, 112, 32, 104, 101, 108, 108, 111]`', proxy/tests/transparency.rs:208:13
test http1_connect_not_supported ... ok
test inbound_tcp ... FAILED
ERROR:conduit_proxy::map_err: turning service error into 500: Inner(Upstream(Inner(Inner(Error { kind: Inner(Error { kind: Proto(INTERNAL_ERROR) }) }))))
test outbound_tcp ... ok
test http11_upgrade_not_supported ... ok
test tcp_with_no_orig_dst ... ok
test http10_with_host ... ok
test http11_absolute_uri_differs_from_host ... ok
test inbound_http1 ... ok
test http1_get_doesnt_add_transfer_encoding ... ok
test http10_without_host ... ok
test http1_removes_connection_headers ... ok
test outbound_http1 ... ok
failures:
---- inbound_tcp stdout ----
thread 'inbound_tcp' panicked at 'read: Error { repr: Os { code: 104, message: "Connection reset by peer" } }', /checkout/src/libcore/result.rs:916:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
failures:
inbound_tcp
test result: FAILED. 11 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
```
|
1.0
|
Proxy: Intermittent inbound_tcp test failure - ```
running 5 tests
test outbound_times_out ... ignored
server h1 error: invalid HTTP version specified
ERROR:conduit_proxy::map_err: turning service error into 500: Inner(Upstream(Inner(Inner(Inner(Error { kind: Inner(Error { kind: Proto(FRAME_SIZE_ERROR) }) })))))
test outbound_uses_orig_dst_if_not_local_svc ... ok
test outbound_reconnects_if_controller_stream_ends ... ok
test outbound_asks_controller_api ... ok
test outbound_updates_newer_services ... ok
test result: ok. 4 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out
Running target/debug/deps/telemetry-aa22a33a1ddc7e5f
running 5 tests
test records_latency_statistics ... ignored
test telemetry_report_errors_are_ignored ... ok
test inbound_aggregates_telemetry_over_several_requests ... ok
test http1_inbound_sends_telemetry ... ok
test inbound_sends_telemetry ... ok
test result: ok. 4 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out
Running target/debug/deps/transparency-ea2372ecb1a50674
running 12 tests
thread 'support server' panicked at 'assertion failed: `(left == right)`
left: `[]`,
right: `[99, 117, 115, 116, 111, 109, 32, 116, 99, 112, 32, 104, 101, 108, 108, 111]`', proxy/tests/transparency.rs:208:13
test http1_connect_not_supported ... ok
test inbound_tcp ... FAILED
ERROR:conduit_proxy::map_err: turning service error into 500: Inner(Upstream(Inner(Inner(Error { kind: Inner(Error { kind: Proto(INTERNAL_ERROR) }) }))))
test outbound_tcp ... ok
test http11_upgrade_not_supported ... ok
test tcp_with_no_orig_dst ... ok
test http10_with_host ... ok
test http11_absolute_uri_differs_from_host ... ok
test inbound_http1 ... ok
test http1_get_doesnt_add_transfer_encoding ... ok
test http10_without_host ... ok
test http1_removes_connection_headers ... ok
test outbound_http1 ... ok
failures:
---- inbound_tcp stdout ----
thread 'inbound_tcp' panicked at 'read: Error { repr: Os { code: 104, message: "Connection reset by peer" } }', /checkout/src/libcore/result.rs:916:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
failures:
inbound_tcp
test result: FAILED. 11 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
```
|
test
|
proxy intermittent inbound tcp test failure running tests test outbound times out ignored server error invalid http version specified error conduit proxy map err turning service error into inner upstream inner inner inner error kind inner error kind proto frame size error test outbound uses orig dst if not local svc ok test outbound reconnects if controller stream ends ok test outbound asks controller api ok test outbound updates newer services ok test result ok passed failed ignored measured filtered out running target debug deps telemetry running tests test records latency statistics ignored test telemetry report errors are ignored ok test inbound aggregates telemetry over several requests ok test inbound sends telemetry ok test inbound sends telemetry ok test result ok passed failed ignored measured filtered out running target debug deps transparency running tests thread support server panicked at assertion failed left right left right proxy tests transparency rs test connect not supported ok test inbound tcp failed error conduit proxy map err turning service error into inner upstream inner inner error kind inner error kind proto internal error test outbound tcp ok test upgrade not supported ok test tcp with no orig dst ok test with host ok test absolute uri differs from host ok test inbound ok test get doesnt add transfer encoding ok test without host ok test removes connection headers ok test outbound ok failures inbound tcp stdout thread inbound tcp panicked at read error repr os code message connection reset by peer checkout src libcore result rs note run with rust backtrace for a backtrace failures inbound tcp test result failed passed failed ignored measured filtered out
| 1
|
11,860
| 18,276,674,529
|
IssuesEvent
|
2021-10-04 19:43:28
|
CMPUT301F21T09/BudgetProjectName
|
https://api.github.com/repos/CMPUT301F21T09/BudgetProjectName
|
closed
|
RE-HE #10: User should be able to make a habit event
|
user requirement
|
:exclamation: This requirement is based off of a user story that requires more investigation with the client
A user should be able to make a habit event as a marker for completing a specific habit they own on a given day.
A habit event should only be able to be created on today (confirm with client if creating habit events on past days should be possible).
A habit event should be associated with only one habit and can only be created by the owner of that habit.
Habit events should be synced with the online database.
### See User Requirements:
US 02.01.01
|
1.0
|
RE-HE #10: User should be able to make a habit event - :exclamation: This requirement is based off of a user story that requires more investigation with the client
A user should be able to make a habit event as a marker for completing a specific habit they own on a given day.
A habit event should only be able to be created on today (confirm with client if creating habit events on past days should be possible).
A habit event should be associated with only one habit and can only be created by the owner of that habit.
Habit events should be synced with the online database.
### See User Requirements:
US 02.01.01
|
non_test
|
re he user should be able to make a habit event exclamation this requirement is based off of a user story that requires more investigation with the client a user should be able to make a habit event as a marker for completing a specific habit they own on a given day a habit event should only be able to be created on today confirm with client if creating habit events on past days should be possible a habit event should be associated with only one habit and can only be created by the owner of that habit habit events should be synced with the online database see user requirements us
| 0
|
164,602
| 12,809,119,461
|
IssuesEvent
|
2020-07-03 14:57:50
|
aliasrobotics/RVD
|
https://api.github.com/repos/aliasrobotics/RVD
|
closed
|
RVD#3116: CWE-134 (format), If format strings can be influenced by an attacker, they can be exploi... @ vers/boards/tap-v1/sdio.c:80
|
CWE-134 bug components software flawfinder flawfinder_level_4 mitigated robot component: PX4 static analysis testing triage version: v1.8.0
|
```yaml
id: 3116
title: 'RVD#3116: CWE-134 (format), If format strings can be influenced by an attacker,
they can be exploi... @ vers/boards/tap-v1/sdio.c:80'
type: bug
description: If format strings can be influenced by an attacker, they can be exploited
(CWE-134). Use a constant for the format specification. . Happening @ ...vers/boards/tap-v1/sdio.c:80
cwe:
- CWE-134
cve: None
keywords:
- flawfinder
- flawfinder_level_4
- static analysis
- testing
- triage
- CWE-134
- bug
- 'version: v1.8.0'
- 'robot component: PX4'
- components software
system: ./Firmware/src/drivers/boards/tap-v1/sdio.c:80:21
vendor: null
severity:
rvss-score: 0
rvss-vector: ''
severity-description: ''
cvss-score: 0
cvss-vector: ''
links:
- https://github.com/aliasrobotics/RVD/issues/3116
flaw:
phase: testing
specificity: subject-specific
architectural-location: application-specific
application: N/A
subsystem: N/A
package: N/A
languages: None
date-detected: 2020-06-29 (16:29)
detected-by: Alias Robotics
detected-by-method: testing static
date-reported: 2020-06-29 (16:29)
reported-by: Alias Robotics
reported-by-relationship: automatic
issue: https://github.com/aliasrobotics/RVD/issues/3116
reproducibility: always
trace: '(context) # define message printf'
reproduction: See artifacts below (if available)
reproduction-image: gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_px4/-/jobs/615986299/artifacts/download
exploitation:
description: ''
exploitation-image: ''
exploitation-vector: ''
exploitation-recipe: ''
mitigation:
description: Use a constant for the format specification
pull-request: ''
date-mitigation: ''
```
|
1.0
|
RVD#3116: CWE-134 (format), If format strings can be influenced by an attacker, they can be exploi... @ vers/boards/tap-v1/sdio.c:80 - ```yaml
id: 3116
title: 'RVD#3116: CWE-134 (format), If format strings can be influenced by an attacker,
they can be exploi... @ vers/boards/tap-v1/sdio.c:80'
type: bug
description: If format strings can be influenced by an attacker, they can be exploited
(CWE-134). Use a constant for the format specification. . Happening @ ...vers/boards/tap-v1/sdio.c:80
cwe:
- CWE-134
cve: None
keywords:
- flawfinder
- flawfinder_level_4
- static analysis
- testing
- triage
- CWE-134
- bug
- 'version: v1.8.0'
- 'robot component: PX4'
- components software
system: ./Firmware/src/drivers/boards/tap-v1/sdio.c:80:21
vendor: null
severity:
rvss-score: 0
rvss-vector: ''
severity-description: ''
cvss-score: 0
cvss-vector: ''
links:
- https://github.com/aliasrobotics/RVD/issues/3116
flaw:
phase: testing
specificity: subject-specific
architectural-location: application-specific
application: N/A
subsystem: N/A
package: N/A
languages: None
date-detected: 2020-06-29 (16:29)
detected-by: Alias Robotics
detected-by-method: testing static
date-reported: 2020-06-29 (16:29)
reported-by: Alias Robotics
reported-by-relationship: automatic
issue: https://github.com/aliasrobotics/RVD/issues/3116
reproducibility: always
trace: '(context) # define message printf'
reproduction: See artifacts below (if available)
reproduction-image: gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_px4/-/jobs/615986299/artifacts/download
exploitation:
description: ''
exploitation-image: ''
exploitation-vector: ''
exploitation-recipe: ''
mitigation:
description: Use a constant for the format specification
pull-request: ''
date-mitigation: ''
```
|
test
|
rvd cwe format if format strings can be influenced by an attacker they can be exploi vers boards tap sdio c yaml id title rvd cwe format if format strings can be influenced by an attacker they can be exploi vers boards tap sdio c type bug description if format strings can be influenced by an attacker they can be exploited cwe use a constant for the format specification happening vers boards tap sdio c cwe cwe cve none keywords flawfinder flawfinder level static analysis testing triage cwe bug version robot component components software system firmware src drivers boards tap sdio c vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity subject specific architectural location application specific application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace context define message printf reproduction see artifacts below if available reproduction image gitlab com aliasrobotics offensive alurity pipelines active pipeline jobs artifacts download exploitation description exploitation image exploitation vector exploitation recipe mitigation description use a constant for the format specification pull request date mitigation
| 1
|
246,176
| 26,600,345,026
|
IssuesEvent
|
2023-01-23 15:21:15
|
lukebrogan-mend/django.nV
|
https://api.github.com/repos/lukebrogan-mend/django.nV
|
closed
|
CVE-2016-2513 (Low) detected in Django-1.8.3-py2.py3-none-any.whl - autoclosed
|
security vulnerability
|
## CVE-2016-2513 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-1.8.3-py2.py3-none-any.whl</b></p></summary>
<p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Django-1.8.3-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebroganws/django.nV/commit/442c6c7076c373c9762f875ec09227c88ad5d198">442c6c7076c373c9762f875ec09227c88ad5d198</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The password hasher in contrib/auth/hashers.py in Django before 1.8.10 and 1.9.x before 1.9.3 allows remote attackers to enumerate users via a timing attack involving login requests.
<p>Publish Date: 2016-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2513>CVE-2016-2513</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2513">https://nvd.nist.gov/vuln/detail/CVE-2016-2513</a></p>
<p>Release Date: 2016-04-08</p>
<p>Fix Resolution: 1.8.10,1.9.3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2016-2513 (Low) detected in Django-1.8.3-py2.py3-none-any.whl - autoclosed - ## CVE-2016-2513 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-1.8.3-py2.py3-none-any.whl</b></p></summary>
<p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Django-1.8.3-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebroganws/django.nV/commit/442c6c7076c373c9762f875ec09227c88ad5d198">442c6c7076c373c9762f875ec09227c88ad5d198</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The password hasher in contrib/auth/hashers.py in Django before 1.8.10 and 1.9.x before 1.9.3 allows remote attackers to enumerate users via a timing attack involving login requests.
<p>Publish Date: 2016-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2513>CVE-2016-2513</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2513">https://nvd.nist.gov/vuln/detail/CVE-2016-2513</a></p>
<p>Release Date: 2016-04-08</p>
<p>Fix Resolution: 1.8.10,1.9.3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_test
|
cve low detected in django none any whl autoclosed cve low severity vulnerability vulnerable library django none any whl a high level python web framework that encourages rapid development and clean pragmatic design library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x django none any whl vulnerable library found in head commit a href found in base branch master vulnerability details the password hasher in contrib auth hashers py in django before and x before allows remote attackers to enumerate users via a timing attack involving login requests publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr
| 0
|
98,764
| 8,685,445,805
|
IssuesEvent
|
2018-12-03 07:46:28
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
closed
|
FX Testing 3 : ApiV1OrgsIdGetPathParamIdNullValue
|
FX Testing 3
|
Project : FX Testing 3
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=Y2YwM2Y3NDktODJjZC00OGIzLTg1MmEtNWY5ZDJlMThhZTdi; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 04:37:56 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/null
Request :
Response :
{
"timestamp" : "2018-12-03T04:37:56.660+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/orgs/null"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
1.0
|
FX Testing 3 : ApiV1OrgsIdGetPathParamIdNullValue - Project : FX Testing 3
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=Y2YwM2Y3NDktODJjZC00OGIzLTg1MmEtNWY5ZDJlMThhZTdi; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 04:37:56 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/null
Request :
Response :
{
"timestamp" : "2018-12-03T04:37:56.660+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/orgs/null"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
test
|
fx testing project fx testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api orgs null logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot
| 1
|
444,753
| 31,145,262,547
|
IssuesEvent
|
2023-08-16 05:45:52
|
ErlisI/TTP-CAPSTONE
|
https://api.github.com/repos/ErlisI/TTP-CAPSTONE
|
closed
|
As a Business Owner
|
documentation
|
As a Business Owner, I often find myself overwhelmed with the numerous tasks involved in managing my restaurant efficiently. I need a comprehensive restaurant management app that can help me streamline and centralize all aspects of my business in one place.
|
1.0
|
As a Business Owner - As a Business Owner, I often find myself overwhelmed with the numerous tasks involved in managing my restaurant efficiently. I need a comprehensive restaurant management app that can help me streamline and centralize all aspects of my business in one place.
|
non_test
|
as a business owner as a business owner i often find myself overwhelmed with the numerous tasks involved in managing my restaurant efficiently i need a comprehensive restaurant management app that can help me streamline and centralize all aspects of my business in one place
| 0
|
60,652
| 14,576,715,089
|
IssuesEvent
|
2020-12-18 00:09:15
|
h2oai/wave
|
https://api.github.com/repos/h2oai/wave
|
closed
|
Propagate OIDC refresh token to Python client.
|
feature security
|
`Access` token is already accessible in Python client. For us to be able to access DAI and MLOps we need also `refresh` token.
## Goal
Propagate `refresh` token the same way as `access` token to Python client. It should be present in `q.auth.refresh_token` if OIDC enabled.
|
True
|
Propagate OIDC refresh token to Python client. - `Access` token is already accessible in Python client. For us to be able to access DAI and MLOps we need also `refresh` token.
## Goal
Propagate `refresh` token the same way as `access` token to Python client. It should be present in `q.auth.refresh_token` if OIDC enabled.
|
non_test
|
propagate oidc refresh token to python client access token is already accessible in python client for us to be able to access dai and mlops we need also refresh token goal propagate refresh token the same way as access token to python client it should be present in q auth refresh token if oidc enabled
| 0
|
280,014
| 24,273,735,161
|
IssuesEvent
|
2022-09-28 12:25:09
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
reopened
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/ml/data_frame_analytics/regression_creation·ts - machine learning data frame analytics regression creation electrical grid stability navigates through the wizard and sets all needed fields
|
:ml failed-test
|
A test failed on a tracked branch
```
Error: mlAnalyticsCreateJobWizardTrainingPercentSlider slider value should be '20' (got '10')
at Assertion.assert (/dev/shm/workspace/parallel/7/kibana/packages/kbn-expect/expect.js:100:11)
at Assertion.eql (/dev/shm/workspace/parallel/7/kibana/packages/kbn-expect/expect.js:244:8)
at Object.assertSliderValue (test/functional/services/ml/common_ui.ts:210:30)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at Object.setSliderValue (test/functional/services/ml/common_ui.ts:205:7)
at Object.setTrainingPercent (test/functional/services/ml/data_frame_analytics_creation.ts:278:7)
at Context.<anonymous> (test/functional/apps/ml/data_frame_analytics/regression_creation.ts:95:11)
at Object.apply (/dev/shm/workspace/parallel/7/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:73:16) {
actual: '10',
expected: 20,
showDiff: true
}
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/12454/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/ml/data_frame_analytics/regression_creation·ts","test.name":"machine learning data frame analytics regression creation electrical grid stability navigates through the wizard and sets all needed fields","test.failCount":3}} -->
|
1.0
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/ml/data_frame_analytics/regression_creation·ts - machine learning data frame analytics regression creation electrical grid stability navigates through the wizard and sets all needed fields - A test failed on a tracked branch
```
Error: mlAnalyticsCreateJobWizardTrainingPercentSlider slider value should be '20' (got '10')
at Assertion.assert (/dev/shm/workspace/parallel/7/kibana/packages/kbn-expect/expect.js:100:11)
at Assertion.eql (/dev/shm/workspace/parallel/7/kibana/packages/kbn-expect/expect.js:244:8)
at Object.assertSliderValue (test/functional/services/ml/common_ui.ts:210:30)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at Object.setSliderValue (test/functional/services/ml/common_ui.ts:205:7)
at Object.setTrainingPercent (test/functional/services/ml/data_frame_analytics_creation.ts:278:7)
at Context.<anonymous> (test/functional/apps/ml/data_frame_analytics/regression_creation.ts:95:11)
at Object.apply (/dev/shm/workspace/parallel/7/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:73:16) {
actual: '10',
expected: 20,
showDiff: true
}
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/12454/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/ml/data_frame_analytics/regression_creation·ts","test.name":"machine learning data frame analytics regression creation electrical grid stability navigates through the wizard and sets all needed fields","test.failCount":3}} -->
|
test
|
failing test chrome x pack ui functional tests x pack test functional apps ml data frame analytics regression creation·ts machine learning data frame analytics regression creation electrical grid stability navigates through the wizard and sets all needed fields a test failed on a tracked branch error mlanalyticscreatejobwizardtrainingpercentslider slider value should be got at assertion assert dev shm workspace parallel kibana packages kbn expect expect js at assertion eql dev shm workspace parallel kibana packages kbn expect expect js at object assertslidervalue test functional services ml common ui ts at processticksandrejections internal process task queues js at object setslidervalue test functional services ml common ui ts at object settrainingpercent test functional services ml data frame analytics creation ts at context test functional apps ml data frame analytics regression creation ts at object apply dev shm workspace parallel kibana packages kbn test src functional test runner lib mocha wrap function js actual expected showdiff true first failure
| 1
|
136,903
| 11,092,251,898
|
IssuesEvent
|
2019-12-15 17:43:40
|
ayumi-cloud/oc-security-module
|
https://api.github.com/repos/ayumi-cloud/oc-security-module
|
closed
|
Add blank Ahrefs Crawler records to whitelist
|
Add to Whitelist FINSIHED Firewall Definitions Priority: Medium Testing - Passed enhancement
|
### Enhancement idea
- [x] Add blank Ahrefs Crawler records to whitelist.
Fields | Details
---|---
ISP | OVH SAS
Type | Data Center/Web Hosting/Transit
Hostname | ip-xxx.xxx.xxx.xxx.a.ahrefs.com
Domain | ovh.com
Country | France
City | Roubaix, Hauts-de-France
Note: Has two different bots one for crawling and one for site audits.
|
1.0
|
Add blank Ahrefs Crawler records to whitelist - ### Enhancement idea
- [x] Add blank Ahrefs Crawler records to whitelist.
Fields | Details
---|---
ISP | OVH SAS
Type | Data Center/Web Hosting/Transit
Hostname | ip-xxx.xxx.xxx.xxx.a.ahrefs.com
Domain | ovh.com
Country | France
City | Roubaix, Hauts-de-France
Note: Has two different bots one for crawling and one for site audits.
|
test
|
add blank ahrefs crawler records to whitelist enhancement idea add blank ahrefs crawler records to whitelist fields details isp ovh sas type data center web hosting transit hostname ip xxx xxx xxx xxx a ahrefs com domain ovh com country france city roubaix hauts de france note has two different bots one for crawling and one for site audits
| 1
|
12,960
| 15,214,359,164
|
IssuesEvent
|
2021-02-17 13:06:32
|
BauhausLuftfahrt/PAXelerate
|
https://api.github.com/repos/BauhausLuftfahrt/PAXelerate
|
closed
|
Check functionalities of local EMFStore
|
compatibilty
|
- import and export of models
- versionizing
- exchange
see #88 for potential error
|
True
|
Check functionalities of local EMFStore - - import and export of models
- versionizing
- exchange
see #88 for potential error
|
non_test
|
check functionalities of local emfstore import and export of models versionizing exchange see for potential error
| 0
|
212,912
| 16,504,091,636
|
IssuesEvent
|
2021-05-25 17:05:05
|
Accenture/AmpliGraph
|
https://api.github.com/repos/Accenture/AmpliGraph
|
closed
|
Update docs of BCE loss
|
quality & documentation
|
**Background and Context**
In the constructor of the BCE loss, we need to add details of label smoothing and label weighting under loss params. It is missing currently
**Description**
|
1.0
|
Update docs of BCE loss - **Background and Context**
In the constructor of the BCE loss, we need to add details of label smoothing and label weighting under loss params. It is missing currently
**Description**
|
non_test
|
update docs of bce loss background and context in the constructor of the bce loss we need to add details of label smoothing and label weighting under loss params it is missing currently description
| 0
|
250,753
| 7,987,206,817
|
IssuesEvent
|
2018-07-19 06:52:38
|
ess-dmsc/forward-epics-to-kafka
|
https://api.github.com/repos/ess-dmsc/forward-epics-to-kafka
|
closed
|
Is a scalar value message really 112 bytes?
|
high priority
|
If the statistics reported by the Forwarder are correct then a scalar value pv update message is 112 bytes. This seems very large.
|
1.0
|
Is a scalar value message really 112 bytes? - If the statistics reported by the Forwarder are correct then a scalar value pv update message is 112 bytes. This seems very large.
|
non_test
|
is a scalar value message really bytes if the statistics reported by the forwarder are correct then a scalar value pv update message is bytes this seems very large
| 0
|
104,320
| 13,055,539,797
|
IssuesEvent
|
2020-07-30 01:59:54
|
alice-i-cecile/Fonts-of-Power
|
https://api.github.com/repos/alice-i-cecile/Fonts-of-Power
|
closed
|
Try to fix interaction between prismatic and weapon swapping
|
balance design
|
Really really sad to lose all your power.
Maybe one prismatic affix per piece of gear??
|
1.0
|
Try to fix interaction between prismatic and weapon swapping - Really really sad to lose all your power.
Maybe one prismatic affix per piece of gear??
|
non_test
|
try to fix interaction between prismatic and weapon swapping really really sad to lose all your power maybe one prismatic affix per piece of gear
| 0
|
161,024
| 12,529,897,772
|
IssuesEvent
|
2020-06-04 12:10:53
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
storage: TestInOrderDelivery failed under stress
|
C-test-failure O-robot branch-master
|
SHA: https://github.com/cockroachdb/cockroach/commits/cf4d9a46193b9fc2c63b7adf89b3b0b5c84adb2b
Parameters:
```
TAGS=
GOFLAGS=
```
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=TestInOrderDelivery PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1667384&tab=buildLog
```
panic: locally configured maximum clock offset (1ns) does not match that of node 127.0.0.1:34153 (500ms)
goroutine 798249 [running]:
github.com/cockroachdb/cockroach/pkg/rpc.(*HeartbeatService).Ping(0xc000359460, 0x34b0e80, 0xc00146c720, 0xc001978380, 0xc000359460, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/heartbeat.go:105 +0x512
github.com/cockroachdb/cockroach/pkg/rpc._Heartbeat_Ping_Handler.func1(0x34b0e80, 0xc00146c720, 0x2c256a0, 0xc001978380, 0x2c256a0, 0xc001978380, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/heartbeat.pb.go:224 +0x86
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x34b0e80, 0xc00146c720, 0x2c256a0, 0xc001978380, 0xc00301a880, 0xc00301a8a0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:44 +0xa08
github.com/cockroachdb/cockroach/pkg/rpc.NewServerWithInterceptor.func4(0x34b0e80, 0xc00146c720, 0x2c256a0, 0xc001978380, 0xc00301a880, 0xc00301a8a0, 0x29efcc0, 0x4f8fbe8, 0x2cda980, 0xc000648d00)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:252 +0xac
github.com/cockroachdb/cockroach/pkg/rpc._Heartbeat_Ping_Handler(0x29bace0, 0xc000359460, 0x34b0e80, 0xc00146c720, 0xc001978310, 0xc013b8ff50, 0x0, 0x0, 0xc000648d00, 0xc00031dc0d)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/heartbeat.pb.go:226 +0x158
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc000f321c0, 0x34d6920, 0xc001002600, 0xc000648d00, 0xc00f3d80f0, 0x4cc2ef0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1011 +0x4cd
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc000f321c0, 0x34d6920, 0xc001002600, 0xc000648d00, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1249 +0x1311
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc013d191e0, 0xc000f321c0, 0x34d6920, 0xc001002600, 0xc000648d00)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:680 +0x9f
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:678 +0xa1
goroutine 1 [chan receive]:
testing.(*T).Run(0xc0010f0000, 0x2d4eec0, 0x13, 0x2e8c5d0, 0x922a01)
/usr/local/go/src/testing/testing.go:879 +0x383
testing.runTests.func1(0xc0001e2300)
/usr/local/go/src/testing/testing.go:1119 +0x78
testing.tRunner(0xc0001e2300, 0xc00026bba0)
/usr/local/go/src/testing/testing.go:827 +0xbf
testing.runTests(0xc0002a4740, 0x4cf4700, 0x22c, 0x22c, 0x17783e4)
/usr/local/go/src/testing/testing.go:1117 +0x2aa
testing.(*M).Run(0xc0003f2700, 0x0)
/usr/local/go/src/testing/testing.go:1034 +0x165
github.com/cockroachdb/cockroach/pkg/storage_test.TestMain(0xc0003f2700)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/main_test.go:57 +0x1e1
main.main()
_testmain.go:1162 +0x13d
goroutine 20 [syscall, 4 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:139 +0x9c
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:29 +0x41
goroutine 42 [chan receive]:
github.com/cockroachdb/cockroach/pkg/util/log.flushDaemon()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1227 +0xf2
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:629 +0x11e
goroutine 43 [chan receive, 4 minutes]:
github.com/cockroachdb/cockroach/pkg/util/log.signalFlusher()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:636 +0xab
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:630 +0x136
goroutine 797535 [IO wait]:
internal/poll.runtime_pollWait(0x14c57fd9f820, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:173 +0x66
```
|
1.0
|
storage: TestInOrderDelivery failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/cf4d9a46193b9fc2c63b7adf89b3b0b5c84adb2b
Parameters:
```
TAGS=
GOFLAGS=
```
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=TestInOrderDelivery PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1667384&tab=buildLog
```
panic: locally configured maximum clock offset (1ns) does not match that of node 127.0.0.1:34153 (500ms)
goroutine 798249 [running]:
github.com/cockroachdb/cockroach/pkg/rpc.(*HeartbeatService).Ping(0xc000359460, 0x34b0e80, 0xc00146c720, 0xc001978380, 0xc000359460, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/heartbeat.go:105 +0x512
github.com/cockroachdb/cockroach/pkg/rpc._Heartbeat_Ping_Handler.func1(0x34b0e80, 0xc00146c720, 0x2c256a0, 0xc001978380, 0x2c256a0, 0xc001978380, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/heartbeat.pb.go:224 +0x86
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x34b0e80, 0xc00146c720, 0x2c256a0, 0xc001978380, 0xc00301a880, 0xc00301a8a0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:44 +0xa08
github.com/cockroachdb/cockroach/pkg/rpc.NewServerWithInterceptor.func4(0x34b0e80, 0xc00146c720, 0x2c256a0, 0xc001978380, 0xc00301a880, 0xc00301a8a0, 0x29efcc0, 0x4f8fbe8, 0x2cda980, 0xc000648d00)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:252 +0xac
github.com/cockroachdb/cockroach/pkg/rpc._Heartbeat_Ping_Handler(0x29bace0, 0xc000359460, 0x34b0e80, 0xc00146c720, 0xc001978310, 0xc013b8ff50, 0x0, 0x0, 0xc000648d00, 0xc00031dc0d)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/heartbeat.pb.go:226 +0x158
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc000f321c0, 0x34d6920, 0xc001002600, 0xc000648d00, 0xc00f3d80f0, 0x4cc2ef0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1011 +0x4cd
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc000f321c0, 0x34d6920, 0xc001002600, 0xc000648d00, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:1249 +0x1311
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc013d191e0, 0xc000f321c0, 0x34d6920, 0xc001002600, 0xc000648d00)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:680 +0x9f
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:678 +0xa1
goroutine 1 [chan receive]:
testing.(*T).Run(0xc0010f0000, 0x2d4eec0, 0x13, 0x2e8c5d0, 0x922a01)
/usr/local/go/src/testing/testing.go:879 +0x383
testing.runTests.func1(0xc0001e2300)
/usr/local/go/src/testing/testing.go:1119 +0x78
testing.tRunner(0xc0001e2300, 0xc00026bba0)
/usr/local/go/src/testing/testing.go:827 +0xbf
testing.runTests(0xc0002a4740, 0x4cf4700, 0x22c, 0x22c, 0x17783e4)
/usr/local/go/src/testing/testing.go:1117 +0x2aa
testing.(*M).Run(0xc0003f2700, 0x0)
/usr/local/go/src/testing/testing.go:1034 +0x165
github.com/cockroachdb/cockroach/pkg/storage_test.TestMain(0xc0003f2700)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/main_test.go:57 +0x1e1
main.main()
_testmain.go:1162 +0x13d
goroutine 20 [syscall, 4 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:139 +0x9c
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:29 +0x41
goroutine 42 [chan receive]:
github.com/cockroachdb/cockroach/pkg/util/log.flushDaemon()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1227 +0xf2
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:629 +0x11e
goroutine 43 [chan receive, 4 minutes]:
github.com/cockroachdb/cockroach/pkg/util/log.signalFlusher()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:636 +0xab
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:630 +0x136
goroutine 797535 [IO wait]:
internal/poll.runtime_pollWait(0x14c57fd9f820, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:173 +0x66
```
|
test
|
storage testinorderdelivery failed under stress sha parameters tags goflags to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests testinorderdelivery pkg github com cockroachdb cockroach pkg storage testtimeout stressflags maxtime timeout tee tmp stress log failed test panic locally configured maximum clock offset does not match that of node goroutine github com cockroachdb cockroach pkg rpc heartbeatservice ping go src github com cockroachdb cockroach pkg rpc heartbeat go github com cockroachdb cockroach pkg rpc heartbeat ping handler go src github com cockroachdb cockroach pkg rpc heartbeat pb go github com cockroachdb cockroach vendor github com grpc ecosystem grpc opentracing go otgrpc opentracingserverinterceptor go src github com cockroachdb cockroach vendor github com grpc ecosystem grpc opentracing go otgrpc server go github com cockroachdb cockroach pkg rpc newserverwithinterceptor go src github com cockroachdb cockroach pkg rpc context go github com cockroachdb cockroach pkg rpc heartbeat ping handler go src github com cockroachdb cockroach pkg rpc heartbeat pb go github com cockroachdb cockroach vendor google golang org grpc server processunaryrpc go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server handlestream go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go created by github com cockroachdb cockroach vendor google golang org grpc server 
servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go goroutine testing t run usr local go src testing testing go testing runtests usr local go src testing testing go testing trunner usr local go src testing testing go testing runtests usr local go src testing testing go testing m run usr local go src testing testing go github com cockroachdb cockroach pkg storage test testmain go src github com cockroachdb cockroach pkg storage main test go main main testmain go goroutine os signal signal recv usr local go src runtime sigqueue go os signal loop usr local go src os signal signal unix go created by os signal init usr local go src os signal signal unix go goroutine github com cockroachdb cockroach pkg util log flushdaemon go src github com cockroachdb cockroach pkg util log clog go created by github com cockroachdb cockroach pkg util log init go src github com cockroachdb cockroach pkg util log clog go goroutine github com cockroachdb cockroach pkg util log signalflusher go src github com cockroachdb cockroach pkg util log clog go created by github com cockroachdb cockroach pkg util log init go src github com cockroachdb cockroach pkg util log clog go goroutine internal poll runtime pollwait usr local go src runtime netpoll go
| 1
|
125,219
| 17,835,974,710
|
IssuesEvent
|
2021-09-03 01:09:38
|
varkalaramalingam/test-drone-build
|
https://api.github.com/repos/varkalaramalingam/test-drone-build
|
opened
|
CVE-2021-37712 (High) detected in tar-6.1.0.tgz
|
security vulnerability
|
## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: test-drone-build/package.json</p>
<p>Path to vulnerable library: test-drone-build/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.1.tgz (Root Library)
- terser-webpack-plugin-4.2.3.tgz
- cacache-15.2.0.tgz
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-37712 (High) detected in tar-6.1.0.tgz - ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: test-drone-build/package.json</p>
<p>Path to vulnerable library: test-drone-build/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.1.tgz (Root Library)
- terser-webpack-plugin-4.2.3.tgz
- cacache-15.2.0.tgz
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file test drone build package json path to vulnerable library test drone build node modules tar package json dependency hierarchy react scripts tgz root library terser webpack plugin tgz cacache tgz x tar tgz vulnerable library found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you 
are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource
| 0
|
88,220
| 8,135,671,588
|
IssuesEvent
|
2018-08-20 04:38:54
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
istio/tools/setup_run and update_all out of date
|
area/perf and scalability area/test and release stale
|
Both scripts contain references to bazel artifacts.
|
1.0
|
istio/tools/setup_run and update_all out of date - Both scripts contain references to bazel artifacts.
|
test
|
istio tools setup run and update all out of date both scripts contain references to bazel artifacts
| 1
|
549,186
| 16,087,457,148
|
IssuesEvent
|
2021-04-26 13:03:15
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.gob.mx - site is not usable
|
browser-firefox engine-gecko ml-needsdiagnosis-false os-linux priority-normal
|
<!-- @browser: Firefox 78.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/71469 -->
**URL**: https://www.gob.mx/curp/
**Browser / Version**: Firefox 78.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
no descarga el pdf de el curp solo se queda pensando
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/4/7ded20aa-01b6-4566-a577-86ac6086e693.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200722151235</li><li>channel: esr78</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/4/b5389309-5927-4668-bbca-bc7efe5b0a95)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.gob.mx - site is not usable - <!-- @browser: Firefox 78.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/71469 -->
**URL**: https://www.gob.mx/curp/
**Browser / Version**: Firefox 78.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
no descarga el pdf de el curp solo se queda pensando
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/4/7ded20aa-01b6-4566-a577-86ac6086e693.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200722151235</li><li>channel: esr78</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/4/b5389309-5927-4668-bbca-bc7efe5b0a95)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
site is not usable url browser version firefox operating system linux tested another browser yes chrome problem type site is not usable description buttons or links not working steps to reproduce no descarga el pdf de el curp solo se queda pensando view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
140,617
| 11,353,617,075
|
IssuesEvent
|
2020-01-24 15:54:36
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
opened
|
Test: backup and hot-exit
|
testplan-item
|
Refs: https://github.com/Microsoft/vscode/issues/84672
- [ ] anyOS
- [ ] anyOS
Complexity: 3
This milestone, backup / hot-exit was rewritten to work with custom editors. This test plan item is to verify there are no regressions for existing text based editors.
Steps:
* work with dirty files and dirty untitled editors
* with default settings you should be able to quit VSCode without getting asked to save and all dirty content is restored after next startup
* turn hot exit off in settings and verify that quitting now will ask you to save/revert/cancel for each file or untitled editor
* verify each of save/revert/cancel works as expected and you can close VSCode (unless you picked "Cancel")
* verify that you can kill VSCode (literally the process) with dirty files and upon restart your content is still preserved (we backup after 1s delay after making content changes)
|
1.0
|
Test: backup and hot-exit - Refs: https://github.com/Microsoft/vscode/issues/84672
- [ ] anyOS
- [ ] anyOS
Complexity: 3
This milestone, backup / hot-exit was rewritten to work with custom editors. This test plan item is to verify there are no regressions for existing text based editors.
Steps:
* work with dirty files and dirty untitled editors
* with default settings you should be able to quit VSCode without getting asked to save and all dirty content is restored after next startup
* turn hot exit off in settings and verify that quitting now will ask you to save/revert/cancel for each file or untitled editor
* verify each of save/revert/cancel works as expected and you can close VSCode (unless you picked "Cancel")
* verify that you can kill VSCode (literally the process) with dirty files and upon restart your content is still preserved (we backup after 1s delay after making content changes)
|
test
|
test backup and hot exit refs anyos anyos complexity this milestone backup hot exit was rewritten to work with custom editors this test plan item is to verify there are no regressions for existing text based editors steps work with dirty files and dirty untitled editors with default settings you should be able to quit vscode without getting asked to save and all dirty content is restored after next startup turn hot exit off in settings and verify that quitting now will ask you to save revert cancel for each file or untitled editor verify each of save revert cancel works as expected and you can close vscode unless you picked cancel verify that you can kill vscode literally the process with dirty files and upon restart your content is still preserved we backup after delay after making content changes
| 1
|
611,814
| 18,981,955,674
|
IssuesEvent
|
2021-11-21 02:52:40
|
code-ready/crc
|
https://api.github.com/repos/code-ready/crc
|
closed
|
[BUG] `crc start` exits with "Failed to update cluster ID"
|
kind/bug priority/minor status/stale
|
### General information
* OS: Linux
* Hypervisor: KVM
## CRC version
master with a 4.7.11 bundle
### Steps to reproduce
1. `crc start -b ~/Downloads/crc_libvirt_4.7.11.crcbundle --log-level debug -p ~/pull-secret.txt`
It happened only once for me.
### Expected
A Working cluster
### Actual
A half working cluster
### Logs
```
$ crc start -b ~/Downloads/crc_libvirt_4.7.11.crcbundle --log-level debug -p ~/pull-secret.txt
...
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Waiting for availability of resource type 'clusterversion'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get clusterversion --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.7.11 True False 33d Cluster version is 4.7.11
DEBU NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.7.11 True False 33d Cluster version is 4.7.11
DEBU Running SSH command: timeout 30s oc get clusterversion version -o jsonpath="{['spec']['clusterID']}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
DEBU (crc) DBG | time="2021-06-17T10:38:15+02:00" level=debug msg="Closing plugin on server side"
Failed to update cluster ID: ssh command error:
command : timeout 30s oc get clusterversion version -o jsonpath="{['spec']['clusterID']}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n
```
I don't have a good idea to fix it. If someone has an idea!
|
1.0
|
[BUG] `crc start` exits with "Failed to update cluster ID" - ### General information
* OS: Linux
* Hypervisor: KVM
## CRC version
master with a 4.7.11 bundle
### Steps to reproduce
1. `crc start -b ~/Downloads/crc_libvirt_4.7.11.crcbundle --log-level debug -p ~/pull-secret.txt`
It happened only once for me.
### Expected
A Working cluster
### Actual
A half working cluster
### Logs
```
$ crc start -b ~/Downloads/crc_libvirt_4.7.11.crcbundle --log-level debug -p ~/pull-secret.txt
...
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Waiting for availability of resource type 'clusterversion'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get clusterversion --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.7.11 True False 33d Cluster version is 4.7.11
DEBU NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.7.11 True False 33d Cluster version is 4.7.11
DEBU Running SSH command: timeout 30s oc get clusterversion version -o jsonpath="{['spec']['clusterID']}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
DEBU (crc) DBG | time="2021-06-17T10:38:15+02:00" level=debug msg="Closing plugin on server side"
Failed to update cluster ID: ssh command error:
command : timeout 30s oc get clusterversion version -o jsonpath="{['spec']['clusterID']}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n
```
I don't have a good idea to fix it. If someone has an idea!
|
non_test
|
crc start exits with failed to update cluster id general information os linux hypervisor kvm crc version master with a bundle steps to reproduce crc start b downloads crc libvirt crcbundle log level debug p pull secret txt it happened only once for me expected a working cluster actual a half working cluster logs crc start b downloads crc libvirt crcbundle log level debug p pull secret txt debu running ssh command debu ssh command succeeded debu running ssh command debu ssh command succeeded debu waiting for availability of resource type clusterversion debu retry loop attempt debu running ssh command timeout oc get clusterversion context admin cluster crc kubeconfig opt kubeconfig debu ssh command results err output name version available progressing since status version true false cluster version is debu name version available progressing since status version true false cluster version is debu running ssh command timeout oc get clusterversion version o jsonpath context admin cluster crc kubeconfig opt kubeconfig debu ssh command results err process exited with status output debu making call to close driver server debu crc calling close debu successfully made call to close driver server debu making call to close connection to plugin binary debu crc dbg time level debug msg closing plugin on server side failed to update cluster id ssh command error command timeout oc get clusterversion version o jsonpath context admin cluster crc kubeconfig opt kubeconfig err process exited with status n i don t have a good idea to fix it if someone has an idea
| 0
|
414,885
| 28,008,530,294
|
IssuesEvent
|
2023-03-27 16:50:39
|
dtcenter/METplotpy
|
https://api.github.com/repos/dtcenter/METplotpy
|
closed
|
Enhance the Release Notes by adding dropdown menus
|
type: task priority: low component: documentation requestor: METplus Team
|
Please use [Sphinx Design for Dropdown menus](https://sphinx-design.readthedocs.io/en/latest/dropdowns.html) . This will allow for searches of material hidden within dropdown menus.
Changes will need to be made to the below files:
1. config.py
add 'sphinx_design' to the "extensions =" section. (note the underscore.)
2. docs/requirements.txt file.
add 'sphinx-design==0.3.0' with a dash
3. METplotpy/.github/workflows/documentation.yml
add a line after this example [line 28](https://github.com/dtcenter/METviewer/blob/505660418afbc1debb301c5f0b4bcd28823b3896/.github/workflows/documentation.yml#L28) . Make sure it is correctly indented.
python -m pip install -r docs/requirements.txt
It should look like this:
python -m pip install --upgrade python-dateutil requests sphinx \
sphinx-gallery matplotlib Pillow sphinx_rtd_theme
python -m pip install -r docs/requirements.txt
4. Change python-version: '3.10'
back to python-version: '3.8' in the METcalcpy/.github/workflows/documentation.yml file
The dropdown menus won't work unless this is added.
Panel drop downs would be added for the subcategories and sub-subcategories. For example,
-Repository and build
---Installation
---Static Code Analysis
---Testing
---Continuous Integration
-Documentation
-Library code
---Bugfixes
---Python embedding enhancements
---Miscellaneous
---NetCDF Library
---Statistics computations
etc.
## Describe the Task ##
*Provide a description of the task here.*
### Time Estimate ###
*less than a day*
### Sub-Issues ###
None
### Relevant Deadlines ###
*NONE.*
### Funding Source ###
*Split between accounts 2702691 and 2792542.*
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Task Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Development** issues
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
1.0
|
Enhance the Release Notes by adding dropdown menus - Please use [Sphinx Design for Dropdown menus](https://sphinx-design.readthedocs.io/en/latest/dropdowns.html) . This will allow for searches of material hidden within dropdown menus.
Changes will need to be made to the below files:
1. config.py
add 'sphinx_design' to the "extensions =" section. (note the underscore.)
2. docs/requirements.txt file.
add 'sphinx-design==0.3.0' with a dash
3. METplotpy/.github/workflows/documentation.yml
add a line after this example [line 28](https://github.com/dtcenter/METviewer/blob/505660418afbc1debb301c5f0b4bcd28823b3896/.github/workflows/documentation.yml#L28) . Make sure it is correctly indented.
python -m pip install -r docs/requirements.txt
It should look like this:
python -m pip install --upgrade python-dateutil requests sphinx \
sphinx-gallery matplotlib Pillow sphinx_rtd_theme
python -m pip install -r docs/requirements.txt
4. Change python-version: '3.10'
back to python-version: '3.8' in the METcalcpy/.github/workflows/documentation.yml file
The dropdown menus won't work unless this is added.
Panel drop downs would be added for the subcategories and sub-subcategories. For example,
-Repository and build
---Installation
---Static Code Analysis
---Testing
---Continuous Integration
-Documentation
-Library code
---Bugfixes
---Python embedding enhancements
---Miscellaneous
---NetCDF Library
---Statistics computations
etc.
## Describe the Task ##
*Provide a description of the task here.*
### Time Estimate ###
*less than a day*
### Sub-Issues ###
None
### Relevant Deadlines ###
*NONE.*
### Funding Source ###
*Split between accounts 2702691 and 2792542.*
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Task Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Development** issues
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
non_test
|
enhance the release notes by adding dropdown menus please use this will allow for searches of material hidden within dropdown menus changes will need to be made to the below files config py add sphinx design to the extensions section note the underscore docs requirements txt file add sphinx design with a dash metplotpy github workflows documentation yml add a line after this example make sure it is correctly indented python m pip install r docs requirements txt it should look like this python m pip install upgrade python dateutil requests sphinx sphinx gallery matplotlib pillow sphinx rtd theme python m pip install r docs requirements txt change python version back to python version in the metcalcpy github workflows documentation yml file the dropdown menus won t work unless this is added panel drop downs would be added for the subcategories and sub subcategories for example repository and build installation static code analysis testing continuous integration documentation library code bugfixes python embedding enhancements miscellaneous netcdf library statistics computations etc describe the task provide a description of the task here time estimate less than a day sub issues none relevant deadlines none funding source split between accounts and define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components task checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit 
tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and development issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
| 0
|
89,689
| 8,212,041,766
|
IssuesEvent
|
2018-09-04 15:16:25
|
nasa-gibs/worldview
|
https://api.github.com/repos/nasa-gibs/worldview
|
closed
|
Zoomed out button allows you to 'click through' to map
|
bug testing
|
**Describe the bug**
Zooming out all the way using Zoom - button allows you to 'click through' to map once button is disabled
**To Reproduce**
Steps to reproduce the behavior:
1. Click zoom - button until it is disabled
2. The click now 'clicks through' the disabled button and moves the map
**Expected behavior**
Can't click through disabled zoom button.
**Desktop (please complete the following information):**
- OS: Windows
- Browser Firefox
- Version 2.8.0
|
1.0
|
Zoomed out button allows you to 'click through' to map - **Describe the bug**
Zooming out all the way using Zoom - button allows you to 'click through' to map once button is disabled
**To Reproduce**
Steps to reproduce the behavior:
1. Click zoom - button until it is disabled
2. The click now 'clicks through' the disabled button and moves the map
**Expected behavior**
Can't click through disabled zoom button.
**Desktop (please complete the following information):**
- OS: Windows
- Browser Firefox
- Version 2.8.0
|
test
|
zoomed out button allows you to click through to map describe the bug zooming out all the way using zoom button allows you to click through to map once button is disabled to reproduce steps to reproduce the behavior click zoom button until it is disabled the click now clicks through the disabled button and moves the map expected behavior can t click through disabled zoom button desktop please complete the following information os windows browser firefox version
| 1
|
350,742
| 31,932,003,497
|
IssuesEvent
|
2023-09-19 08:04:42
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
opened
|
Fix tensor.test_tensorflow__rfloordiv__
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
Fix tensor.test_tensorflow__rfloordiv__ - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6230808265/job/16911378107"><img src=https://img.shields.io/badge/-failure-red></a>
|
test
|
fix tensor test tensorflow rfloordiv numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
| 1
|
178,045
| 13,759,000,373
|
IssuesEvent
|
2020-10-07 01:43:20
|
heremaps/xyz-spaces-python
|
https://api.github.com/repos/heremaps/xyz-spaces-python
|
closed
|
Test Spaces functionalities
|
no-issue-activity test
|
Please test below functionalities:
- All methods of the **Space** class - [API Reference](https://xyz-spaces-python.readthedocs.io/en/latest/xyzspaces.spaces.html) and [Example Notebook](https://github.com/heremaps/xyz-spaces-python/blob/master/docs/notebooks/spaces_class_example.ipynb)
- Performance test for methods **add_features_geojson** and **add_features** of **Space** class and the file size limit for **add_features_geojson** .
|
1.0
|
Test Spaces functionalities - Please test below functionalities:
- All methods of the **Space** class - [API Reference](https://xyz-spaces-python.readthedocs.io/en/latest/xyzspaces.spaces.html) and [Example Notebook](https://github.com/heremaps/xyz-spaces-python/blob/master/docs/notebooks/spaces_class_example.ipynb)
- Performance test for methods **add_features_geojson** and **add_features** of **Space** class and the file size limit for **add_features_geojson** .
|
test
|
test spaces functionalities please test below functionalities all methods of the space class and performance test for methods add features geojson and add features of space class and the file size limit for add features geojson
| 1
|
305,789
| 26,412,540,241
|
IssuesEvent
|
2023-01-13 13:27:56
|
hashgraph/hedera-services
|
https://api.github.com/repos/hashgraph/hedera-services
|
opened
|
Add fuzzing tests for EthereumTransaction resulting in lazy creates
|
Test Development Limechain
|
### Problem
We want to add fuzzing tests for EthereumTransaction.
### Solution
EthereumTransaction should result in lazy creates
### Alternatives
_No response_
|
1.0
|
Add fuzzing tests for EthereumTransaction resulting in lazy creates - ### Problem
We want to add fuzzing tests for EthereumTransaction.
### Solution
EthereumTransaction should result in lazy creates
### Alternatives
_No response_
|
test
|
add fuzzing tests for ethereumtransaction resulting in lazy creates problem we want to add fuzzing tests for ethereumtransaction solution ethereumtransaction should result in lazy creates alternatives no response
| 1
|
268,151
| 23,349,335,574
|
IssuesEvent
|
2022-08-09 21:25:14
|
tarantool/vshard
|
https://api.github.com/repos/tarantool/vshard
|
closed
|
flaky test: upgrade/upgrade.test.lua
|
qa teamS flaky test
|
Sometimes storage_1_b doesn't upgrade its schema.
```
2022-05-23T22:13:09.5933864Z upgrade/upgrade.test.lua [ fail ]
2022-05-23T22:13:09.5934141Z
2022-05-23T22:13:09.5935780Z Test failed! Result content mismatch:
2022-05-23T22:13:09.5942229Z --- upgrade/upgrade.result Mon May 23 22:08:57 2022
2022-05-23T22:13:09.5942946Z +++ /home/runner/work/vshard/vshard/test/var/rejects/upgrade/upgrade.reject Mon May 23 22:13:09 2022
2022-05-23T22:13:09.5943896Z @@ -180,11 +180,10 @@
2022-05-23T22:13:09.5944189Z | ...
2022-05-23T22:13:09.5944557Z box.space._schema:get({'vshard_version'})
2022-05-23T22:13:09.5944879Z | ---
2022-05-23T22:13:09.5989957Z - | - ['vshard_version', 0, 1, 16, 0]
2022-05-23T22:13:09.5991391Z | ...
2022-05-23T22:13:09.5991745Z vshard.storage.internal.schema_current_version()
2022-05-23T22:13:09.5993548Z | ---
2022-05-23T22:13:09.5994264Z - | - '{0.1.16.0}'
2022-05-23T22:13:09.5995001Z + | - '{0.1.15.0}'
2022-05-23T22:13:09.5995692Z | ...
2022-05-23T22:13:09.5996520Z vshard.storage.internal.schema_latest_version
2022-05-23T22:13:09.5997201Z | ---
2022-05-23T22:13:09.5997914Z
2022-05-23T22:13:09.5998270Z Last 15 lines of Tarantool Log file [Instance "box"][/home/runner/work/vshard/vshard/test/var/001_upgrade/box.log]:
2022-05-23T22:13:09.5999144Z 2022-05-23 22:13:07.730 [6394] main/101/box I> assigned id 1 to replica 5e6cfaff-6501-40ce-933e-d3da41253e71
2022-05-23T22:13:09.5999889Z 2022-05-23 22:13:07.730 [6394] main/101/box I> cluster uuid 4d77faa2-08ec-45c0-8070-30db701e2a4b
2022-05-23T22:13:09.6000668Z 2022-05-23 22:13:07.732 [6394] snapshot/101/main I> saving snapshot `/home/runner/work/vshard/vshard/test/var/001_upgrade/box/00000000000000000000.snap.inprogress'
2022-05-23T22:13:09.6001154Z 2022-05-23 22:13:07.734 [6394] snapshot/101/main I> done
2022-05-23T22:13:09.6002603Z 2022-05-23 22:13:07.734 [6394] main/101/box I> ready to accept requests
2022-05-23T22:13:09.6003335Z 2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> started
2022-05-23T22:13:09.6004180Z 2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> scheduled the next snapshot at Mon May 23 23:52:50 2022
2022-05-23T22:13:09.6018114Z 2022-05-23 22:13:07.735 [6394] main/113/console/::1:12142 I> started
2022-05-23T22:13:09.6018564Z 2022-05-23 22:13:07.735 [6394] main C> entering the event loop
2022-05-23T22:13:09.6018878Z Previous HEAD position was e42d3e3 doc: create 0.1.20 changelog
2022-05-23T22:13:09.6019229Z HEAD is now at 79a4dbf Improve compatibility with 1.9
2022-05-23T22:13:09.6019792Z 2022-05-23 22:13:08.296 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23T22:13:09.6020355Z 2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_1_a"
2022-05-23T22:13:09.6020905Z 2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23T22:13:09.6021446Z 2022-05-23 22:13:08.406 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_2_a"
2022-05-23T22:13:09.6021826Z Reproduce file /home/runner/work/vshard/vshard/test/var/reproduce/001_upgrade.list.yaml
2022-05-23T22:13:09.6022156Z ---
2022-05-23T22:13:09.6022442Z - [upgrade/upgrade.test.lua, null]
2022-05-23T22:13:09.6022672Z ...
```
Logs don't tell much. Happens on 1.10, I could only reproduce it in CI, disappears after some re-runs.
|
1.0
|
flaky test: upgrade/upgrade.test.lua - Sometimes storage_1_b doesn't upgrade its schema.
```
2022-05-23T22:13:09.5933864Z upgrade/upgrade.test.lua [ fail ]
2022-05-23T22:13:09.5934141Z
2022-05-23T22:13:09.5935780Z Test failed! Result content mismatch:
2022-05-23T22:13:09.5942229Z --- upgrade/upgrade.result Mon May 23 22:08:57 2022
2022-05-23T22:13:09.5942946Z +++ /home/runner/work/vshard/vshard/test/var/rejects/upgrade/upgrade.reject Mon May 23 22:13:09 2022
2022-05-23T22:13:09.5943896Z @@ -180,11 +180,10 @@
2022-05-23T22:13:09.5944189Z | ...
2022-05-23T22:13:09.5944557Z box.space._schema:get({'vshard_version'})
2022-05-23T22:13:09.5944879Z | ---
2022-05-23T22:13:09.5989957Z - | - ['vshard_version', 0, 1, 16, 0]
2022-05-23T22:13:09.5991391Z | ...
2022-05-23T22:13:09.5991745Z vshard.storage.internal.schema_current_version()
2022-05-23T22:13:09.5993548Z | ---
2022-05-23T22:13:09.5994264Z - | - '{0.1.16.0}'
2022-05-23T22:13:09.5995001Z + | - '{0.1.15.0}'
2022-05-23T22:13:09.5995692Z | ...
2022-05-23T22:13:09.5996520Z vshard.storage.internal.schema_latest_version
2022-05-23T22:13:09.5997201Z | ---
2022-05-23T22:13:09.5997914Z
2022-05-23T22:13:09.5998270Z Last 15 lines of Tarantool Log file [Instance "box"][/home/runner/work/vshard/vshard/test/var/001_upgrade/box.log]:
2022-05-23T22:13:09.5999144Z 2022-05-23 22:13:07.730 [6394] main/101/box I> assigned id 1 to replica 5e6cfaff-6501-40ce-933e-d3da41253e71
2022-05-23T22:13:09.5999889Z 2022-05-23 22:13:07.730 [6394] main/101/box I> cluster uuid 4d77faa2-08ec-45c0-8070-30db701e2a4b
2022-05-23T22:13:09.6000668Z 2022-05-23 22:13:07.732 [6394] snapshot/101/main I> saving snapshot `/home/runner/work/vshard/vshard/test/var/001_upgrade/box/00000000000000000000.snap.inprogress'
2022-05-23T22:13:09.6001154Z 2022-05-23 22:13:07.734 [6394] snapshot/101/main I> done
2022-05-23T22:13:09.6002603Z 2022-05-23 22:13:07.734 [6394] main/101/box I> ready to accept requests
2022-05-23T22:13:09.6003335Z 2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> started
2022-05-23T22:13:09.6004180Z 2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> scheduled the next snapshot at Mon May 23 23:52:50 2022
2022-05-23T22:13:09.6018114Z 2022-05-23 22:13:07.735 [6394] main/113/console/::1:12142 I> started
2022-05-23T22:13:09.6018564Z 2022-05-23 22:13:07.735 [6394] main C> entering the event loop
2022-05-23T22:13:09.6018878Z Previous HEAD position was e42d3e3 doc: create 0.1.20 changelog
2022-05-23T22:13:09.6019229Z HEAD is now at 79a4dbf Improve compatibility with 1.9
2022-05-23T22:13:09.6019792Z 2022-05-23 22:13:08.296 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23T22:13:09.6020355Z 2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_1_a"
2022-05-23T22:13:09.6020905Z 2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23T22:13:09.6021446Z 2022-05-23 22:13:08.406 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_2_a"
2022-05-23T22:13:09.6021826Z Reproduce file /home/runner/work/vshard/vshard/test/var/reproduce/001_upgrade.list.yaml
2022-05-23T22:13:09.6022156Z ---
2022-05-23T22:13:09.6022442Z - [upgrade/upgrade.test.lua, null]
2022-05-23T22:13:09.6022672Z ...
```
Logs don't tell much. Happens on 1.10, I could only reproduce it in CI, disappears after some re-runs.
|
test
|
flaky test upgrade upgrade test lua sometimes storage b doesn t upgrade its schema upgrade upgrade test lua test failed result content mismatch upgrade upgrade result mon may home runner work vshard vshard test var rejects upgrade upgrade reject mon may box space schema get vshard version vshard storage internal schema current version vshard storage internal schema latest version last lines of tarantool log file main box i assigned id to replica main box i cluster uuid snapshot main i saving snapshot home runner work vshard vshard test var upgrade box snap inprogress snapshot main i done main box i ready to accept requests main checkpoint daemon i started main checkpoint daemon i scheduled the next snapshot at mon may main console i started main c entering the event loop previous head position was doc create changelog head is now at improve compatibility with main console i waiting until slaves are connected to a master main console i slaves are connected to a master storage a main console i waiting until slaves are connected to a master main console i slaves are connected to a master storage a reproduce file home runner work vshard vshard test var reproduce upgrade list yaml logs don t tell much happens on i could only reproduce it in ci disappears after some re runs
| 1
|
289,976
| 25,028,611,383
|
IssuesEvent
|
2022-11-04 10:19:11
|
wazuh/wazuh-qa
|
https://api.github.com/repos/wazuh/wazuh-qa
|
closed
|
Restrict agent upgrade module configuration to local settings
|
team/qa type/dev-testing subteam/qa-main target/4.3.10
|
| Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
|4.3.10 | |[wazuh#15259](https://github.com/wazuh/wazuh/pull/15259)|
<!-- Important: No section may be left blank. If not, delete it directly (in principle only Steps to reproduce could be left blank in case of not proceeding, although there are always exceptions). -->
## Description
This PR aims to prevent the agent from parsing <agent-upgrade> configuration from file agent.conf.
## Proposed checks
- [x] The `<agent-upgrade>` block is not parsed when defined at _agent.conf_.
## Steps to reproduce
- Put an `<agent-upgrade>` stanza into file _agent.conf_.
- Check that the agent applied that via API request or behavior change.
## Expected results
The agent must not parse `<agent-upgrade>` from _agent.conf_.
|
1.0
|
Restrict agent upgrade module configuration to local settings - | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
|4.3.10 | |[wazuh#15259](https://github.com/wazuh/wazuh/pull/15259)|
<!-- Important: No section may be left blank. If not, delete it directly (in principle only Steps to reproduce could be left blank in case of not proceeding, although there are always exceptions). -->
## Description
This PR aims to prevent the agent from parsing <agent-upgrade> configuration from file agent.conf.
## Proposed checks
- [x] The `<agent-upgrade>` block is not parsed when defined at _agent.conf_.
## Steps to reproduce
- Put an `<agent-upgrade>` stanza into file _agent.conf_.
- Check that the agent applied that via API request or behavior change.
## Expected results
The agent must not parse `<agent-upgrade>` from _agent.conf_.
|
test
|
restrict agent upgrade module configuration to local settings target version related issue related pr description this pr aims to prevent the agent from parsing configuration from file agent conf proposed checks the block is not parsed when defined at agent conf steps to reproduce put an stanza into file agent conf check that the agent applied that via api request or behavior change expected results the agent must not parse from agent conf
| 1
|
516,895
| 14,990,125,580
|
IssuesEvent
|
2021-01-29 05:39:33
|
buddyboss/buddyboss-platform
|
https://api.github.com/repos/buddyboss/buddyboss-platform
|
closed
|
Problem loading members in Group invite tab while creating group or after group created
|
bug priority: medium
|
**Describe the bug**
Member listing does not load new members on bottom scroll in group invite tab. It shows loader only.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to any group -> invite member tab
2. Scroll to bottom in left side member listing
3. It just shows the loader
**Expected behavior**
It should load new member on scroll to bottom.
**Screenshots**
https://prnt.sc/xpcdkr
|
1.0
|
Problem loading members in Group invite tab while creating group or after group created - **Describe the bug**
Member listing does not load new members on bottom scroll in group invite tab. It shows loader only.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to any group -> invite member tab
2. Scroll to bottom in left side member listing
3. It just shows the loader
**Expected behavior**
It should load new member on scroll to bottom.
**Screenshots**
https://prnt.sc/xpcdkr
|
non_test
|
problem loading members in group invite tab while creating group or after group created describe the bug member listing does not load new members on bottom scroll in group invite tab it shows loader only to reproduce steps to reproduce the behavior go to any group invite member tab scroll to bottom in left side member listing it just shows the loader expected behavior it should load new member on scroll to bottom screenshots
| 0
|
151,094
| 5,798,145,852
|
IssuesEvent
|
2017-05-03 00:20:37
|
Brevada/brv
|
https://api.github.com/repos/Brevada/brv
|
closed
|
Correlation between Brevada scores and financials
|
Business Development low priority
|
We should keep this in mind as a stat to look at. This would prove our accuracy.
|
1.0
|
Correlation between Brevada scores and financials - We should keep this in mind as a stat to look at. This would prove our accuracy.
|
non_test
|
correlation between brevada scores and financials we should keep this in mind as a stat to look at this would prove our accuracy
| 0
|
184,427
| 14,979,620,212
|
IssuesEvent
|
2021-01-28 12:31:08
|
tridactyl/tridactyl
|
https://api.github.com/repos/tridactyl/tridactyl
|
closed
|
Stop recommending sanitise at top of RC files
|
P4 documentation enhancement
|
https://github.com/tridactyl/tridactyl/blob/134bc4d1ee6fdffda425788cccda6eebbb099670/src/excmds.ts#L757
A better suggestion would be `bind ZZ composite sanitise ... ; qall` in light of #1409
Edit: should first check that that bind actually works
|
1.0
|
Stop recommending sanitise at top of RC files - https://github.com/tridactyl/tridactyl/blob/134bc4d1ee6fdffda425788cccda6eebbb099670/src/excmds.ts#L757
A better suggestion would be `bind ZZ composite sanitise ... ; qall` in light of #1409
Edit: should first check that that bind actually works
|
non_test
|
stop recommending sanitise at top of rc files a better suggestion would be bind zz composite sanitise qall in light of edit should first check that that bind actually works
| 0
|
693,600
| 23,783,045,766
|
IssuesEvent
|
2022-09-02 07:29:13
|
googleapis/python-aiplatform
|
https://api.github.com/repos/googleapis/python-aiplatform
|
closed
|
tests.system.aiplatform.test_model_monitoring.TestModelDeploymentMonitoring: test_mdm_two_models_one_valid_config failed
|
type: bug priority: p1 flakybot: issue flakybot: flaky api: vertex-ai
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 6bafca5890c3eec759c7303ba0f441fa606504aa
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8b3098a7-01f1-4f02-a4dd-93187cb2506f), [Sponge](http://sponge2/8b3098a7-01f1-4f02-a4dd-93187cb2506f)
status: failed
<details><summary>Test output</summary><br><pre>args = (name: "projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320"
,)
kwargs = {'metadata': [('x-goog-request-params', 'name=projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320'), ('x-goog-api-client', 'model-builder/1.16.1 gl-python/3.8.12 grpc/1.47.0 gax/1.32.0 gapic/1.16.1')]}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f2590382d90>
request = name: "projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320"
timeout = None
metadata = [('x-goog-request-params', 'name=projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320'), ('x-goog-api-client', 'model-builder/1.16.1 gl-python/3.8.12 grpc/1.47.0 gax/1.32.0 gapic/1.16.1')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f2591c7da30>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f2592a15840>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.NOT_FOUND
E details = "Endpoint projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found."
E debug_error_string = "{"created":"@1662004156.135679889","description":"Error received from peer ipv4:74.125.142.95:443","file":"src/core/lib/surface/call.cc","file_line":966,"grpc_message":"Endpoint projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found.","grpc_status":5}"
E >
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self = <tests.system.aiplatform.test_model_monitoring.TestModelDeploymentMonitoring object at 0x7f25982c90a0>
def test_mdm_two_models_one_valid_config(self):
"""
Enable model monitoring on two existing models deployed to the same endpoint.
"""
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
predict_instance_schema_uri="",
analysis_instance_schema_uri="",
)
tests/system/aiplatform/test_model_monitoring.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:2330: in create
mdm_objective_config_seq = cls._parse_configs(
google/cloud/aiplatform/jobs.py:2107: in _parse_configs
for model in endpoint.list_models():
google/cloud/aiplatform/models.py:1650: in list_models
self._sync_gca_resource()
google/cloud/aiplatform/base.py:642: in _sync_gca_resource
self._gca_resource = self._get_gca_resource(resource_name=self.resource_name)
google/cloud/aiplatform/base.py:672: in resource_name
self._assert_gca_resource_is_available()
google/cloud/aiplatform/models.py:224: in _assert_gca_resource_is_available
self._sync_gca_resource_if_skipped()
google/cloud/aiplatform/models.py:216: in _sync_gca_resource_if_skipped
self._gca_resource = self._get_gca_resource(
google/cloud/aiplatform/base.py:635: in _get_gca_resource
return getattr(self.api_client, self._getter_method)(
google/cloud/aiplatform_v1/services/endpoint_service/client.py:732: in get_endpoint
response = rpc(
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
return retry_target(
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:189: in retry_target
return target()
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:69: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.NOT_FOUND
details = "Endpoint projects/precise-t...point projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found.","grpc_status":5}"
>
> ???
E google.api_core.exceptions.NotFound: 404 Endpoint projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found.
<string>:3: NotFound</pre></details>
|
1.0
|
tests.system.aiplatform.test_model_monitoring.TestModelDeploymentMonitoring: test_mdm_two_models_one_valid_config failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 6bafca5890c3eec759c7303ba0f441fa606504aa
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8b3098a7-01f1-4f02-a4dd-93187cb2506f), [Sponge](http://sponge2/8b3098a7-01f1-4f02-a4dd-93187cb2506f)
status: failed
<details><summary>Test output</summary><br><pre>args = (name: "projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320"
,)
kwargs = {'metadata': [('x-goog-request-params', 'name=projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320'), ('x-goog-api-client', 'model-builder/1.16.1 gl-python/3.8.12 grpc/1.47.0 gax/1.32.0 gapic/1.16.1')]}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f2590382d90>
request = name: "projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320"
timeout = None
metadata = [('x-goog-request-params', 'name=projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320'), ('x-goog-api-client', 'model-builder/1.16.1 gl-python/3.8.12 grpc/1.47.0 gax/1.32.0 gapic/1.16.1')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f2591c7da30>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f2592a15840>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.NOT_FOUND
E details = "Endpoint projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found."
E debug_error_string = "{"created":"@1662004156.135679889","description":"Error received from peer ipv4:74.125.142.95:443","file":"src/core/lib/surface/call.cc","file_line":966,"grpc_message":"Endpoint projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found.","grpc_status":5}"
E >
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self = <tests.system.aiplatform.test_model_monitoring.TestModelDeploymentMonitoring object at 0x7f25982c90a0>
def test_mdm_two_models_one_valid_config(self):
"""
Enable model monitoring on two existing models deployed to the same endpoint.
"""
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
predict_instance_schema_uri="",
analysis_instance_schema_uri="",
)
tests/system/aiplatform/test_model_monitoring.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:2330: in create
mdm_objective_config_seq = cls._parse_configs(
google/cloud/aiplatform/jobs.py:2107: in _parse_configs
for model in endpoint.list_models():
google/cloud/aiplatform/models.py:1650: in list_models
self._sync_gca_resource()
google/cloud/aiplatform/base.py:642: in _sync_gca_resource
self._gca_resource = self._get_gca_resource(resource_name=self.resource_name)
google/cloud/aiplatform/base.py:672: in resource_name
self._assert_gca_resource_is_available()
google/cloud/aiplatform/models.py:224: in _assert_gca_resource_is_available
self._sync_gca_resource_if_skipped()
google/cloud/aiplatform/models.py:216: in _sync_gca_resource_if_skipped
self._gca_resource = self._get_gca_resource(
google/cloud/aiplatform/base.py:635: in _get_gca_resource
return getattr(self.api_client, self._getter_method)(
google/cloud/aiplatform_v1/services/endpoint_service/client.py:732: in get_endpoint
response = rpc(
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
return retry_target(
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:189: in retry_target
return target()
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:69: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.NOT_FOUND
details = "Endpoint projects/precise-t...point projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found.","grpc_status":5}"
>
> ???
E google.api_core.exceptions.NotFound: 404 Endpoint projects/precise-truck-742/locations/us-central1/endpoints/8289570005524152320 is not found.
<string>:3: NotFound</pre></details>
|
non_test
|
tests system aiplatform test model monitoring testmodeldeploymentmonitoring test mdm two models one valid config failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output args name projects precise truck locations us endpoints kwargs metadata six wraps callable def error remapped callable args kwargs try return callable args kwargs nox system lib site packages google api core grpc helpers py self request name projects precise truck locations us endpoints timeout none metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox system lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous multithreadedrendezvous state call none deadline return state response rendezvous else return state response else raise inactiverpcerror state e grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with e status statuscode not found e details endpoint projects precise truck locations us endpoints is not found e debug error string created description error received from peer file src core lib surface call cc file line grpc message endpoint projects precise truck locations us endpoints is not found grpc status e nox system lib site packages grpc channel py inactiverpcerror the above exception was the direct cause of the following exception self def test mdm two models one valid config self enable model monitoring on two existing models deployed to the same endpoint test model monitoring configurations job aiplatform modeldeploymentmonitoringjob 
create display name self make display name key job name logging sampling strategy sampling strategy schedule config schedule config alert config alert config objective configs objective config create request timeout project base project location base location endpoint self endpoint predict instance schema uri analysis instance schema uri tests system aiplatform test model monitoring py google cloud aiplatform jobs py in create mdm objective config seq cls parse configs google cloud aiplatform jobs py in parse configs for model in endpoint list models google cloud aiplatform models py in list models self sync gca resource google cloud aiplatform base py in sync gca resource self gca resource self get gca resource resource name self resource name google cloud aiplatform base py in resource name self assert gca resource is available google cloud aiplatform models py in assert gca resource is available self sync gca resource if skipped google cloud aiplatform models py in sync gca resource if skipped self gca resource self get gca resource google cloud aiplatform base py in get gca resource return getattr self api client self getter method google cloud aiplatform services endpoint service client py in get endpoint response rpc nox system lib site packages google api core gapic method py in call return wrapped func args kwargs nox system lib site packages google api core retry py in retry wrapped func return retry target nox system lib site packages google api core retry py in retry target return target nox system lib site packages google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value none from value inactiverpcerror of rpc that terminated with status statuscode not found details endpoint projects precise t point projects precise truck locations us endpoints is not found grpc status e google api core exceptions notfound endpoint projects precise truck locations us endpoints is not found notfound
| 0
|
322,431
| 27,603,054,942
|
IssuesEvent
|
2023-03-09 11:11:45
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: hibernate failed
|
C-test-failure O-robot O-roachtest release-blocker branch-release-22.1
|
roachtest.hibernate [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8981523?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8981523?buildTab=artifacts#/hibernate) on release-22.1 @ [14204b0e190f146ca46f2cc16f15d0949fc97bbd](https://github.com/cockroachdb/cockroach/commits/14204b0e190f146ca46f2cc16f15d0949fc97bbd):
```
test artifacts and logs in: /artifacts/hibernate/run_1
(orm_helpers.go:193).summarizeFailed:
Tests run on Cockroach v22.1.16-47-g14204b0e19
Tests run against hibernate 5.4.30
8106 Total Tests Run
8088 tests passed
18 tests failed
1901 tests skipped
0 tests ignored
0 tests passed unexpectedly
2 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: org.hibernate.serialization.SessionFactorySerializationTest.testUnNamedSessionFactorySerialization - unknown (unexpected)
--- FAIL: org.hibernate.serialization.SessionFactorySerializationTest.testNamedSessionFactorySerialization - unknown (unexpected)
For a full summary look at the hibernate artifacts
An updated blocklist (hibernateBlockList22_1) is available in the artifacts' hibernate log
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #97842 roachtest: hibernate failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-master]
- #96493 roachtest: hibernate failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-release-22.2]
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*hibernate.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: hibernate failed - roachtest.hibernate [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8981523?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8981523?buildTab=artifacts#/hibernate) on release-22.1 @ [14204b0e190f146ca46f2cc16f15d0949fc97bbd](https://github.com/cockroachdb/cockroach/commits/14204b0e190f146ca46f2cc16f15d0949fc97bbd):
```
test artifacts and logs in: /artifacts/hibernate/run_1
(orm_helpers.go:193).summarizeFailed:
Tests run on Cockroach v22.1.16-47-g14204b0e19
Tests run against hibernate 5.4.30
8106 Total Tests Run
8088 tests passed
18 tests failed
1901 tests skipped
0 tests ignored
0 tests passed unexpectedly
2 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: org.hibernate.serialization.SessionFactorySerializationTest.testUnNamedSessionFactorySerialization - unknown (unexpected)
--- FAIL: org.hibernate.serialization.SessionFactorySerializationTest.testNamedSessionFactorySerialization - unknown (unexpected)
For a full summary look at the hibernate artifacts
An updated blocklist (hibernateBlockList22_1) is available in the artifacts' hibernate log
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #97842 roachtest: hibernate failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-master]
- #96493 roachtest: hibernate failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-release-22.2]
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*hibernate.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest hibernate failed roachtest hibernate with on release test artifacts and logs in artifacts hibernate run orm helpers go summarizefailed tests run on cockroach tests run against hibernate total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly tests failed unexpectedly tests expected failed but skipped tests expected failed but not run fail org hibernate serialization sessionfactoryserializationtest testunnamedsessionfactoryserialization unknown unexpected fail org hibernate serialization sessionfactoryserializationtest testnamedsessionfactoryserialization unknown unexpected for a full summary look at the hibernate artifacts an updated blocklist is available in the artifacts hibernate log parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see same failure on other branches roachtest hibernate failed roachtest hibernate failed cc cockroachdb sql sessions
| 1
|
6,748
| 3,452,380,140
|
IssuesEvent
|
2015-12-17 03:43:46
|
learning-unlimited/ESP-Website
|
https://api.github.com/repos/learning-unlimited/ESP-Website
|
closed
|
Remove old version of print schedules script
|
Old/unused code
|
Apparently when writing 4368beb7b, I was not very observant and didn't notice that the old print schedules script *was* still around, just in /esp rather than useful_scripts. We should remove the old one.
|
1.0
|
Remove old version of print schedules script - Apparently when writing 4368beb7b, I was not very observant and didn't notice that the old print schedules script *was* still around, just in /esp rather than useful_scripts. We should remove the old one.
|
non_test
|
remove old version of print schedules script apparently when writing i was not very observant and didn t notice that the old print schedules script was still around just in esp rather than useful scripts we should remove the old one
| 0
|
81,719
| 7,800,860,236
|
IssuesEvent
|
2018-06-09 14:27:38
|
SunwellTracker/issues
|
https://api.github.com/repos/SunwellTracker/issues
|
closed
|
The Obsidian Sanctum - Drakes not coming down
|
Works locally | Requires testing
|
Description: We were trying to do OS25 2D. After we initiated Sartharion, Tenebron started to fly towards the raid. When someone attacked him he kept flying and never came down during the whole encounter. What we did was to wait for them to come down before doing any damage to them. This temporarily solved the problem.
How it works: When someone attacks the dragons before they come down, they keep flying for the entirety of the encounter which makes it nearly impossible to finish the boss.
How it should work: It should not matter whether someone attacks the dragons before they come down or not, either way they should come down in due time.
Source (you should point out proofs of your report, please give us some source): -
|
1.0
|
The Obsidian Sanctum - Drakes not coming down - Description: We were trying to do OS25 2D. After we initiated Sartharion, Tenebron started to fly towards the raid. When someone attacked him he kept flying and never came down during the whole encounter. What we did was to wait for them to come down before doing any damage to them. This temporarily solved the problem.
How it works: When someone attacks the dragons before they come down, they keep flying for the entirety of the encounter which makes it nearly impossible to finish the boss.
How it should work: It should not matter whether someone attacks the dragons before they come down or not, either way they should come down in due time.
Source (you should point out proofs of your report, please give us some source): -
|
test
|
the obsidian sanctum drakes not coming down description we were trying to do after we initiated sartharion tenebron started to fly towards the raid when someone attacked him he kept flying and never came down during the whole encounter what we did was to wait for them to come down before doing any damage to them this temporarily solved the problem how it works when someone attacks the dragons before they come down they keep flying for the entirety of the encounter which makes it nearly impossible to finish the boss how it should work it should not matter whether someone attacks the dragons before they come down or not either way they should come down in due time source you should point out proofs of your report please give us some source
| 1
|
135,636
| 11,014,055,124
|
IssuesEvent
|
2019-12-04 21:50:58
|
GreenImp/rpg-dice-roller
|
https://api.github.com/repos/GreenImp/rpg-dice-roller
|
opened
|
Testing for v4.0.0
|
testing
|
v4.0.0 is feature complete, but I don't want to release without tests to cover the functionality.
Previous versions had pretty full tests, but these are no longer valid as the code-base has changed so much.
I've started on the tests already, and have replaced Jasmine with Jest. I've also got unit tests for all the dice and a lot of the modifiers sorted.
Here's an up-to-date list of what still needs to be done:
* [ ] Unit tests
* [x] Dice
* [x] Standard
* [x] Percentile
* [x] Fudge
* [ ] Modifiers
* [x] Comparison
* [x] Critical Success
* [x] Critical Failure
* [x] Drop
* [x] Keep
* [ ] Explode
* [ ] ReRoll
* [ ] Sorting
* [ ] Target (Success / Failure)
* [ ] Parser
* [ ] Results
* [ ] RollResult
* [ ] RollResults
* [ ] DiceRoll
* [ ] DiceRoller
* [ ] Feature tests
* [ ] Rolling dice
|
1.0
|
Testing for v4.0.0 - v4.0.0 is feature complete, but I don't want to release without tests to cover the functionality.
Previous versions had pretty full tests, but these are no longer valid as the code-base has changed so much.
I've started on the tests already, and have replaced Jasmine with Jest. I've also got unit tests for all the dice and a lot of the modifiers sorted.
Here's an up-to-date list of what still needs to be done:
* [ ] Unit tests
* [x] Dice
* [x] Standard
* [x] Percentile
* [x] Fudge
* [ ] Modifiers
* [x] Comparison
* [x] Critical Success
* [x] Critical Failure
* [x] Drop
* [x] Keep
* [ ] Explode
* [ ] ReRoll
* [ ] Sorting
* [ ] Target (Success / Failure)
* [ ] Parser
* [ ] Results
* [ ] RollResult
* [ ] RollResults
* [ ] DiceRoll
* [ ] DiceRoller
* [ ] Feature tests
* [ ] Rolling dice
|
test
|
testing for is feature complete but i don t want to release without tests to cover the functionality previous versions had pretty full tests but these are no longer valid as the code base has changed so much i ve started on the tests already and have replaced jasmine with jest i ve also got unit tests for all the dice and a lot of the modifiers sorted here s an up to date list of what still needs to be done unit tests dice standard percentile fudge modifiers comparison critical success critical failure drop keep explode reroll sorting target success failure parser results rollresult rollresults diceroll diceroller feature tests rolling dice
| 1
|
318,950
| 9,725,671,874
|
IssuesEvent
|
2019-05-30 09:21:14
|
reconhub/earlyR
|
https://api.github.com/repos/reconhub/earlyR
|
closed
|
loglike_to_density is wrong
|
bug top_priority
|
line 60:
`out <- x / abs(max(x, na.rm = TRUE))`
should not be there I think (you can't renormalise like this on the log scale as this is equivalent to raising to a power on the natural scale - this distorts your likelihood profile massively and gives a wrong impression of the likelihood landscape - this will be important to fix if we want to use this to compute confidence intervals on R as suggested in issue #10)
|
1.0
|
loglike_to_density is wrong - line 60:
`out <- x / abs(max(x, na.rm = TRUE))`
should not be there I think (you can't renormalise like this on the log scale as this is equivalent to raising to a power on the natural scale - this distorts your likelihood profile massively and gives a wrong impression of the likelihood landscape - this will be important to fix if we want to use this to compute confidence intervals on R as suggested in issue #10)
|
non_test
|
loglike to density is wrong line out x abs max x na rm true should not be there i think you can t renormalise like this on the log scale as this is equivalent to raising to a power on the natural scale this distorts your likelihood profile massively and gives a wrong impression of the likelihood landscape this will be important to fix if we want to use this to compute confidence intervals on r as suggested in issue
| 0
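The earlyR record above argues that dividing a log-likelihood by |max| amounts to raising the likelihood to a power on the natural scale, flattening the profile. A minimal sketch of that distortion (the log-likelihood values below are made-up illustrative numbers, not earlyR output):

```python
import math

# Two hypothetical log-likelihood values at different parameter points
# (illustrative numbers only; not taken from earlyR).
loglik = [-100.0, -110.0]
m = max(loglik)

# Proper renormalisation on the log scale: subtract the maximum, then
# exponentiate. Point 2 is exp(-10) times as likely as point 1.
correct_ratio = math.exp(loglik[1] - m) / math.exp(loglik[0] - m)

# The flagged line instead divides by abs(max), i.e. raises each
# likelihood to the power 1/abs(m) on the natural scale.
distorted = [v / abs(m) for v in loglik]
distorted_ratio = math.exp(distorted[1]) / math.exp(distorted[0])

print(correct_ratio)    # ~4.54e-05: point 2 is far less likely
print(distorted_ratio)  # ~0.905: the two points now look almost equally likely
```

The division makes strongly disfavoured parameter values appear nearly as plausible as the maximum-likelihood value, which is exactly why the report flags it as dangerous for likelihood-based confidence intervals on R.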
|
580,357
| 17,242,073,842
|
IssuesEvent
|
2021-07-21 00:59:13
|
eatmyvenom/hyarcade
|
https://api.github.com/repos/eatmyvenom/hyarcade
|
closed
|
Update dependencies to use CommandInteractionOptionResolver
|
High priority disc:interactions enhancement refractor t:discord
|
Discord.js recently merged a patch implementing `CommandInteractionOptionResolver` which I should use instead of my current solution
|
1.0
|
Update dependencies to use CommandInteractionOptionResolver - Discord.js recently merged a patch implementing `CommandInteractionOptionResolver` which I should use instead of my current solution
|
non_test
|
update dependencies to use commandinteractionoptionresolver discord js recently merged a patch implementing commandinteractionoptionresolver which i should use instead of my current solution
| 0
|
128,454
| 10,533,684,083
|
IssuesEvent
|
2019-10-01 13:31:38
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
opened
|
DaaLoadTest special_22 Windows crash vmState=0x0002000f
|
comp:jit test failure
|
First occurrence see https://github.com/eclipse/openj9/issues/7276#issuecomment-536573381 and the other comments in that issue.
DaaLoadTest_daa1_special_22
https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_special.system_x86-64_windows_Nightly/369
DaaLoadTest_all_special_22
variation: Mode687
JVM_OPTIONS: -Xcompressedrefs -XX:+UseCompressedOops -Xjit -Xgcpolicy:gencon -Xaggressive
```
DLT stderr Unhandled exception
DLT stderr Type=Segmentation error vmState=0x0002000f
DLT stderr Windows_ExceptionCode=c0000005 J9Generic_Signal=00000004 ExceptionAddress=00007FFC8B85B457 ContextFlags=0010005f
DLT stderr Handler1=00007FFC8BAAC6C0 Handler2=00007FFC8C03CB00 InaccessibleReadAddress=0000000084723318
DLT stderr RDI=000000000DF73AC0 RSI=0000000084723300 RAX=0000000000BB73E0 RBX=0000000000000000
DLT stderr RCX=0000000001620130 RDX=000000000EE77AD8 R8=0000000023DDEA68 R9=0000000023DDEA88
DLT stderr R10=0000000023DDEFF0 R11=00000000072DC1E0 R12=0000000000000000 R13=0000000000000000
DLT stderr R14=00000000016200A0 R15=0000000001ADAE50
DLT stderr RIP=00007FFC8B85B457 RSP=0000000023DDE9B0 RBP=0000000000000000 GS=002B
DLT stderr FS=0053 ES=002B DS=002B
DLT stderr XMM0 43e0000000000000 (f: 0.000000, d: 9.223372e+018)
DLT stderr XMM1 bf2e302be03ce543 (f: 3762087168.000000, d: -2.303175e-004)
DLT stderr XMM2 3f2e5d24f64f6339 (f: 4132397824.000000, d: 2.316578e-004)
DLT stderr XMM3 3fc7565060000000 (f: 1610612736.000000, d: 1.823216e-001)
DLT stderr XMM4 402fe28022000000 (f: 570425344.000000, d: 1.594238e+001)
DLT stderr XMM5 3f2e5cb39384c221 (f: 2474951168.000000, d: 2.316446e-004)
DLT stderr XMM6 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM7 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM8 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM9 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM10 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM11 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM12 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM13 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM14 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM15 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr Module=C:\Users\jenkins\workspace\Test_openjdk8_j9_special.system_x86-64_windows_Nightly\openjdkbinary\j2sdk-image\jre\bin\compressedrefs\j9gc29.dll
DLT stderr Module_base_address=00007FFC8B6A0000 Offset_in_DLL=00000000001bb457
DLT stderr Target=2_90_20190930_162 (Windows Server 2012 R2 6.3 build 9600)
DLT stderr CPU=amd64 (8 logical CPUs) (0x1ffb9c000 RAM)
```
https://ci.eclipse.org/openj9//job/Grinder/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/daaLoadTest&JenkinsFile=openjdk_x86-64_windows&TARGET=DaaLoadTest_all_special_22&SDK_RESOURCE=upstream&CUSTOMIZED_SDK_URL=https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Build_JDK8_x86-64_windows_Nightly/162/OpenJ9-JDK8-x86-64_windows-20191001-031444.tar.gz%20https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Build_JDK8_x86-64_windows_Nightly/162/native-test-libs.tar.gz&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=ab89294b-5ba1-48e9-8c85-107daca5a2e9
|
1.0
|
DaaLoadTest special_22 Windows crash vmState=0x0002000f - First occurrence see https://github.com/eclipse/openj9/issues/7276#issuecomment-536573381 and the other comments in that issue.
DaaLoadTest_daa1_special_22
https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_special.system_x86-64_windows_Nightly/369
DaaLoadTest_all_special_22
variation: Mode687
JVM_OPTIONS: -Xcompressedrefs -XX:+UseCompressedOops -Xjit -Xgcpolicy:gencon -Xaggressive
```
DLT stderr Unhandled exception
DLT stderr Type=Segmentation error vmState=0x0002000f
DLT stderr Windows_ExceptionCode=c0000005 J9Generic_Signal=00000004 ExceptionAddress=00007FFC8B85B457 ContextFlags=0010005f
DLT stderr Handler1=00007FFC8BAAC6C0 Handler2=00007FFC8C03CB00 InaccessibleReadAddress=0000000084723318
DLT stderr RDI=000000000DF73AC0 RSI=0000000084723300 RAX=0000000000BB73E0 RBX=0000000000000000
DLT stderr RCX=0000000001620130 RDX=000000000EE77AD8 R8=0000000023DDEA68 R9=0000000023DDEA88
DLT stderr R10=0000000023DDEFF0 R11=00000000072DC1E0 R12=0000000000000000 R13=0000000000000000
DLT stderr R14=00000000016200A0 R15=0000000001ADAE50
DLT stderr RIP=00007FFC8B85B457 RSP=0000000023DDE9B0 RBP=0000000000000000 GS=002B
DLT stderr FS=0053 ES=002B DS=002B
DLT stderr XMM0 43e0000000000000 (f: 0.000000, d: 9.223372e+018)
DLT stderr XMM1 bf2e302be03ce543 (f: 3762087168.000000, d: -2.303175e-004)
DLT stderr XMM2 3f2e5d24f64f6339 (f: 4132397824.000000, d: 2.316578e-004)
DLT stderr XMM3 3fc7565060000000 (f: 1610612736.000000, d: 1.823216e-001)
DLT stderr XMM4 402fe28022000000 (f: 570425344.000000, d: 1.594238e+001)
DLT stderr XMM5 3f2e5cb39384c221 (f: 2474951168.000000, d: 2.316446e-004)
DLT stderr XMM6 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM7 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM8 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM9 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM10 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM11 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM12 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM13 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM14 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr XMM15 0000000000000000 (f: 0.000000, d: 0.000000e+000)
DLT stderr Module=C:\Users\jenkins\workspace\Test_openjdk8_j9_special.system_x86-64_windows_Nightly\openjdkbinary\j2sdk-image\jre\bin\compressedrefs\j9gc29.dll
DLT stderr Module_base_address=00007FFC8B6A0000 Offset_in_DLL=00000000001bb457
DLT stderr Target=2_90_20190930_162 (Windows Server 2012 R2 6.3 build 9600)
DLT stderr CPU=amd64 (8 logical CPUs) (0x1ffb9c000 RAM)
```
https://ci.eclipse.org/openj9//job/Grinder/parambuild/?JDK_VERSION=8&JDK_IMPL=openj9&BUILD_LIST=system/daaLoadTest&JenkinsFile=openjdk_x86-64_windows&TARGET=DaaLoadTest_all_special_22&SDK_RESOURCE=upstream&CUSTOMIZED_SDK_URL=https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Build_JDK8_x86-64_windows_Nightly/162/OpenJ9-JDK8-x86-64_windows-20191001-031444.tar.gz%20https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Build_JDK8_x86-64_windows_Nightly/162/native-test-libs.tar.gz&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=ab89294b-5ba1-48e9-8c85-107daca5a2e9
|
test
|
daaloadtest special windows crash vmstate first occurrence see and the other comments in that issue daaloadtest special daaloadtest all special variation jvm options xcompressedrefs xx usecompressedoops xjit xgcpolicy gencon xaggressive dlt stderr unhandled exception dlt stderr type segmentation error vmstate dlt stderr windows exceptioncode signal exceptionaddress contextflags dlt stderr inaccessiblereadaddress dlt stderr rdi rsi rax rbx dlt stderr rcx rdx dlt stderr dlt stderr dlt stderr rip rsp rbp gs dlt stderr fs es ds dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr f d dlt stderr module c users jenkins workspace test special system windows nightly openjdkbinary image jre bin compressedrefs dll dlt stderr module base address offset in dll dlt stderr target windows server build dlt stderr cpu logical cpus ram
| 1
|
66,423
| 3,253,730,110
|
IssuesEvent
|
2015-10-19 20:23:57
|
uclouvain/openjpeg
|
https://api.github.com/repos/uclouvain/openjpeg
|
closed
|
OpenJPEG doesn't compile on mac with gcc 4
|
bug Priority-Medium
|
Originally reported on Google Code with ID 243
```
Several methods are forward declared as static but the actual definition does not have
the static keyword (gcc4 with our build options chokes on this).
This was reported in OpenJPEG 2.0.0
MacOSX 10.7.5
Xcode 3.2.6
Attached patch is generated with diff.
```
Reported by `brta.enfocus` on 2013-10-08 12:29:48
<hr>
* *Attachment: [fix gcc 4 compile errors](https://storage.googleapis.com/google-code-attachments/openjpeg/issue-243/comment-0/fix gcc 4 compile errors)*
|
1.0
|
OpenJPEG doesn't compile on mac with gcc 4 - Originally reported on Google Code with ID 243
```
Several methods are forward declared as static but the actual definition does not have
the static keyword (gcc4 with our build options chokes on this).
This was reported in OpenJPEG 2.0.0
MacOSX 10.7.5
Xcode 3.2.6
Attached patch is generated with diff.
```
Reported by `brta.enfocus` on 2013-10-08 12:29:48
<hr>
* *Attachment: [fix gcc 4 compile errors](https://storage.googleapis.com/google-code-attachments/openjpeg/issue-243/comment-0/fix gcc 4 compile errors)*
|
non_test
|
openjpeg doesn t compile on mac with gcc originally reported on google code with id several methods are forward declared as static but the actual definition does not have the static keyword with our build options chokes on this this was reported in openjpeg macosx xcode attached patch is generated with diff reported by brta enfocus on attachment gcc compile errors
| 0
|
339,082
| 30,342,930,965
|
IssuesEvent
|
2023-07-11 13:48:28
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
[CI] org.elasticsearch.snapshots.SnapshotStressTestsIT testRandomActivities
|
:Distributed/Snapshot/Restore >test-failure
|
### CI Link
https://gradle-enterprise.elastic.co/s/2wbpgc72g2qsm/tests/:server:internalClusterTest/org.elasticsearch.snapshots.SnapshotStressTestsIT/testRandomActivities?top-execution=1
### Repro line
./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.snapshots.SnapshotStressTestsIT.testRandomActivities" -Dtests.seed=7FD24A23B626E47 -Dtests.locale=sq-AL -Dtests.timezone=Asia/Khandyga -Druntime.java=20
### Does it reproduce?
Yes
### Applicable branches
8.8
### Failure history
https://gradle-enterprise.elastic.co/scans/tests?search.startTimeMax=1689083199013&search.startTimeMin=1688418000000&search.timeZoneId=Europe/Bucharest&tests.container=org.elasticsearch.snapshots.SnapshotStressTestsIT&tests.test=testRandomActivities
### Failure excerpt
```
java.lang.AssertionError: java.lang.AssertionError: java.lang.NullPointerException: Cannot invoke "org.elasticsearch.cluster.SnapshotsInProgress$Entry.failure()" because "entry" is null
at __randomizedtesting.SeedInfo.seed([7FD24A23B626E47]:0)
at org.elasticsearch.snapshots.SnapshotsService.finalizeSnapshotEntry(SnapshotsService.java:1460)
at org.elasticsearch.snapshots.SnapshotsService.runNextQueuedOperation(SnapshotsService.java:1548)
at org.elasticsearch.snapshots.SnapshotsService.lambda$finalizeSnapshotEntry$28(SnapshotsService.java:1441)
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:158)
at org.elasticsearch.repositories.FinalizeSnapshotContext.onResponse(FinalizeSnapshotContext.java:117)
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.lambda$finalizeSnapshot$41(BlobStoreRepository.java:1390)
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:158)
at org.elasticsearch.action.ActionRunnable$2.accept(ActionRunnable.java:50)
at org.elasticsearch.action.ActionRunnable$2.accept(ActionRunnable.java:47)
at org.elasticsearch.action.ActionRunnable$3.doRun(ActionRunnable.java:72)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1623)
Caused by: java.lang.AssertionError: java.lang.NullPointerException: Cannot invoke "org.elasticsearch.cluster.SnapshotsInProgress$Entry.failure()" because "entry" is null
... 15 more
Caused by: java.lang.NullPointerException: Cannot invoke "org.elasticsearch.cluster.SnapshotsInProgress$Entry.failure()" because "entry" is null
at org.elasticsearch.snapshots.SnapshotsService.finalizeSnapshotEntry(SnapshotsService.java:1324)
... 14 more
```
|
1.0
|
[CI] org.elasticsearch.snapshots.SnapshotStressTestsIT testRandomActivities - ### CI Link
https://gradle-enterprise.elastic.co/s/2wbpgc72g2qsm/tests/:server:internalClusterTest/org.elasticsearch.snapshots.SnapshotStressTestsIT/testRandomActivities?top-execution=1
### Repro line
./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.snapshots.SnapshotStressTestsIT.testRandomActivities" -Dtests.seed=7FD24A23B626E47 -Dtests.locale=sq-AL -Dtests.timezone=Asia/Khandyga -Druntime.java=20
### Does it reproduce?
Yes
### Applicable branches
8.8
### Failure history
https://gradle-enterprise.elastic.co/scans/tests?search.startTimeMax=1689083199013&search.startTimeMin=1688418000000&search.timeZoneId=Europe/Bucharest&tests.container=org.elasticsearch.snapshots.SnapshotStressTestsIT&tests.test=testRandomActivities
### Failure excerpt
```
java.lang.AssertionError: java.lang.AssertionError: java.lang.NullPointerException: Cannot invoke "org.elasticsearch.cluster.SnapshotsInProgress$Entry.failure()" because "entry" is null
at __randomizedtesting.SeedInfo.seed([7FD24A23B626E47]:0)
at org.elasticsearch.snapshots.SnapshotsService.finalizeSnapshotEntry(SnapshotsService.java:1460)
at org.elasticsearch.snapshots.SnapshotsService.runNextQueuedOperation(SnapshotsService.java:1548)
at org.elasticsearch.snapshots.SnapshotsService.lambda$finalizeSnapshotEntry$28(SnapshotsService.java:1441)
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:158)
at org.elasticsearch.repositories.FinalizeSnapshotContext.onResponse(FinalizeSnapshotContext.java:117)
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.lambda$finalizeSnapshot$41(BlobStoreRepository.java:1390)
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:158)
at org.elasticsearch.action.ActionRunnable$2.accept(ActionRunnable.java:50)
at org.elasticsearch.action.ActionRunnable$2.accept(ActionRunnable.java:47)
at org.elasticsearch.action.ActionRunnable$3.doRun(ActionRunnable.java:72)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1623)
Caused by: java.lang.AssertionError: java.lang.NullPointerException: Cannot invoke "org.elasticsearch.cluster.SnapshotsInProgress$Entry.failure()" because "entry" is null
... 15 more
Caused by: java.lang.NullPointerException: Cannot invoke "org.elasticsearch.cluster.SnapshotsInProgress$Entry.failure()" because "entry" is null
at org.elasticsearch.snapshots.SnapshotsService.finalizeSnapshotEntry(SnapshotsService.java:1324)
... 14 more
```
|
test
|
org elasticsearch snapshots snapshotstresstestsit testrandomactivities ci link repro line gradlew server internalclustertest tests org elasticsearch snapshots snapshotstresstestsit testrandomactivities dtests seed dtests locale sq al dtests timezone asia khandyga druntime java does it reproduce yes applicable branches failure history failure excerpt java lang assertionerror java lang assertionerror java lang nullpointerexception cannot invoke org elasticsearch cluster snapshotsinprogress entry failure because entry is null at randomizedtesting seedinfo seed at org elasticsearch snapshots snapshotsservice finalizesnapshotentry snapshotsservice java at org elasticsearch snapshots snapshotsservice runnextqueuedoperation snapshotsservice java at org elasticsearch snapshots snapshotsservice lambda finalizesnapshotentry snapshotsservice java at org elasticsearch action actionlistener onresponse actionlistener java at org elasticsearch repositories finalizesnapshotcontext onresponse finalizesnapshotcontext java at org elasticsearch repositories blobstore blobstorerepository lambda finalizesnapshot blobstorerepository java at org elasticsearch action actionlistener onresponse actionlistener java at org elasticsearch action actionrunnable accept actionrunnable java at org elasticsearch action actionrunnable accept actionrunnable java at org elasticsearch action actionrunnable dorun actionrunnable java at org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java at org elasticsearch common util concurrent abstractrunnable run abstractrunnable java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by java lang assertionerror java lang nullpointerexception cannot invoke org elasticsearch cluster snapshotsinprogress entry failure because entry is null more caused by java lang nullpointerexception cannot invoke org elasticsearch cluster snapshotsinprogress entry failure because entry is null at org elasticsearch snapshots snapshotsservice finalizesnapshotentry snapshotsservice java more
| 1
|
150,517
| 11,964,309,543
|
IssuesEvent
|
2020-04-05 19:05:44
|
bajuwa/ComicCompiler
|
https://api.github.com/repos/bajuwa/ComicCompiler
|
closed
|
Page not being broken on suspected whitespace
|
ComCom Needs Retesting bug
|
Under `./test/default` image40.jpg appears to end in whitespace, but does not 'break' at that point (adding colour error does not resolve the issue).
Test with imagemagick directly to see if the bottom of the image is actually pure white. If not, test with an increased standard deviation to see if that fixes the problem.
|
1.0
|
Page not being broken on suspected whitespace - Under `./test/default` image40.jpg appears to end in whitespace, but does not 'break' at that point (adding colour error does not resolve the issue).
Test with imagemagick directly to see if the bottom of the image is actually pure white. If not, test with an increased standard deviation to see if that fixes the problem.
|
test
|
page not being broken on suspected whitespace under test default jpg appears to end in whitespace but does not break at that point adding colour error does not resolve the issue test with imagemagick directly to see if the bottom of the image is actually pure white if not test with an increased standard deviation to see if that fixes the problem
| 1
|
278,223
| 24,135,095,350
|
IssuesEvent
|
2022-09-21 10:35:39
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[Fleet] Agent activity flyout
|
Team:Fleet QA:Ready for Testing v8.5.0
|
UI changes coming out of https://github.com/elastic/fleet-server/issues/1660
Given the changes to bulk actions, the execution of large agent batches is going to be async, so users are not immediately notified of the action outcome.
In order to improve the UX, there is going to be a new Flyout added on Agent list, to show the progress of the user actions.
[Figma designs](https://www.figma.com/file/3HvuewyuiOMmiPRVUkcltT/Fleet-scalability?node-id=964%3A306001)
Minimum goals:
- Agent activity flyout, should read the data from the new `/action_status` endpoint added in https://github.com/elastic/kibana/pull/138870
- Show actions with agent count, version, scheduled time (for upgrades), time of completion, status with green, red, grey color
- Show in progress actions at the top
- `Abort upgrade` button for Upgrade action (move existing Upgrade callout functionality)
- Finished actions should show up in descending time order
- Show last 10 actions by default, grouped by days
- First action taken, show a guidance tour above `Agent activity` button
Stretch goals:
- `View agents` button that navigates to a filtered list of agents included in the selected action
- `Change schedule` button for Upgrade action
- `Review error log` button that navigates to Discover app to show relevant error logs
- `Show more` button that loads more activity (10 more actions?)
- `Jump to...` button that displays a datepicker to load activity of a selected day.
- `Review errors` button in agent list
- Include Agent policy update in Agent activity
<details>
<summary>Screenshots</summary>
<img width="1156" alt="image" src="https://user-images.githubusercontent.com/90178898/189106630-fa2beb0d-2f27-4bfb-a498-5864aa83581c.png">
<img width="486" alt="image" src="https://user-images.githubusercontent.com/90178898/189106696-4af126e2-e0e6-4d3b-a13e-95d8f7a9ae80.png">
<img width="1081" alt="image" src="https://user-images.githubusercontent.com/90178898/189108023-e281d955-2b2e-4971-8afc-edf632da4c74.png">
<img width="1051" alt="image" src="https://user-images.githubusercontent.com/90178898/189108158-e712191a-5cc5-43fd-9ec8-1466966e014f.png">
<img width="740" alt="image" src="https://user-images.githubusercontent.com/90178898/189108429-2ed32a83-c14a-4c2d-b37d-72a8f54a5b98.png">
<img width="729" alt="image" src="https://user-images.githubusercontent.com/90178898/189108502-8a8bb812-9bb8-4ace-9e3f-d18944df7278.png">
<img width="744" alt="image" src="https://user-images.githubusercontent.com/90178898/189108676-6aed2090-6110-46b6-8000-4e323b8a1497.png">
</details>
|
1.0
|
[Fleet] Agent activity flyout - UI changes coming out of https://github.com/elastic/fleet-server/issues/1660
Given the changes to bulk actions, the execution of large agent batches is going to be async, so users are not immediately notified of the action outcome.
In order to improve the UX, there is going to be a new Flyout added on Agent list, to show the progress of the user actions.
[Figma designs](https://www.figma.com/file/3HvuewyuiOMmiPRVUkcltT/Fleet-scalability?node-id=964%3A306001)
Minimum goals:
- Agent activity flyout, should read the data from the new `/action_status` endpoint added in https://github.com/elastic/kibana/pull/138870
- Show actions with agent count, version, scheduled time (for upgrades), time of completion, status with green, red, grey color
- Show in progress actions at the top
- `Abort upgrade` button for Upgrade action (move existing Upgrade callout functionality)
- Finished actions should show up in descending time order
- Show last 10 actions by default, grouped by days
- First action taken, show a guidance tour above `Agent activity` button
Stretch goals:
- `View agents` button that navigates to a filtered list of agents included in the selected action
- `Change schedule` button for Upgrade action
- `Review error log` button that navigates to Discover app to show relevant error logs
- `Show more` button that loads more activity (10 more actions?)
- `Jump to...` button that displays a datepicker to load activity of a selected day.
- `Review errors` button in agent list
- Include Agent policy update in Agent activity
<details>
<summary>Screenshots</summary>
<img width="1156" alt="image" src="https://user-images.githubusercontent.com/90178898/189106630-fa2beb0d-2f27-4bfb-a498-5864aa83581c.png">
<img width="486" alt="image" src="https://user-images.githubusercontent.com/90178898/189106696-4af126e2-e0e6-4d3b-a13e-95d8f7a9ae80.png">
<img width="1081" alt="image" src="https://user-images.githubusercontent.com/90178898/189108023-e281d955-2b2e-4971-8afc-edf632da4c74.png">
<img width="1051" alt="image" src="https://user-images.githubusercontent.com/90178898/189108158-e712191a-5cc5-43fd-9ec8-1466966e014f.png">
<img width="740" alt="image" src="https://user-images.githubusercontent.com/90178898/189108429-2ed32a83-c14a-4c2d-b37d-72a8f54a5b98.png">
<img width="729" alt="image" src="https://user-images.githubusercontent.com/90178898/189108502-8a8bb812-9bb8-4ace-9e3f-d18944df7278.png">
<img width="744" alt="image" src="https://user-images.githubusercontent.com/90178898/189108676-6aed2090-6110-46b6-8000-4e323b8a1497.png">
</details>
|
test
|
agent activity flyout ui changes coming out of given the changes of bulk actions the execution of large agent batches are going to be async so the users are not immediately notified of the action outcome in order to improve the ux there is going to be a new flyout added on agent list to show the progress of the user actions minimum goals agent activity flyout should read the data from the new action status endpoint added in show actions with agent count version scheduled time for upgrades time of completion status with green red grey color show in progress actions at the top abort upgrade button for upgrade action move existing upgrade callout functionality finished actions should show up in descending time order show last actions by default grouped by days first action taken show a guidance tour above agent activity button stretch goals view agents button that navigates to a filtered list of agents included in the selected action change schedule button for upgrade action review error log button that navigates to discover app to show relevant error logs show more button that loads more activity more actions jump to button that displays a datepicker to load activity of a selected day review errors button in agent list include agent policy update in agent activity screenshots img width alt image src img width alt image src img width alt image src img width alt image src img width alt image src img width alt image src img width alt image src
| 1
|
170,937
| 13,210,037,674
|
IssuesEvent
|
2020-08-15 14:48:49
|
dunossauro/todo_list_flask_brython
|
https://api.github.com/repos/dunossauro/todo_list_flask_brython
|
closed
|
Response time of tests that register TODOs
|
enhancement testing
|
In the steps that register tasks:
```feature
Quando registrar as tarefas
| nome | descrição | urgente |
| Liga para Beto | Telefone +15 51515151 | False |
| ir no mercado | Promoção no mercado x | False |
```
The test runs too fast; in some cases the form POST has not yet been made and the data has not yet been cleared. This leaves the values in the database inconsistent, and the assertion is evaluated incorrectly because the record was not inserted as expected.

|
1.0
|
Response time of tests that register TODOs - In the steps that register tasks:
```feature
Quando registrar as tarefas
| nome | descrição | urgente |
| Liga para Beto | Telefone +15 51515151 | False |
| ir no mercado | Promoção no mercado x | False |
```
The test runs too fast; in some cases the form POST has not yet been made and the data has not yet been cleared. This leaves the values in the database inconsistent, and the assertion is evaluated incorrectly because the record was not inserted as expected.

|
test
|
response time of tests that register todos in the steps that register tasks feature quando registrar as tarefas nome descrição urgente liga para beto telefone false ir no mercado promoção no mercado x false the test runs too fast in some cases the form post has not yet been made and the data has not yet been cleared this leaves the values in the database inconsistent and the assertion is evaluated incorrectly because the record was not inserted as expected
| 1
|
312,016
| 26,831,763,873
|
IssuesEvent
|
2023-02-02 16:28:22
|
dotnetcore/BootstrapBlazor
|
https://api.github.com/repos/dotnetcore/BootstrapBlazor
|
closed
|
test: add isPopover unit test for MultiSelect
|
test
|
### Which class is this unit test associated with?
MultiSelect
|
1.0
|
test: add isPopover unit test for MultiSelect - ### Which class is this unit test associated with?
MultiSelect
|
test
|
test add ispopover unit test for multiselect which class is this unit test associated with multiselect
| 1
|
1,723
| 2,978,718,756
|
IssuesEvent
|
2015-07-16 08:43:16
|
tyrasd/overpass-turbo
|
https://api.github.com/repos/tyrasd/overpass-turbo
|
opened
|
Don't show error message when LocalStorage is not available
|
browser-specific enhancement usability
|
When localstorage is not available, overpass turbo could grey out the save button and continue to work with the rest of its functionality. (What to do with settings, then?)
|
True
|
Don't show error message when LocalStorage is not available - When localstorage is not available, overpass turbo could grey out the save button and continue to work with the rest of its functionality. (What to do with settings, then?)
|
non_test
|
don t show error message when localstorage is not available when localstorage is not available overpass turbo could grey out the save button and continue to work with the rest of its functionality what to do with settings then
| 0
|
219,926
| 7,348,119,580
|
IssuesEvent
|
2018-03-08 04:28:17
|
intel-analytics/BigDL
|
https://api.github.com/repos/intel-analytics/BigDL
|
closed
|
add items to FAQ and Trouble Shooting
|
document low priority
|
1. put this at a higher level of headings. This is now buried too deep in the docs https://bigdl-project.github.io/0.4.0/#PythonUserGuide/run-without-pip/#faq.
1. add that TypeError: 'JavaPackage' object is not callable may also be caused by a mismatch between the self-built BigDL version and the Spark version and, if any additional lib is used, the version of the lib it compiles against.
1. also a new error: "Py4JError: com.intel.analytics.bigdl.python.api.PythonBigDLKeras.ofFloat does not exist in the JVM" could be caused by a version mismatch too.
|
1.0
|
add items to FAQ and Trouble Shooting - 1. put this at a higher level of headings. This is now buried too deep in the docs https://bigdl-project.github.io/0.4.0/#PythonUserGuide/run-without-pip/#faq.
1. add that TypeError: 'JavaPackage' object is not callable may also be caused by a mismatch between the self-built BigDL version and the Spark version and, if any additional lib is used, the version of the lib it compiles against.
1. also a new error: "Py4JError: com.intel.analytics.bigdl.python.api.PythonBigDLKeras.ofFloat does not exist in the JVM" could be caused by a version mismatch too.
|
non_test
|
add items to faq and trouble shooting put this to higher level of headings this is now buried too deep in docs add typeerror javapackage object is not callable may also be caused by mismatching bigdl self built version and spark version and if any additional lib the version of the lib it compiles against also a new error com intel analytics bigdl python api pythonbigdlkeras offloat does not exist in the jvm could be caused by version mismatch too
| 0
|
27,692
| 4,326,141,434
|
IssuesEvent
|
2016-07-26 04:14:37
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
Protractor errors when bootstrapping the @angular/upgrade demo app
|
comp: core/testbed
|
With the latest protractor and selenium-webdriver I get:
```
Failed: Cannot assign to read only property 'stack' of Error while waiting for Protractor to sync with the page: "window.getAllAngularTestabilities is not a function"
```
|
1.0
|
Protractor errors when bootstrapping the @angular/upgrade demo app - With the latest protractor and selenium-webdriver I get:
```
Failed: Cannot assign to read only property 'stack' of Error while waiting for Protractor to sync with the page: "window.getAllAngularTestabilities is not a function"
```
|
test
|
protractor errors when bootstrapping the angular upgrade demo app with the latest protractor and selenium webdriver i get failed cannot assign to read only property stack of error while waiting for protractor to sync with the page window getallangulartestabilities is not a function
| 1
|
319,835
| 27,401,147,591
|
IssuesEvent
|
2023-03-01 00:51:13
|
MicrosoftDocs/windows-driver-docs
|
https://api.github.com/repos/MicrosoftDocs/windows-driver-docs
|
closed
|
More detail for running/finding this tool
|
Pri2 windows-hardware/prod devtest/tech
|
It would be helpful if more context was provided on running this command (and other WDK commands). The command is not on the path unless you are running it from the Visual Studio developer command prompt. Additionally, why not just list the path to the command?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: a022532b-d70d-ac23-d31a-a13313714465
* Version Independent ID: bd640550-ec35-0656-1e5a-8264e84d31fb
* Content: [ComputerHardwareIds Overview - Windows drivers](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/computerhardwareids-overview)
* Content Source: [windows-driver-docs-pr/devtest/computerhardwareids-overview.md](https://github.com/MicrosoftDocs/windows-driver-docs/blob/staging/windows-driver-docs-pr/devtest/computerhardwareids-overview.md)
* Product: **windows-hardware**
* Technology: **devtest**
* GitHub Login: @DOMARS
* Microsoft Alias: **domars**
|
1.0
|
More detail for running/finding this tool -
It would be helpful if more context was provided on running this command (and other WDK commands). The command is not on the path unless you are running it from the Visual Studio developer command prompt. Additionally, why not just list the path to the command?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: a022532b-d70d-ac23-d31a-a13313714465
* Version Independent ID: bd640550-ec35-0656-1e5a-8264e84d31fb
* Content: [ComputerHardwareIds Overview - Windows drivers](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/computerhardwareids-overview)
* Content Source: [windows-driver-docs-pr/devtest/computerhardwareids-overview.md](https://github.com/MicrosoftDocs/windows-driver-docs/blob/staging/windows-driver-docs-pr/devtest/computerhardwareids-overview.md)
* Product: **windows-hardware**
* Technology: **devtest**
* GitHub Login: @DOMARS
* Microsoft Alias: **domars**
|
test
|
more detail for running finding this tool it would be helpful if more context was provided on running this command and other wdk commands the command is not on the path unless you are running it from the visual studio developer command prompt additionally why not just list the path to the command document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source product windows hardware technology devtest github login domars microsoft alias domars
| 1
|