Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21,385 | 29,202,230,368 | IssuesEvent | 2023-05-21 00:37:18 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Hybrid / ] Test Analyst (Hybrid - Belo Horizonte) at Coodesh | SALVADOR TESTE REQUISITOS CYPRESS PROCESSOS INOVAÇÃO GITHUB CI UMA QUALIDADE TESTES DE SOFTWARE METODOLOGIAS ÁGEIS HIBRIDO AUTOMAÇÃO DE TESTES TESTES MANUAIS ALOCADO Stale | ## Job description:
This is a job opening from a partner of the Coodesh platform; when you apply, you will get access to the complete information about the company and its benefits.
Watch out for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-124221813?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Prime Results </strong>is looking for a <strong>Test Analyst</strong> to join its team!</p>
<p>We believe in the power of social transformation achieved by companies. We believe in the transformative power of people, combined with management and technology. We share our knowledge to solve complex problems and generate value for our clients.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop and run exploratory and automated tests to help us ensure the quality of our products;</li>
<li>Monitor, prioritize, and plan software and hardware quality testing activities, in addition to being a reference for the development team, helping to improve its processes and deliveries;</li>
<li>Execution of manual tests;</li>
<li>Root-cause analysis of identified failures;</li>
<li>Recording of evidence;</li>
<li>System reports and bug management;</li>
<li>Creation and maintenance of documentation;</li>
<li>Monitoring and validation of deploys;</li>
<li>Promotion of continuous improvements in the software analysis and testing process.</li>
</ul>
<p></p>
## Prime Results:
<p>Best-selling author Simon Sinek says that most companies know what they do, but do not know why they do it. That is not our case. Prime Results is a company specialized in organizational management that applies its transformation potential to companies that generate a positive impact on society. Today, our clients make a difference in the lives of more than 250,000 Brazilians in the areas of asset protection, health, and 24-hour assistance. </p>
<p>Our central goal is to create a creative, dynamic, and engaged environment, always combined with methods, smart processes, and plenty of innovation.</p><a href='https://coodesh.com/empresas/prime-results'>See more on the website</a>
## Skills:
- Cypress
- API
- Test Automation
## Location:
undefined
## Requirements:
- Currently enrolled in or having completed a degree in Information Systems, Computer Science, or a related field;
- Basic knowledge of agile methodologies;
- Knowledge of the techniques and execution of manual and functional tests;
- Knowledge of test criteria, strategies, procedures, and requirements;
- Ability to write bug reports;
- Knowledge of Cypress.
## Nice to have:
- Knowledge of API testing (POST, GET, DELETE, and others);
- Knowledge of automation tools (Cypress).
## Benefits:
- Meal voucher - 25.00 per day worked (Flash card);
- Transportation voucher or fuel allowance;
- Health insurance after the probation period;
- Access to Clube Certo - a benefits club;
- Gympass;
- Partnerships with educational institutions (undergraduate and graduate courses).
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Test Analyst (Hybrid - Belo Horizonte) at Prime Results](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-124221813?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the position you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Allocated
#### Regime
CLT
#### Category
Testing/Q.A | 1.0 | [Hybrid / ] Test Analyst (Hybrid - Belo Horizonte) at Coodesh - ## Job description:
This is a job opening from a partner of the Coodesh platform; when you apply, you will get access to the complete information about the company and its benefits.
Watch out for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-124221813?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Prime Results </strong>is looking for a <strong>Test Analyst</strong> to join its team!</p>
<p>We believe in the power of social transformation achieved by companies. We believe in the transformative power of people, combined with management and technology. We share our knowledge to solve complex problems and generate value for our clients.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop and run exploratory and automated tests to help us ensure the quality of our products;</li>
<li>Monitor, prioritize, and plan software and hardware quality testing activities, in addition to being a reference for the development team, helping to improve its processes and deliveries;</li>
<li>Execution of manual tests;</li>
<li>Root-cause analysis of identified failures;</li>
<li>Recording of evidence;</li>
<li>System reports and bug management;</li>
<li>Creation and maintenance of documentation;</li>
<li>Monitoring and validation of deploys;</li>
<li>Promotion of continuous improvements in the software analysis and testing process.</li>
</ul>
<p></p>
## Prime Results:
<p>Best-selling author Simon Sinek says that most companies know what they do, but do not know why they do it. That is not our case. Prime Results is a company specialized in organizational management that applies its transformation potential to companies that generate a positive impact on society. Today, our clients make a difference in the lives of more than 250,000 Brazilians in the areas of asset protection, health, and 24-hour assistance. </p>
<p>Our central goal is to create a creative, dynamic, and engaged environment, always combined with methods, smart processes, and plenty of innovation.</p><a href='https://coodesh.com/empresas/prime-results'>See more on the website</a>
## Skills:
- Cypress
- API
- Test Automation
## Location:
undefined
## Requirements:
- Currently enrolled in or having completed a degree in Information Systems, Computer Science, or a related field;
- Basic knowledge of agile methodologies;
- Knowledge of the techniques and execution of manual and functional tests;
- Knowledge of test criteria, strategies, procedures, and requirements;
- Ability to write bug reports;
- Knowledge of Cypress.
## Nice to have:
- Knowledge of API testing (POST, GET, DELETE, and others);
- Knowledge of automation tools (Cypress).
## Benefits:
- Meal voucher - 25.00 per day worked (Flash card);
- Transportation voucher or fuel allowance;
- Health insurance after the probation period;
- Access to Clube Certo - a benefits club;
- Gympass;
- Partnerships with educational institutions (undergraduate and graduate courses).
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Test Analyst (Hybrid - Belo Horizonte) at Prime Results](https://coodesh.com/vagas/test-analyst-hibrido-belo-horizonte-124221813?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the position you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Allocated
#### Regime
CLT
#### Category
Testing/Q.A | process | test analyst hybrid belo horizonte at coodesh job description this is a job opening from a partner of the coodesh platform when you apply you will get access to the complete information about the company and its benefits watch out for the redirect that will take you to a url with the personalized application pop up 👋 prime results is looking for a test analyst to join its team we believe in the power of social transformation achieved by companies we believe in the transformative power of people combined with management and technology we share our knowledge to solve complex problems and generate value for our clients responsibilities develop and run exploratory and automated tests to help us ensure the quality of our products monitor prioritize and plan software and hardware quality testing activities in addition to being a reference for the development team helping to improve its processes and deliveries execution of manual tests root cause analysis of identified failures recording of evidence system reports and bug management creation and maintenance of documentation monitoring and validation of deploys promotion of continuous improvements in the software analysis and testing process prime results best selling author simon sinek says that most companies know what they do but do not know why they do it that is not our case prime results is a company specialized in organizational management that applies its transformation potential to companies that generate a positive impact on society today our clients make a difference in the lives of more than brazilians in the areas of asset protection health and hour assistance nbsp our central goal is to create a creative dynamic and engaged environment always combined with methods smart processes and plenty of innovation skills cypress api test automation location undefined requirements currently enrolled in or having completed a degree in information systems computer science or a related field basic knowledge of agile methodologies knowledge of the techniques and execution of manual and functional tests knowledge of test criteria strategies procedures and requirements ability to write bug reports knowledge of cypress nice to have knowledge of api testing post get delete and others knowledge of automation tools cypress benefits meal voucher per day worked flash card transportation voucher or fuel allowance health insurance after the probation period access to clube certo a benefits club gympass partnerships with educational institutions undergraduate and graduate courses how to apply apply exclusively through the coodesh platform at the following link after applying via the coodesh platform and validating your login you will be able to follow and receive all interactions of the process there use the pedir feedback request feedback option between one stage and the next of the position you applied to this will notify the recruiter responsible for the process at the company labels allocation allocated regime clt category testing q a | 1 |
543 | 3,004,255,184 | IssuesEvent | 2015-07-25 19:01:27 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | On the central backend, add the "oSignData" field to the Document entity and rework the services that work with it | hi priority In process of testing test | 1) Add a mandatory string field oSignData to the Document entity.
If the following task is NOT yet implemented: https://github.com/e-government-ua/i/issues/588, then:
2) When the setDocumentFile service is used, save the value "{}" to this field
3) When the setDocument service is used, save the value "{}" to this field
| 1.0 | On the central backend, add the "oSignData" field to the Document entity and rework the services that work with it - 1) Add a mandatory string field oSignData to the Document entity.
If the following task is NOT yet implemented: https://github.com/e-government-ua/i/issues/588, then:
2) When the setDocumentFile service is used, save the value "{}" to this field
3) When the setDocument service is used, save the value "{}" to this field
| process | on the central backend add the osigndata field to the document entity and rework the services that work with it add a mandatory string field osigndata to the document entity if the following task is not yet implemented then when the setdocumentfile service is used save the value to this field when the setdocument service is used save the value to this field | 1 |
12,782 | 15,164,805,116 | IssuesEvent | 2021-02-12 14:14:55 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | [Native Types] Ensure that fields of unsupported types aren't dropped | kind/feature process/candidate team/client team/migrations topic: introspection topic: migrate topic: native database types | ## Problem
It is difficult to maintain the schema and use the client for applications that depend on types unsupported by Prisma.
More specifically, whenever the database is introspected, columns of unsupported types are commented out. In these situations, using Migrate to update the database schema again later will trigger the deletion of the column, which isn't wanted.
## Suggested solution
Have a way to keep columns of unsupported types around in the database, even if they are not usable within the Prisma client.
Possibilities:
```
model Bla {
myField Unsupported("MACADDR")
}
// Or
model Bla {
myField Unsupported @Unsupported("MACADDR")
}
```
## Additional context
Using `$queryRaw` on unsupported field types may trigger errors depending on the type, as the Rust engine may not be able to deserialize it properly. I assume that going down that path would probably mean looking into supporting these field types, as I suspect that could be equivalent to actually supporting them. | 1.0 | [Native Types] Ensure that fields of unsupported types aren't dropped - ## Problem
It is difficult to maintain the schema and use the client for applications that depend on types unsupported by Prisma.
More specifically, whenever the database is introspected, columns of unsupported types are commented out. In these situations, using Migrate to update the database schema again later will trigger the deletion of the column, which isn't wanted.
## Suggested solution
Have a way to keep columns of unsupported types around in the database, even if they are not usable within the Prisma client.
Possibilities:
```
model Bla {
myField Unsupported("MACADDR")
}
// Or
model Bla {
myField Unsupported @Unsupported("MACADDR")
}
```
## Additional context
Using `$queryRaw` on unsupported field types may trigger errors depending on the type, as the Rust engine may not be able to deserialize it properly. I assume that going down that path would probably mean looking into supporting these field types, as I suspect that could be equivalent to actually supporting them. | process | ensure that fields of unsupported types aren t dropped problem it is difficult to maintain the schema and using the client for applications dependent on types that are unsupported by prisma more specifically whenever introspecting the database columns of unsupported types are commented out in these situations using migrate to update the database schema again later will trigger the deletion of the column which isn t wanted suggested solution have a way to keep columns of unsupported types around in the database even if not having these usable within the prisma client possibilities model bla myfield unsupported macaddr or model bla myfield unsupported unsupported macaddr additional context using queryraw on unsupported field types may trigger errors depending on the type as the rust engine may not be able to deserialize it properly i assume that going into that would probably mean to look into supporting these field types as i suspect that could be equivalent to actually supporting them | 1 |
8,792 | 11,908,187,285 | IssuesEvent | 2020-03-31 00:13:31 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | [macOS] When attempting to merge two shape files Processing dialog goes behind the main QGIS window | Bug MacOS Processing | I've made a video of my screen:
https://www.dropbox.com/s/t0zzi10xvcb9i5h/merge%20in%20qgis%20not%20working.mov?dl=0
Basically, once I choose two vector layers to merge from **Vector -> Data Management Tools -> Merge Vector Layers**, the dialogue window shuts and there is no way to complete the process. There are no errors or reports in the console either, so it appears the tool simply shuts down, but the program never crashes. | 1.0 | [macOS] When attempting to merge two shape files Processing dialog goes behind the main QGIS window - I've made a video of my screen:
https://www.dropbox.com/s/t0zzi10xvcb9i5h/merge%20in%20qgis%20not%20working.mov?dl=0
Basically, once I choose two vector layers to merge from **Vector -> Data Management Tools -> Merge Vector Layers**, the dialogue window shuts and there is no way to complete the process. There are no errors or reports in the console either, so it appears the tool simply shuts down, but the program never crashes. | process | when attempting to merge two shape files processing dialog goes behind the main qgis window i ve made a video of my screen basically once i choose two vector layers to merge from vector data management tools merge vector layers the dialogue window shuts and there is no way to complete the process there are no errors or reports in the console either so it appears the tool simply shuts down but the program never crashes | 0 |
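As a side note on this row: until the dialog problem is resolved, the same merge can be driven from the QGIS Python console, bypassing the Processing dialog entirely. A minimal sketch, assuming QGIS 3.x and two hypothetical layer names (`roads_a`, `roads_b`) already loaded in the project:
```python
# Run inside the QGIS 3.x Python console; avoids the Processing dialog.
import processing
from qgis.core import QgsProject

# Hypothetical layer names; substitute the two shapefile layers to merge.
layers = [
    QgsProject.instance().mapLayersByName('roads_a')[0],
    QgsProject.instance().mapLayersByName('roads_b')[0],
]

# 'native:mergevectorlayers' is the algorithm behind the menu entry.
result = processing.run('native:mergevectorlayers', {
    'LAYERS': layers,
    'CRS': layers[0].crs(),
    'OUTPUT': 'memory:merged',
})

# Add the merged in-memory layer to the current project.
QgsProject.instance().addMapLayer(result['OUTPUT'])
```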
245,533 | 20,775,601,501 | IssuesEvent | 2022-03-16 10:11:01 | bats-core/bats-core | https://api.github.com/repos/bats-core/bats-core | closed | ensure unofficial bash script mode test fails when run out of the tree | Type: Bug Priority: High Status: Unconfirmend Component: Self Test Suite Waiting for Contributor Feedback | I am trying to make the source tree's tests pass out of place, running them against an installed bats binary.
This is the failing test from the bats.bats file.
```
@test 'ensure compatibility with unofficial Bash strict mode' {
local expected='ok 1 unofficial Bash strict mode conditions met'
# Run Bats under `set -u` to catch as many unset variable accesses as
# possible.
run bash -u "${BATS_TEST_DIRNAME%/*}/bin/bats" \
"$FIXTURE_ROOT/unofficial_bash_strict_mode.bats"
if [[ "$status" -ne 0 || "${lines[1]}" != "$expected" ]]; then
cat <<END_OF_ERR_MSG
...blah blah blah
END_OF_ERR_MSG
emit_debug_output && return 1
fi
}
```
However, it is failing because there is no "${BATS_TEST_DIRNAME%/*}/bin/bats", since I copied the tests out of the tree to a different location.
I want to test that the installed (/usr/bin/bats) bats is working!
The test tries to run bats with
```
run -u "${BATS_TEST_DIRNAME%/*}/bin/bats" \
"$FIXTURE/unofficial_bash_script_mode.bats"
```
Is there any reason why I cannot just use bats, like most other tests in bats.bats,
without the relative path through the test's dirname?
```
run bash -u "$(command -v bats)" \
    "$FIXTURE_ROOT/unofficial_bash_strict_mode.bats"
```
I have changed it and the test passes in both locations, in the source tree and in the isolated tree. | 1.0 | ensure unofficial bash script mode test fails when run out of the tree - I am trying to make the source tree's tests pass out of place, running them against an installed bats binary.
This is the failing test from the bats.bats file.
```
@test 'ensure compatibility with unofficial Bash strict mode' {
local expected='ok 1 unofficial Bash strict mode conditions met'
# Run Bats under `set -u` to catch as many unset variable accesses as
# possible.
run bash -u "${BATS_TEST_DIRNAME%/*}/bin/bats" \
"$FIXTURE_ROOT/unofficial_bash_strict_mode.bats"
if [[ "$status" -ne 0 || "${lines[1]}" != "$expected" ]]; then
cat <<END_OF_ERR_MSG
...blah blah blah
END_OF_ERR_MSG
emit_debug_output && return 1
fi
}
```
However, it is failing because there is no "${BATS_TEST_DIRNAME%/*}/bin/bats", since I copied the tests out of the tree to a different location.
I want to test that the installed (/usr/bin/bats) bats is working!
The test tries to run bats with
```
run -u "${BATS_TEST_DIRNAME%/*}/bin/bats" \
"$FIXTURE/unofficial_bash_script_mode.bats"
```
Is there any reason why I cannot just use bats, like most other tests in bats.bats,
without the relative path through the test's dirname?
```
run bash -u "$(command -v bats)" \
    "$FIXTURE_ROOT/unofficial_bash_strict_mode.bats"
```
I have changed it and the test passes in both locations, in the source tree and in the isolated tree. | non_process | ensure unofficial bash script mode test fails when run out of the tree i am trying to pass the source tree tests out of place running an installed bats binary this is the failing test from the bats bats file test ensure compatibility with unofficial bash strict mode local expected ok unofficial bash strict mode conditions met run bats under set u to catch as many unset variable accesses as possible run bash u bats test dirname bin bats fixture root unofficial bash strict mode bats if expected then cat end of err msg blah blah blah end of err msg emit debug output return fi however it is failing because there is no bats test dirname bin bats since i copied the tests out of the tree to a different location i want to test that the installed usr bin bats bats is working the test tries to run bats with run u bats test dirname bin bats fixture unofficial bash script mode bats is there any reason why i can not just use bats like most other tests at bats bats without the relative path trough the tests dirname run u bats fixture unofficial bash script mode bats i have changed it and the test passes in both locations in the source tree and in the isolated tree | 0 |
65,478 | 19,536,596,890 | IssuesEvent | 2021-12-31 08:41:14 | GoldenSoftwareLtd/gedemin | https://api.github.com/repos/GoldenSoftwareLtd/gedemin | closed | Error when printing waybills | Priority-Low Type-Defect Depot | Originally reported on Google Code with ID 1541
```
The organization's UNP is stored in a text field, but if text is entered
there, then when printing waybills (in the warehouse) an ‘is a not valid
floating point value’ error occurs
```
Reported by `danilchyk` on 2009-09-02 12:47:40
| 1.0 | Error when printing waybills - Originally reported on Google Code with ID 1541
```
The organization's UNP is stored in a text field, but if text is entered
there, then when printing waybills (in the warehouse) an ‘is a not valid
floating point value’ error occurs
```
Reported by `danilchyk` on 2009-09-02 12:47:40
| non_process | error when printing waybills originally reported on google code with id the organization s unp is stored in a text field but if text is entered there then when printing waybills in the warehouse an ‘is a not valid floating point value’ error occurs reported by danilchyk on | 0 |
20,513 | 27,170,972,374 | IssuesEvent | 2023-02-17 19:23:29 | darkside-princeton/sipm-analysis | https://api.github.com/repos/darkside-princeton/sipm-analysis | closed | Pulse processing for scintillation data | pre-processing | Reproduce the tasks done with `script/root_scintillation.py` using the new framework.
1. Have one script under `sipm/exe/` that handles pulse information without pulse shape analysis. Add necessary methods in the classes under `sipm/recon/`.
2. Include more information to analyze and save, including baseline, charge with different time windows, Fprompt, and total PE.
3. Calibration result file needs to be specified.
4. Modify `spectrum.ipynb` to work with h5 files instead of root files. | 1.0 | Pulse processing for scintillation data - Reproduce the tasks done with `script/root_scintillation.py` using the new framework.
1. Have one script under `sipm/exe/` that handles pulse information without pulse shape analysis. Add necessary methods in the classes under `sipm/recon/`.
2. Include more information to analyze and save, including baseline, charge with different time windows, Fprompt, and total PE.
3. Calibration result file needs to be specified.
4. Modify `spectrum.ipynb` to work with h5 files instead of root files. | process | pulse processing for scintillation data reproduce the tasks done with script root scintillation py using the new framework have one script under sipm exe that handles pulse information without pulse shape analysis add necessary methods in the classes under sipm recon include more information to analyze and save including baseline charge with different time windows fprompt and total pe calibration result file needs to be specified modify spectrum ipynb to work with files instead of root files | 1 |
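For item 4 in the row above (moving `spectrum.ipynb` from ROOT files to h5), the reading side might look like the sketch below, using `h5py`. The file name and dataset keys (`charge`, `fprompt`) are hypothetical; the real layout depends on what the script under `sipm/exe/` writes out.
```python
import h5py
import numpy as np

# Hypothetical file name and dataset keys; adjust to the actual h5 layout.
with h5py.File("scintillation_run.h5", "r") as f:
    charge = f["charge"][:]    # integrated charge per event
    fprompt = f["fprompt"][:]  # prompt fraction (Fprompt) per event

# Example downstream use in the notebook: a charge spectrum histogram.
counts, edges = np.histogram(charge, bins=200)
print(f"{len(charge)} events, mean Fprompt = {fprompt.mean():.3f}")
```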
8,651 | 11,790,603,910 | IssuesEvent | 2020-03-17 19:17:59 | metabase/metabase | https://api.github.com/repos/metabase/metabase | opened | H2 losing column type info for date_trunc results | .Backend .Correctness Database/H2 Priority:P2 Querying/Processor Type:Bug | The query `SELECT DATE_TRUNC('day', CREATED_AT) FROM ORDERS` fails against the test database on **H2**, on current `master` (37dfebf), with a `NullPointerException`.
The NPE is caused by [this call to `getColumnClassName`](https://github.com/metabase/metabase/blob/37dfebf0589f82da4aefafabaa5da1559b81fd68/src/metabase/driver/h2.clj#L300) returning `nil`.
The patch below handles the `nil` value, but the root problem persists: it appears that the result set metadata has no type information for this column.
```diff
diff --git a/src/metabase/driver/h2.clj b/src/metabase/driver/h2.clj
index 8ee61b68d..d980ff001 100644
--- a/src/metabase/driver/h2.clj
+++ b/src/metabase/driver/h2.clj
@@ -297,12 +297,13 @@
;; de-CLOB any CLOB values that come back
(defmethod sql-jdbc.execute/read-column-thunk :h2
[_ ^ResultSet rs ^ResultSetMetaData rsmeta ^Integer i]
- (let [classname (Class/forName (.getColumnClassName rsmeta i) true (classloader/the-classloader))]
- (if (isa? classname Clob)
+ (if (some-> (.getColumnClassName rsmeta i)
+ (Class/forName true (classloader/the-classloader))
+ (isa? Clob))
(fn []
(jdbc-protocols/clob->str (.getObject rs i)))
(fn []
- (.getObject rs i)))))
+ (.getObject rs i))))
(defmethod sql-jdbc.execute/set-parameter [:h2 OffsetTime]
[driver prepared-statement i t]
```
The result is that the column has a base type of `types/*` in the API response, and the row values are interpreted as strings instead of timestamps.
_This is a **regression** from 0.34.3._
## Scope
This bug does _not_ manifest for the same query without the `DATE_TRUNC` function call. The same query also works as expected on PostgreSQL.
I did not test other H2 functions for similar issues.
## Preliminary debugging
Here are some concrete values from `rsmeta` that I found during debugging:
* `(.getObject rs i)`: `#inst "2019-02-11T00:00:00.000000000-00:00"` (type `java.sql.Timestamp`)
* `(.getColumnName rsmeta i)`: `"DATE_TRUNC('day', CREATED_AT)"`
* `(.getColumnType rsmeta i)`: `0`
* `(.getColumnTypeName rsmeta i)`: `"NULL"`
The affected `sql-jdbc.execute/read-column-thunk` method seems to be part of the new async query processor for 0.35.0, but I couldn't find any obvious reason why the result metadata would be affected in this way. | 1.0 | H2 losing column type info for date_trunc results - The query `SELECT DATE_TRUNC('day', CREATED_AT) FROM ORDERS` fails against the test database on **H2**, on current `master` (37dfebf), with a `NullPointerException`.
The NPE is caused by [this call to `getColumnClassName`](https://github.com/metabase/metabase/blob/37dfebf0589f82da4aefafabaa5da1559b81fd68/src/metabase/driver/h2.clj#L300) returning `nil`.
The patch below handles the `nil` value, but the root problem persists: it appears that the result set metadata has no type information for this column.
```diff
diff --git a/src/metabase/driver/h2.clj b/src/metabase/driver/h2.clj
index 8ee61b68d..d980ff001 100644
--- a/src/metabase/driver/h2.clj
+++ b/src/metabase/driver/h2.clj
@@ -297,12 +297,13 @@
;; de-CLOB any CLOB values that come back
(defmethod sql-jdbc.execute/read-column-thunk :h2
[_ ^ResultSet rs ^ResultSetMetaData rsmeta ^Integer i]
- (let [classname (Class/forName (.getColumnClassName rsmeta i) true (classloader/the-classloader))]
- (if (isa? classname Clob)
+ (if (some-> (.getColumnClassName rsmeta i)
+ (Class/forName true (classloader/the-classloader))
+ (isa? Clob))
(fn []
(jdbc-protocols/clob->str (.getObject rs i)))
(fn []
- (.getObject rs i)))))
+ (.getObject rs i))))
(defmethod sql-jdbc.execute/set-parameter [:h2 OffsetTime]
[driver prepared-statement i t]
```
The result is that the column has a base type of `types/*` in the API response, and the row values are interpreted as strings instead of timestamps.
_This is a **regression** from 0.34.3._
## Scope
This bug does _not_ manifest for the same query without the `DATE_TRUNC` function call. The same query also works as expected on PostgreSQL.
I did not test other H2 functions for similar issues.
## Preliminary debugging
Here are some concrete values from `rsmeta` that I found during debugging:
* `(.getObject rs i)`: `#inst "2019-02-11T00:00:00.000000000-00:00"` (type `java.sql.Timestamp`)
* `(.getColumnName rsmeta i)`: `"DATE_TRUNC('day', CREATED_AT)"`
* `(.getColumnType rsmeta i)`: `0`
* `(.getColumnTypeName rsmeta i)`: `"NULL"`
The affected `sql-jdbc.execute/read-column-thunk` method seems to be part of the new async query processor for 0.35.0, but I couldn't find any obvious reason why the result metadata would be affected in this way. | process | losing column type info for date trunc results the query select date trunc day created at from orders fails against the test database on on current master with a nullpointerexception the npe is caused by returning nil the patch below handles the nil value but the root problem persists it appears that the result set metadata has no type information for this column diff diff git a src metabase driver clj b src metabase driver clj index a src metabase driver clj b src metabase driver clj de clob any clob values that come back defmethod sql jdbc execute read column thunk let if isa classname clob if some getcolumnclassname rsmeta i class forname true classloader the classloader isa clob fn jdbc protocols clob str getobject rs i fn getobject rs i getobject rs i defmethod sql jdbc execute set parameter the result is that the column has a base type of types in the api response and the row values are interpreted as strings instead of timestamps this is a regression from scope this bug does not manifest for the same query without the date trunc function call the same query also works as expected on postgresql i did not test other functions for similar issues preliminary debugging here are some concrete values from rsmeta that i found during debugging getobject rs i inst type java sql timestamp getcolumnname rsmeta i date trunc day created at getcolumntype rsmeta i getcolumntypename rsmeta i null the affected sql jdbc execute read column hunk method seems to be part of the new async query processor for but i couldn t find any obvious reason why the result metadata would be affected in this way | 1 |
436,664 | 12,551,310,964 | IssuesEvent | 2020-06-06 14:19:39 | googleapis/nodejs-monitoring-dashboards | https://api.github.com/repos/googleapis/nodejs-monitoring-dashboards | closed | Synthesis failed for nodejs-monitoring-dashboards | api: monitoring autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate nodejs-monitoring-dashboards. :broken_heart:
Here's the output from running `synth.py`:
```
b'googleapis.\n2020-06-05 04:34:38,854 synthtool [DEBUG] > Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\nDEBUG:synthtool:Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\n2020-06-05 04:34:38,858 synthtool [DEBUG] > Pulling Docker image: gapic-generator-typescript:latest\nDEBUG:synthtool:Pulling Docker image: gapic-generator-typescript:latest\nlatest: Pulling from gapic-images/gapic-generator-typescript\nDigest: sha256:c9bc12024eddcfb94501627ff5b3ea302370995e9a2c9cde6b3317375d7e7b66\nStatus: Image is up to date for gcr.io/gapic-images/gapic-generator-typescript:latest\n2020-06-05 04:34:39,753 synthtool [DEBUG] > Generating code for: google/monitoring/dashboard/v1.\nDEBUG:synthtool:Generating code for: google/monitoring/dashboard/v1.\n2020-06-05 04:34:40,593 synthtool [DEBUG] > Wrote metadata to synth.metadata.\nDEBUG:synthtool:Wrote metadata to synth.metadata.\nTraceback (most recent call last):\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>\n main()\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__\n return self.main(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main\n rv = self.invoke(ctx)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke\n return callback(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main\n spec.loader.exec_module(synth_module) # type: ignore\n File "<frozen importlib._bootstrap_external>", line 678, in exec_module\n File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed\n File "/home/kbuilder/.cache/synthtool/nodejs-monitoring-dashboards/synth.py", line 36, in <module>\n version=version)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 66, in typescript_library\n return self._generate_code(service, version, "typescript", **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 195, in _generate_code\n f"Code generation seemed to succeed, but {output_dir} is empty."\nRuntimeError: Code generation seemed to succeed, but /tmpfs/tmp/tmp72t39dx3 is empty.\n2020-06-05 04:34:40,633 autosynth [ERROR] > Synthesis failed\n2020-06-05 04:34:40,634 autosynth [DEBUG] > Running: git reset --hard HEAD\nHEAD is now at ebadd7c build: update protos.js (#83)\n2020-06-05 04:34:40,639 autosynth [DEBUG] > Running: git checkout autosynth-self\nSwitched to branch \'autosynth-self\'\n2020-06-05 04:34:40,643 autosynth [ERROR] > Command \'[\'/tmpfs/src/github/synthtool/env/bin/python3\', \'-m\', \'synthtool\', \'--metadata\', \'synth.metadata\', \'synth.py\', \'--\']\' returned non-zero exit status 1.\n2020-06-05 04:34:40,797 autosynth [DEBUG] > Running: git checkout ebadd7ca65cfd22520c4a54596f47e4badf486af\nNote: checking out \'ebadd7ca65cfd22520c4a54596f47e4badf486af\'.\n\nYou are in \'detached HEAD\' state. 
You can look around, make experimental\nchanges and commit them, and you can discard any commits you make in this\nstate without impacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may\ndo so (now or later) by using -b with the checkout command again. Example:\n\n git checkout -b <new-branch-name>\n\nHEAD is now at ebadd7c build: update protos.js (#83)\n2020-06-05 04:34:40,803 autosynth [DEBUG] > Running: git checkout d53a5b45c46920932dbe7d0a95e10d8b58933dae\nPrevious HEAD position was be74d3e build: do not fail builds on codecov errors (#528)\nHEAD is now at d53a5b4 docs: improve README (#600)\n2020-06-05 04:34:40,818 autosynth [DEBUG] > Running: git checkout cd804bab06e46dd1a4f16c32155fd3cddb931b52\nPrevious HEAD position was 83816bb3 Add kokoro-specific .bazelrc file with arguments specific only for kokoro environments. This is to fix autosynth builds when it tries building older commits.\nHEAD is now at cd804bab docs: cleaned docs for the Agents service and resource.\n2020-06-05 04:34:40,888 autosynth [DEBUG] > Running: git branch -f autosynth-119\n2020-06-05 04:34:40,893 autosynth [DEBUG] > Running: git checkout autosynth-119\nSwitched to branch \'autosynth-119\'\n2020-06-05 04:34:40,897 autosynth [INFO] > Running synthtool\n2020-06-05 04:34:40,897 autosynth [INFO] > [\'/tmpfs/src/github/synthtool/env/bin/python3\', \'-m\', \'synthtool\', \'--metadata\', \'synth.metadata\', \'synth.py\', \'--\']\n2020-06-05 04:34:40,899 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata synth.metadata synth.py --\n2020-06-05 04:34:41,107 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/nodejs-monitoring-dashboards/synth.py.\nOn branch autosynth-119\nnothing to commit, working tree clean\n2020-06-05 04:34:41,239 synthtool [DEBUG] > Ensuring dependencies.\nDEBUG:synthtool:Ensuring dependencies.\n2020-06-05 04:34:41,244 synthtool [DEBUG] > Cloning googleapis.\nDEBUG:synthtool:Cloning googleapis.\n2020-06-05 04:34:41,245 synthtool [DEBUG] > Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\nDEBUG:synthtool:Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\n2020-06-05 04:34:41,249 synthtool [DEBUG] > Pulling Docker image: gapic-generator-typescript:latest\nDEBUG:synthtool:Pulling Docker image: gapic-generator-typescript:latest\nlatest: Pulling from gapic-images/gapic-generator-typescript\nDigest: sha256:c9bc12024eddcfb94501627ff5b3ea302370995e9a2c9cde6b3317375d7e7b66\nStatus: Image is up to date for gcr.io/gapic-images/gapic-generator-typescript:latest\n2020-06-05 04:34:42,134 synthtool [DEBUG] > Generating code for: google/monitoring/dashboard/v1.\nDEBUG:synthtool:Generating code for: google/monitoring/dashboard/v1.\n2020-06-05 04:34:42,970 synthtool [DEBUG] > Wrote metadata to synth.metadata.\nDEBUG:synthtool:Wrote metadata to synth.metadata.\nTraceback (most recent call last):\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>\n main()\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__\n return self.main(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in 
main\n rv = self.invoke(ctx)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke\n return callback(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main\n spec.loader.exec_module(synth_module) # type: ignore\n File "<frozen importlib._bootstrap_external>", line 678, in exec_module\n File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed\n File "/home/kbuilder/.cache/synthtool/nodejs-monitoring-dashboards/synth.py", line 36, in <module>\n version=version)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 66, in typescript_library\n return self._generate_code(service, version, "typescript", **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 195, in _generate_code\n f"Code generation seemed to succeed, but {output_dir} is empty."\nRuntimeError: Code generation seemed to succeed, but /tmpfs/tmp/tmpuypjgv0_ is empty.\n2020-06-05 04:34:43,013 autosynth [ERROR] > Synthesis failed\n2020-06-05 04:34:43,013 autosynth [DEBUG] > Running: git reset --hard HEAD\nHEAD is now at ebadd7c build: update protos.js (#83)\n2020-06-05 04:34:43,019 autosynth [DEBUG] > Running: git checkout autosynth\nSwitched to branch \'autosynth\'\n2020-06-05 04:34:43,023 autosynth [DEBUG] > Running: git clean -fdx\nRemoving __pycache__/\nTraceback (most recent call last):\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>\n main()\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main\n return _inner_main(temp_dir)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 595, in _inner_main\n commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 371, in synthesize_loop\n synthesize_inner_loop(toolbox, synthesizer)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 381, in synthesize_inner_loop\n synthesizer, len(toolbox.versions) - 1\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 266, in synthesize_version_in_new_branch\n synthesizer.synthesize(synth_log_path, self.environ)\n File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 119, in synthesize\n synth_proc.check_returncode() # Raise an exception.\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode\n self.stderr)\nsubprocess.CalledProcessError: Command \'[\'/tmpfs/src/github/synthtool/env/bin/python3\', \'-m\', \'synthtool\', \'--metadata\', \'synth.metadata\', \'synth.py\', \'--\']\' returned non-zero exit status 1.\n'
```
Google internal developers can see the full log [here](http://sponge/a84fe804-6b84-41a5-b1f2-e6d681e30fa2).
| 1.0 | Synthesis failed for nodejs-monitoring-dashboards - Hello! Autosynth couldn't regenerate nodejs-monitoring-dashboards. :broken_heart:
Here's the output from running `synth.py`:
```
b'googleapis.\n2020-06-05 04:34:38,854 synthtool [DEBUG] > Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\nDEBUG:synthtool:Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\n2020-06-05 04:34:38,858 synthtool [DEBUG] > Pulling Docker image: gapic-generator-typescript:latest\nDEBUG:synthtool:Pulling Docker image: gapic-generator-typescript:latest\nlatest: Pulling from gapic-images/gapic-generator-typescript\nDigest: sha256:c9bc12024eddcfb94501627ff5b3ea302370995e9a2c9cde6b3317375d7e7b66\nStatus: Image is up to date for gcr.io/gapic-images/gapic-generator-typescript:latest\n2020-06-05 04:34:39,753 synthtool [DEBUG] > Generating code for: google/monitoring/dashboard/v1.\nDEBUG:synthtool:Generating code for: google/monitoring/dashboard/v1.\n2020-06-05 04:34:40,593 synthtool [DEBUG] > Wrote metadata to synth.metadata.\nDEBUG:synthtool:Wrote metadata to synth.metadata.\nTraceback (most recent call last):\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>\n main()\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__\n return self.main(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main\n rv = self.invoke(ctx)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke\n return callback(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main\n spec.loader.exec_module(synth_module) # type: ignore\n File "<frozen importlib._bootstrap_external>", line 678, in exec_module\n File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed\n File "/home/kbuilder/.cache/synthtool/nodejs-monitoring-dashboards/synth.py", line 36, in <module>\n version=version)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 66, in typescript_library\n return self._generate_code(service, version, "typescript", **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 195, in _generate_code\n f"Code generation seemed to succeed, but {output_dir} is empty."\nRuntimeError: Code generation seemed to succeed, but /tmpfs/tmp/tmp72t39dx3 is empty.\n2020-06-05 04:34:40,633 autosynth [ERROR] > Synthesis failed\n2020-06-05 04:34:40,634 autosynth [DEBUG] > Running: git reset --hard HEAD\nHEAD is now at ebadd7c build: update protos.js (#83)\n2020-06-05 04:34:40,639 autosynth [DEBUG] > Running: git checkout autosynth-self\nSwitched to branch \'autosynth-self\'\n2020-06-05 04:34:40,643 autosynth [ERROR] > Command \'[\'/tmpfs/src/github/synthtool/env/bin/python3\', \'-m\', \'synthtool\', \'--metadata\', \'synth.metadata\', \'synth.py\', \'--\']\' returned non-zero exit status 1.\n2020-06-05 04:34:40,797 autosynth [DEBUG] > Running: git checkout ebadd7ca65cfd22520c4a54596f47e4badf486af\nNote: checking out \'ebadd7ca65cfd22520c4a54596f47e4badf486af\'.\n\nYou are in \'detached HEAD\' state. 
You can look around, make experimental\nchanges and commit them, and you can discard any commits you make in this\nstate without impacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may\ndo so (now or later) by using -b with the checkout command again. Example:\n\n git checkout -b <new-branch-name>\n\nHEAD is now at ebadd7c build: update protos.js (#83)\n2020-06-05 04:34:40,803 autosynth [DEBUG] > Running: git checkout d53a5b45c46920932dbe7d0a95e10d8b58933dae\nPrevious HEAD position was be74d3e build: do not fail builds on codecov errors (#528)\nHEAD is now at d53a5b4 docs: improve README (#600)\n2020-06-05 04:34:40,818 autosynth [DEBUG] > Running: git checkout cd804bab06e46dd1a4f16c32155fd3cddb931b52\nPrevious HEAD position was 83816bb3 Add kokoro-specific .bazelrc file with arguments specific only for kokoro environments. This is to fix autosynth builds when it tries building older commits.\nHEAD is now at cd804bab docs: cleaned docs for the Agents service and resource.\n2020-06-05 04:34:40,888 autosynth [DEBUG] > Running: git branch -f autosynth-119\n2020-06-05 04:34:40,893 autosynth [DEBUG] > Running: git checkout autosynth-119\nSwitched to branch \'autosynth-119\'\n2020-06-05 04:34:40,897 autosynth [INFO] > Running synthtool\n2020-06-05 04:34:40,897 autosynth [INFO] > [\'/tmpfs/src/github/synthtool/env/bin/python3\', \'-m\', \'synthtool\', \'--metadata\', \'synth.metadata\', \'synth.py\', \'--\']\n2020-06-05 04:34:40,899 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata synth.metadata synth.py --\n2020-06-05 04:34:41,107 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/nodejs-monitoring-dashboards/synth.py.\nOn branch autosynth-119\nnothing to commit, working tree clean\n2020-06-05 04:34:41,239 synthtool [DEBUG] > Ensuring dependencies.\nDEBUG:synthtool:Ensuring dependencies.\n2020-06-05 04:34:41,244 synthtool [DEBUG] > Cloning googleapis.\nDEBUG:synthtool:Cloning googleapis.\n2020-06-05 04:34:41,245 synthtool [DEBUG] > Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\nDEBUG:synthtool:Using precloned repo /home/kbuilder/.cache/synthtool/googleapis\n2020-06-05 04:34:41,249 synthtool [DEBUG] > Pulling Docker image: gapic-generator-typescript:latest\nDEBUG:synthtool:Pulling Docker image: gapic-generator-typescript:latest\nlatest: Pulling from gapic-images/gapic-generator-typescript\nDigest: sha256:c9bc12024eddcfb94501627ff5b3ea302370995e9a2c9cde6b3317375d7e7b66\nStatus: Image is up to date for gcr.io/gapic-images/gapic-generator-typescript:latest\n2020-06-05 04:34:42,134 synthtool [DEBUG] > Generating code for: google/monitoring/dashboard/v1.\nDEBUG:synthtool:Generating code for: google/monitoring/dashboard/v1.\n2020-06-05 04:34:42,970 synthtool [DEBUG] > Wrote metadata to synth.metadata.\nDEBUG:synthtool:Wrote metadata to synth.metadata.\nTraceback (most recent call last):\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>\n main()\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__\n return self.main(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in 
main\n rv = self.invoke(ctx)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke\n return callback(*args, **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main\n spec.loader.exec_module(synth_module) # type: ignore\n File "<frozen importlib._bootstrap_external>", line 678, in exec_module\n File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed\n File "/home/kbuilder/.cache/synthtool/nodejs-monitoring-dashboards/synth.py", line 36, in <module>\n version=version)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 66, in typescript_library\n return self._generate_code(service, version, "typescript", **kwargs)\n File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_microgenerator.py", line 195, in _generate_code\n f"Code generation seemed to succeed, but {output_dir} is empty."\nRuntimeError: Code generation seemed to succeed, but /tmpfs/tmp/tmpuypjgv0_ is empty.\n2020-06-05 04:34:43,013 autosynth [ERROR] > Synthesis failed\n2020-06-05 04:34:43,013 autosynth [DEBUG] > Running: git reset --hard HEAD\nHEAD is now at ebadd7c build: update protos.js (#83)\n2020-06-05 04:34:43,019 autosynth [DEBUG] > Running: git checkout autosynth\nSwitched to branch \'autosynth\'\n2020-06-05 04:34:43,023 autosynth [DEBUG] > Running: git clean -fdx\nRemoving __pycache__/\nTraceback (most recent call last):\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main\n "__main__", mod_spec)\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>\n main()\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main\n return _inner_main(temp_dir)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 595, in _inner_main\n commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 371, in synthesize_loop\n synthesize_inner_loop(toolbox, synthesizer)\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 381, in synthesize_inner_loop\n synthesizer, len(toolbox.versions) - 1\n File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 266, in synthesize_version_in_new_branch\n synthesizer.synthesize(synth_log_path, self.environ)\n File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 119, in synthesize\n synth_proc.check_returncode() # Raise an exception.\n File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode\n self.stderr)\nsubprocess.CalledProcessError: Command \'[\'/tmpfs/src/github/synthtool/env/bin/python3\', \'-m\', \'synthtool\', \'--metadata\', \'synth.metadata\', \'synth.py\', \'--\']\' returned non-zero exit status 1.\n'
```
Google internal developers can see the full log [here](http://sponge/a84fe804-6b84-41a5-b1f2-e6d681e30fa2).
| non_process | synthesis failed for nodejs monitoring dashboards hello autosynth couldn t regenerate nodejs monitoring dashboards broken heart here s the output from running synth py b googleapis synthtool using precloned repo home kbuilder cache synthtool googleapis ndebug synthtool using precloned repo home kbuilder cache synthtool googleapis synthtool pulling docker image gapic generator typescript latest ndebug synthtool pulling docker image gapic generator typescript latest nlatest pulling from gapic images gapic generator typescript ndigest nstatus image is up to date for gcr io gapic images gapic generator typescript latest synthtool generating code for google monitoring dashboard ndebug synthtool generating code for google monitoring dashboard synthtool wrote metadata to synth metadata ndebug synthtool wrote metadata to synth metadata ntraceback most recent call last n file home kbuilder pyenv versions lib runpy py line in run module as main n main mod spec n file home kbuilder pyenv versions lib runpy py line in run code n exec code run globals n file tmpfs src github synthtool synthtool main py line in n main n file tmpfs src github synthtool env lib site packages click core py line in call n return self main args kwargs n file tmpfs src github synthtool env lib site packages click core py line in main n rv self invoke ctx n file tmpfs src github synthtool env lib site packages click core py line in invoke n return ctx invoke self callback ctx params n file tmpfs src github synthtool env lib site packages click core py line in invoke n return callback args kwargs n file tmpfs src github synthtool synthtool main py line in main n spec loader exec module synth module type ignore n file line in exec module n file line in call with frames removed n file home kbuilder cache synthtool nodejs monitoring dashboards synth py line in n version version n file tmpfs src github synthtool synthtool gcp gapic microgenerator py line in typescript library n return self generate code service version typescript kwargs n file tmpfs src github synthtool synthtool gcp gapic microgenerator py line in generate code n f code generation seemed to succeed but output dir is empty nruntimeerror code generation seemed to succeed but tmpfs tmp is empty autosynth synthesis failed autosynth running git reset hard head nhead is now at build update protos js autosynth running git checkout autosynth self nswitched to branch autosynth self autosynth command returned non zero exit status autosynth running git checkout nnote checking out n nyou are in detached head state you can look around make experimental nchanges and commit them and you can discard any commits you make in this nstate without impacting any branches by performing another checkout n nif you want to create a new branch to retain commits you create you may ndo so now or later by using b with the checkout command again example n n git checkout b n nhead is now at build update protos js autosynth running git checkout nprevious head position was build do not fail builds on codecov errors nhead is now at docs improve readme autosynth running git checkout nprevious head position was add kokoro specific bazelrc file with arguments specific only for kokoro environments this is to fix autosynth builds when it tries building older commits nhead is now at docs cleaned docs for the agents service and resource autosynth running git branch f autosynth autosynth running git checkout autosynth nswitched to branch autosynth autosynth running synthtool autosynth 
autosynth running tmpfs src github synthtool env bin m synthtool metadata synth metadata synth py synthtool executing home kbuilder cache synthtool nodejs monitoring dashboards synth py non branch autosynth nnothing to commit working tree clean synthtool ensuring dependencies ndebug synthtool ensuring dependencies synthtool cloning googleapis ndebug synthtool cloning googleapis synthtool using precloned repo home kbuilder cache synthtool googleapis ndebug synthtool using precloned repo home kbuilder cache synthtool googleapis synthtool pulling docker image gapic generator typescript latest ndebug synthtool pulling docker image gapic generator typescript latest nlatest pulling from gapic images gapic generator typescript ndigest nstatus image is up to date for gcr io gapic images gapic generator typescript latest synthtool generating code for google monitoring dashboard ndebug synthtool generating code for google monitoring dashboard synthtool wrote metadata to synth metadata ndebug synthtool wrote metadata to synth metadata ntraceback most recent call last n file home kbuilder pyenv versions lib runpy py line in run module as main n main mod spec n file home kbuilder pyenv versions lib runpy py line in run code n exec code run globals n file tmpfs src github synthtool synthtool main py line in n main n file tmpfs src github synthtool env lib site packages click core py line in call n return self main args kwargs n file tmpfs src github synthtool env lib site packages click core py line in main n rv self invoke ctx n file tmpfs src github synthtool env lib site packages click core py line in invoke n return ctx invoke self callback ctx params n file tmpfs src github synthtool env lib site packages click core py line in invoke n return callback args kwargs n file tmpfs src github synthtool synthtool main py line in main n spec loader exec module synth module type ignore n file line in exec module n file line in call with frames removed n file home kbuilder cache synthtool nodejs monitoring dashboards synth py line in n version version n file tmpfs src github synthtool synthtool gcp gapic microgenerator py line in typescript library n return self generate code service version typescript kwargs n file tmpfs src github synthtool synthtool gcp gapic microgenerator py line in generate code n f code generation seemed to succeed but output dir is empty nruntimeerror code generation seemed to succeed but tmpfs tmp is empty autosynth synthesis failed autosynth running git reset hard head nhead is now at build update protos js autosynth running git checkout autosynth nswitched to branch autosynth autosynth running git clean fdx nremoving pycache ntraceback most recent call last n file home kbuilder pyenv versions lib runpy py line in run module as main n main mod spec n file home kbuilder pyenv versions lib runpy py line in run code n exec code run globals n file tmpfs src github synthtool autosynth synth py line in n main n file tmpfs src github synthtool autosynth synth py line in main n return inner main temp dir n file tmpfs src github synthtool autosynth synth py line in inner main n commit count synthesize loop x multiple prs change pusher synthesizer n file tmpfs src github synthtool autosynth synth py line in synthesize loop n synthesize inner loop toolbox synthesizer n file tmpfs src github synthtool autosynth synth py line in synthesize inner loop n synthesizer len toolbox versions n file tmpfs src github synthtool autosynth synth py line in synthesize version in new branch n synthesizer 
synthesize synth log path self environ n file tmpfs src github synthtool autosynth synthesizer py line in synthesize n synth proc check returncode raise an exception n file home kbuilder pyenv versions lib subprocess py line in check returncode n self stderr nsubprocess calledprocesserror command returned non zero exit status n google internal developers can see the full log | 0 |
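The root failure in the autosynth log above is synthtool's post-generation check: the generator process exits successfully but the staging directory contains no files. A minimal Python sketch of that guard, with a hypothetical function name; the real check lives inside synthtool's TypeScript microgenerator wrapper and differs in detail:

```python
import os

def check_generated_output(output_dir: str) -> None:
    """Fail loudly when a generator reports success but writes nothing."""
    # Mirrors the RuntimeError text seen in the log above; the real
    # synthtool code and its directory layout are not reproduced here.
    if not os.path.isdir(output_dir) or not os.listdir(output_dir):
        raise RuntimeError(
            f"code generation seemed to succeed but {output_dir} is empty"
        )
```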
84 | 2,533,338,628 | IssuesEvent | 2015-01-23 22:33:16 | GsDevKit/Seaside31 | https://api.github.com/repos/GsDevKit/Seaside31 | closed | WAGemStoneRunSeasideGems should be able to handle multiple named servers | in process | Perhaps it already can ... in which case the webServer script should be updated to provide info about the registered servers and their status ... also let's add multi-port registration for fastcgi support ... | 1.0 | WAGemStoneRunSeasideGems should be able to handle multiple named servers - Perhaps it already can ... in which case the webServer script should be updated to provide info about the registered servers and their status ... also let's add multi-port registration for fastcgi support ... | process | wagemstonerunseasidegems should be able to handle multiple named servers perhaps it already can in which case the webserver script should be updated to provide info about the registered servers and their status also let s add multi port registration for fastcgi support | 1 |
16,276 | 20,884,553,889 | IssuesEvent | 2022-03-23 02:34:49 | lynnandtonic/nestflix.fun | https://api.github.com/repos/lynnandtonic/nestflix.fun | closed | Add A Carrot | suggested title in process | Please add as much of the following info as you can:
Title: A Carrot
Type (film/tv show): Film
Film or show in which it appears: South Park (https://www.imdb.com/title/tt0705968/ Season 06 Episode 15)
Is the parent film/show streaming anywhere? Amazon Prime in the UK.
About when in the parent film/show does it appear? 09 minutes 12 seconds
Actual footage of the film/show can be seen (yes/no)? Yes. https://www.youtube.com/watch?v=6uK2WMc9L8c
| 1.0 | Add A Carrot - Please add as much of the following info as you can:
Title: A Carrot
Type (film/tv show): Film
Film or show in which it appears: South Park (https://www.imdb.com/title/tt0705968/ Season 06 Episode 15)
Is the parent film/show streaming anywhere? Amazon Prime in the UK.
About when in the parent film/show does it appear? 09 minutes 12 seconds
Actual footage of the film/show can be seen (yes/no)? Yes. https://www.youtube.com/watch?v=6uK2WMc9L8c
| process | add a carrot please add as much of the following info as you can title a carrot type film tv show film film or show in which it appears south park season episode is the parent film show streaming anywhere amazon prime in the uk about when in the parent film show does it appear minutes seconds actual footage of the film show can be seen yes no yes | 1 |
156,079 | 5,964,051,687 | IssuesEvent | 2017-05-30 07:45:45 | karmaradio/karma | https://api.github.com/repos/karmaradio/karma | opened | All CAPS for specific words in labels | enhancement priority-4 | Consistent sentence-case is being applied across the app.
Some exceptions need to be handled, e.g. acronyms like IBAN, SWIFT, and VAT:
Presume we can just manually override the relevant labels?

| 1.0 | All CAPS for specific words in labels - Consistent sentence-case is being applied across the app.
Some exceptions need to be handled, e.g. acronyms like IBAN, SWIFT, and VAT:
Presume we can just manually override the relevant labels?

| non_process | all caps for specific words in labels consistent sentence case is being applied across app some exceptions need to be handled eg acronyms like iban swift and vat presume we can just manually relevant labels | 0 |
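The record above asks for sentence-casing with exceptions for specific acronyms. A minimal sketch of that rule, assuming a whole-word allowlist; the function name is illustrative and the allowlist contents come only from the acronyms named in the issue, not from the app's actual code:

```python
# Sentence-case a label while keeping allowlisted acronyms fully uppercase.
ACRONYMS = {"IBAN", "SWIFT", "VAT"}

def sentence_case(label: str) -> str:
    out = []
    for i, word in enumerate(label.split()):
        if word.upper() in ACRONYMS:
            out.append(word.upper())       # keep acronyms fully capitalised
        elif i == 0:
            out.append(word.capitalize())  # capitalise only the first word
        else:
            out.append(word.lower())
    return " ".join(out)

print(sentence_case("iban NUMBER"))  # -> "IBAN number"
print(sentence_case("vat Rate"))     # -> "VAT rate"
```

Whole-word matching keeps the rule predictable; substring matching would wrongly uppercase words that merely contain an acronym.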
14,383 | 10,788,714,824 | IssuesEvent | 2019-11-05 10:20:11 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | scripts/ci/dependency_check.bat doesn't fail CI, even when it detects errors. | Bug Infrastructure | If you check #9750, you'll see that CI fails dependency checks for Darwin and Linux, however the Windows check passes. Digging in, it's clear that the Windows script even detects the error here: [Build 60891, Verify src/azure-cli/requirements.*.Windows.txt, Line 342](https://dev.azure.com/azure-sdk/public/_build/results?buildId=60891&view=logs&j=796343fd-f04d-59ce-7e73-c4eab21e4249&t=3e5b4737-4f47-5027-dc25-be482d8c1eaf&l=342). However, the appropriate exit code isn't getting set. | 1.0 | scripts/ci/dependency_check.bat doesn't fail CI, even when it detects errors. - If you check #9750, you'll see that CI fails dependency checks for Darwin and Linux, however the Windows check passes. Digging in, it's clear that the Windows script even detects the error here: [Build 60891, Verify src/azure-cli/requirements.*.Windows.txt, Line 342](https://dev.azure.com/azure-sdk/public/_build/results?buildId=60891&view=logs&j=796343fd-f04d-59ce-7e73-c4eab21e4249&t=3e5b4737-4f47-5027-dc25-be482d8c1eaf&l=342). However, the appropriate exit code isn't getting set. | non_process | scripts ci dependency check bat doesn t fail ci even when it detects errors if you check you ll see that ci fails dependency checks for darwin and linux however the windows check passes digging in it s clear that the windows script even detects the error here however the appropriate exit code isn t getting set | 0 |
135,888 | 19,680,268,446 | IssuesEvent | 2022-01-11 16:07:59 | emory-libraries/blacklight-catalog | https://api.github.com/repos/emory-libraries/blacklight-catalog | closed | Refine generic images for thumbnails to be implemented post-production | UI Design | As discussed during one of our previous meetings, I would like you to complete the work to refine the generic images for the various resources types in the event thumbnail artwork is not available. | 1.0 | Refine generic images for thumbnails to be implemented post-production - As discussed during one of our previous meetings, I would like you to complete the work to refine the generic images for the various resources types in the event thumbnail artwork is not available. | non_process | refine generic images for thumbnails to be implemented post production as discussed during one of our previous meetings i would like you to complete the work to refine the generic images for the various resources types in the event thumbnail artwork is not available | 0 |
19,815 | 26,203,043,475 | IssuesEvent | 2023-01-03 19:27:08 | esmero/ami | https://api.github.com/repos/esmero/ami | opened | Add alternate mimetype mappings | File processing | ## What's needed?
Add an alternate mimetype mapping to the standard (correct) extension for incorrectly classified source files (i.e., currently .bin -> 'application/octet-stream' for image files with an older/nonstandard 'image/jpeg' mimetype in the original EXIF info).
Related to #108. As reviewed and discussed with @DiegoPino. 🤓 | 1.0 | Add alternate mimetype mappings - ## What's needed?
Add an alternate mimetype mapping to the standard (correct) extension for incorrectly classified source files (i.e., currently .bin -> 'application/octet-stream' for image files with an older/nonstandard 'image/jpeg' mimetype in the original EXIF info).
Related to #108. As reviewed and discussed with @DiegoPino. 🤓 | process | add alternate mimetype mappings what s needed add alternate mimetype mapping for standard correct extension for incorrectly classified source files ie currently bin application octet stream for image files with older nonstandard image jpeg mimetype in original exif info related to as reviewed and discussed with diegopino 🤓 | 1 |
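A sketch of what such an alternate mapping could look like, assuming the corrected mimetype has been recovered from the file's EXIF metadata; every name here is hypothetical and this is not AMI's actual implementation:

```python
from typing import Optional

# Hypothetical mapping; AMI's real table and lookup logic will differ.
ALTERNATE_EXTENSIONS = {
    "image/jpeg": ".jpg",
    "image/tiff": ".tif",
    "image/png": ".png",
}

def corrected_extension(declared: str, exif_mimetype: str) -> Optional[str]:
    """Prefer the extension implied by the original EXIF mimetype when
    the declared type is just the generic binary fallback."""
    if declared == "application/octet-stream":
        return ALTERNATE_EXTENSIONS.get(exif_mimetype)
    return None

print(corrected_extension("application/octet-stream", "image/jpeg"))  # .jpg
```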
307,386 | 26,528,055,585 | IssuesEvent | 2023-01-19 10:17:16 | apache/shardingsphere | https://api.github.com/repos/apache/shardingsphere | closed | Refactor the distribution for metrics E2E | in: test feature: agent | now, the proxy distribution contains the agent by default. it's useless for the assembly process of metrics E2E.
this should be refactored as follows:
1. delete the assembly file for metrics E2E
2. add the proxy distribution into the metrics docker image
3. copy the metrics config into the docker image
4. test with the docker image | 1.0 | Refactor the distribution for metrics E2E - now, the proxy distribution contains the agent by default. it's useless for the assembly process of metrics E2E.
this should be refactored as follows:
1. delete the assembly file for metrics E2E
2. add the proxy distribution into the metrics docker image
3. copy the metrics config into the docker image
4. test with the docker image | non_process | refactor the distribution for metrics now the proxy distribution contains the agent in default it s useless for the assembly process of matrics this should be refactored as followings delete the assembly file for matrics add proxy distribution into matrics docker image copy the matrics config into docker image test by docker image | 0
18,931 | 24,886,911,806 | IssuesEvent | 2022-10-28 08:34:53 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Error: Error in migration engine. Reason: [migration-engine\cli\src/main.rs:102:23] Error opening datamodel file in `C:\Users\shubham.halder\Documents\blogr-nextjs-prisma\prisma\schema.prisma`: Access is denied. (os error 5) | kind/bug process/candidate topic: windows tech/engines/migration engine topic: error reporting team/schema |
Command: `prisma db push`
Version: `4.5.0`
Binary Version: `0362da9eebca54d94c8ef5edd3b2e90af99ba452`
Report: https://prisma-errors.netlify.app/report/14389
OS: `x64 win32 10.0.19044`
| 1.0 | Error: Error in migration engine. Reason: [migration-engine\cli\src/main.rs:102:23] Error opening datamodel file in `C:\Users\shubham.halder\Documents\blogr-nextjs-prisma\prisma\schema.prisma`: Access is denied. (os error 5) -
Command: `prisma db push`
Version: `4.5.0`
Binary Version: `0362da9eebca54d94c8ef5edd3b2e90af99ba452`
Report: https://prisma-errors.netlify.app/report/14389
OS: `x64 win32 10.0.19044`
| process | error error in migration engine reason error opening datamodel file in c users shubham halder documents blogr nextjs prisma prisma schema prisma access is denied os error command prisma db push version binary version report os | 1 |
9,272 | 12,301,542,882 | IssuesEvent | 2020-05-11 15:33:43 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | opened | r.cost in modeler does not accept output temporary files | Bug Processing | **Describe the bug**
The attached model fails to start running if an output is set as a temporary file
[test_r_cost.zip](https://github.com/qgis/QGIS/files/4610546/test_r_cost.zip)
**How to Reproduce**
1. create or use a fake raster as input
2. create or use a fake input point layer
3. run model with the above input as in the following run
```
{
'R' : <raster of point1>,
'StartPoints' : <vector of point2>,
'VERBOSE_LOG' : True,
'grass7:r.cost_1:allocation_map' : 'TEMPORARY_OUTPUT',
'grass7:r.cost_1:cumulative_cost' : 'TEMPORARY_OUTPUT',
'grass7:r.cost_1:movement_directions' : 'TEMPORARY_OUTPUT'
}
```
4. See error --> “.cost_1_allocation_map.tif” files are not supported as outputs for this algorithm
If the algorithm is run in Processing on its own, it works correctly. The problem is using it in the modeler, because the output path is set up in an incorrect way
**QGIS and OS versions**
QGIS version | 3.13.0-Master | QGIS code revision | d3a7a65c90
-- | -- | -- | --
Compiled against Qt | 5.9.5 | Running against Qt | 5.9.5
Compiled against GDAL/OGR | 2.2.3 | Running against GDAL/OGR | 2.2.3
Compiled against GEOS | 3.7.1-CAPI-1.11.1 | Running against GEOS | 3.7.1-CAPI-1.11.1 27a5e771
Compiled against SQLite | 3.22.0 | Running against SQLite | 3.22.0
PostgreSQL Client Version | 12.2 (Ubuntu 12.2-2.pgdg18.04+1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.3 | QScintilla2 Version | 2.10.2
PROJ.4 Version | 493
OS Version | Ubuntu 18.04.4 LTS | This copy of QGIS writes debugging output.
Active python plugins | plugin_reloader; qgis_resource_sharing; IPyConsole; remotedebug; Qgis2threejs; db_manager; processing; MetaSearch
btw, the error also happens in QGIS 3.10 LTR
| 1.0 | r.cost in modeler does not accept output temporary files - **Describe the bug**
The attached model fails to start running if an output is set as a temporary file
[test_r_cost.zip](https://github.com/qgis/QGIS/files/4610546/test_r_cost.zip)
**How to Reproduce**
1. create or use a fake raster as input
2. create or use a fake input point layer
3. run model with the above input as in the following run
```
{
'R' : <raster of point1>,
'StartPoints' : <vector of point2>,
'VERBOSE_LOG' : True,
'grass7:r.cost_1:allocation_map' : 'TEMPORARY_OUTPUT',
'grass7:r.cost_1:cumulative_cost' : 'TEMPORARY_OUTPUT',
'grass7:r.cost_1:movement_directions' : 'TEMPORARY_OUTPUT'
}
```
4. See error --> “.cost_1_allocation_map.tif” files are not supported as outputs for this algorithm
If the algorithm is run in Processing on its own, it works correctly. The problem is using it in the modeler, because the output path is set up in an incorrect way
**QGIS and OS versions**
QGIS version | 3.13.0-Master | QGIS code revision | d3a7a65c90
-- | -- | -- | --
Compiled against Qt | 5.9.5 | Running against Qt | 5.9.5
Compiled against GDAL/OGR | 2.2.3 | Running against GDAL/OGR | 2.2.3
Compiled against GEOS | 3.7.1-CAPI-1.11.1 | Running against GEOS | 3.7.1-CAPI-1.11.1 27a5e771
Compiled against SQLite | 3.22.0 | Running against SQLite | 3.22.0
PostgreSQL Client Version | 12.2 (Ubuntu 12.2-2.pgdg18.04+1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.3 | QScintilla2 Version | 2.10.2
PROJ.4 Version | 493
OS Version | Ubuntu 18.04.4 LTS | This copy of QGIS writes debugging output.
Active python plugins | plugin_reloader; qgis_resource_sharing; IPyConsole; remotedebug; Qgis2threejs; db_manager; processing; MetaSearch
btw, the error also happens in QGIS 3.10 LTR
| process | r cost in modeler does not accept output temporary files describe the bug attached model fail to start running if output is set as temporary file how to reproduce create or use a fake raster as input create or use a fake input point layer run model with the above input as in the following run r startpoints verbose log true r cost allocation map temporary output r cost cumulative cost temporary output r cost movement directions temporary output see error “ cost allocation map tif” files are not supported as outputs for this algorithm if algoritm is run in processing as algoritm alone it works correctly the problem is using it in modeler because the output path is setup in uncorrect way qgis and os versions qgis version master qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version ubuntu spatialite version qwt version version proj version os version ubuntu lts this copy of qgis writes debugging output active python plugins plugin reloader qgis resource sharing ipyconsole remotedebug db manager processing metasearch btw error happen also in qgis ltr | 1 |
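For reference, the failing run above can be reproduced from the QGIS Python console roughly as follows. `processing.run()` is real PyQGIS API, but the model id and the two input paths are placeholders for the attached model and its fake layers:

```python
# Run from the QGIS Python console; 'model:test_r_cost' and the input
# paths are placeholders for the attached model and the fake layers.
import processing

params = {
    'R': '/path/to/fake_raster.tif',
    'StartPoints': '/path/to/fake_points.gpkg',
    'VERBOSE_LOG': True,
    'grass7:r.cost_1:allocation_map': 'TEMPORARY_OUTPUT',
    'grass7:r.cost_1:cumulative_cost': 'TEMPORARY_OUTPUT',
    'grass7:r.cost_1:movement_directions': 'TEMPORARY_OUTPUT',
}

# Fails with '".cost_1_allocation_map.tif" files are not supported as
# outputs for this algorithm', apparently because the modeler expands
# TEMPORARY_OUTPUT into a file path the GRASS provider then rejects.
result = processing.run('model:test_r_cost', params)
```

Running `grass7:r.cost` directly with the same TEMPORARY_OUTPUT values succeeds, which localises the bug to how the modeler resolves child-algorithm output paths.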
20,790 | 27,533,118,411 | IssuesEvent | 2023-03-07 00:11:21 | aolabNeuro/analyze | https://api.github.com/repos/aolabNeuro/analyze | opened | precondition.downsample() | enhancement preprocessing | The current implementation of downsample uses averaging and upsamples the data to the LCM of the two sampling rates to prevent data loss. This means that every time cursor data is downsampled from 25 kHz to 120 Hz, it is first upsampled to 75 kHz. This is not efficient for memory handling or computation.
Todo: optimize the downsampling function for our use cases in the future. | 1.0 | precondition.downsample() - The current implementation of downsample uses averaging and upsamples the data to the LCM of the two sampling rates to prevent data loss. This means that every time cursor data is downsampled from 25 kHz to 120 Hz, it is first upsampled to 75 kHz. This is not efficient for memory handling or computation.
Todo: optimize the downsampling function for our use cases in the future. | process | precondition downsample current implementation of downsample uses averaging and upsamples data to lcm to prevent data loss this means every time cursor data is downsampled from to it is upsampled first to not efficient for memory handling and computation todo optimize downsampling function for our use cases in future | 1
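One possible optimization, sketched below, is polyphase resampling: the rational rate change 120/25000 = 3/625 is applied directly, so the 75 kHz LCM intermediate never has to be materialised. This uses SciPy's `resample_poly` and is only an illustration; the library's real `precondition.downsample()` may need to keep its current averaging semantics:

```python
# Polyphase alternative to upsample-to-LCM-then-average.
import numpy as np
from math import gcd
from scipy.signal import resample_poly

fs_in, fs_out = 25000, 120
g = gcd(fs_out, fs_in)              # 40
up, down = fs_out // g, fs_in // g  # 3, 625

cursor = np.random.randn(fs_in * 10)        # 10 s of fake 25 kHz cursor data
cursor_120hz = resample_poly(cursor, up, down)
print(cursor_120hz.shape)                   # (1200,) = 10 s at 120 Hz
```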
5,599 | 8,460,074,984 | IssuesEvent | 2018-10-22 17:45:49 | aspnet/IISIntegration | https://api.github.com/repos/aspnet/IISIntegration | closed | ANCM V2 - net stop was /y Issue | in-process | 'net stop was /y' causes 'Failed to gracefully shutdown application' warnings being logged to the Application event log.
Is this a known issue? | 1.0 | ANCM V2 - net stop was /y Issue - 'net stop was /y' causes 'Failed to gracefully shutdown application' warnings being logged to the Application event log.
Is this a known issue? | process | ancm net stop was y issue net stop was y causes failed to gracefully shutdown application warnings being logged to the application event log is this a known issue | 1 |
13,475 | 15,983,888,932 | IssuesEvent | 2021-04-18 11:02:20 | brucemiller/LaTeXML | https://api.github.com/repos/brucemiller/LaTeXML | closed | Theorem title in Jats-XML | bug postprocessing schema | When I convert a theorem to JATS-XML, the title does not appear in the XML.
For a theorem environment defined like
**test.tex**
```tex
\documentclass{article}
\newtheorem{theorem}{Theorem}[section]
\begin{document}
\begin{theorem}
Let f be a function.
\end{theorem}
\end{document}
```
I get with `latexml test.tex`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?latexml searchpaths="/home/robert/Work/ems/tex-json"?>
<?latexml class="article"?>
<?latexml RelaxNGSchema="LaTeXML"?>
<document xmlns="http://dlmf.nist.gov/LaTeXML">
<resource src="LaTeXML.css" type="text/css"/>
<resource src="ltx-article.css" type="text/css"/>
<theorem class="ltx_theorem_theorem" inlist="thm theorem:theorem" xml:id="S0.Thmtheorem1">
<tags>
<tag>Theorem 0.1</tag>
<tag role="refnum">0.1</tag>
<tag role="typerefnum">Theorem 0.1</tag>
</tags>
<title class="ltx_runin"><tag><text font="bold">Theorem 0.1</text></tag></title>
<para xml:id="S0.Thmtheorem1.p1">
<p><text font="italic">Let f be a function.</text></p>
</para>
</theorem>
</document>
```
When I convert to JATS-XML with `latexmlc test.tex --dest=test.jats.xml --pmml --stylesheet=LaTeXML-jats.xsl`:
```xml
<?xml version="1.0"?>
<article>
<front>
<article-meta>
<contrib-group/>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the front matter.
-->
</article-meta>
</front>
<body>
<statement id="S0.Thmtheorem1">
<title/>
<p id="S0.Thmtheorem1.p1">
<italic>Let f be a function.</italic>
</p>
</statement>
</body>
<back>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the back matter
-->
<app-group/>
</back>
</article>
```
It seems the conversion should happen here, but it is not picking up the title:
```xml
<xsl:template match="ltx:theorem/ltx:title">
<title>
<xsl:apply-templates select="@*|node()"/>
</title>
</xsl:template>
``` | 1.0 | Theorem title in Jats-XML - When I convert a theorem to JATS-XML, the title does not appear in the XML.
For a theorem environment defined like
**test.tex**
```tex
\documentclass{article}
\newtheorem{theorem}{Theorem}[section]
\begin{document}
\begin{theorem}
Let f be a function.
\end{theorem}
\end{document}
```
I get with `latexml test.tex`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?latexml searchpaths="/home/robert/Work/ems/tex-json"?>
<?latexml class="article"?>
<?latexml RelaxNGSchema="LaTeXML"?>
<document xmlns="http://dlmf.nist.gov/LaTeXML">
<resource src="LaTeXML.css" type="text/css"/>
<resource src="ltx-article.css" type="text/css"/>
<theorem class="ltx_theorem_theorem" inlist="thm theorem:theorem" xml:id="S0.Thmtheorem1">
<tags>
<tag>Theorem 0.1</tag>
<tag role="refnum">0.1</tag>
<tag role="typerefnum">Theorem 0.1</tag>
</tags>
<title class="ltx_runin"><tag><text font="bold">Theorem 0.1</text></tag></title>
<para xml:id="S0.Thmtheorem1.p1">
<p><text font="italic">Let f be a function.</text></p>
</para>
</theorem>
</document>
```
When I convert to JATS-XML with `latexmlc test.tex --dest=test.jats.xml --pmml --stylesheet=LaTeXML-jats.xsl`:
```xml
<?xml version="1.0"?>
<article>
<front>
<article-meta>
<contrib-group/>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the front matter.
-->
</article-meta>
</front>
<body>
<statement id="S0.Thmtheorem1">
<title/>
<p id="S0.Thmtheorem1.p1">
<italic>Let f be a function.</italic>
</p>
</statement>
</body>
<back>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the back matter
-->
<app-group/>
</back>
</article>
```
It seems the conversion should happen here, but it is not picking up the title:
```xml
<xsl:template match="ltx:theorem/ltx:title">
<title>
<xsl:apply-templates select="@*|node()"/>
</title>
</xsl:template>
``` | process | theorem title in jats xml when i convert a theorem to jats xml the title is not appearing in the xml for a theorem environment defined like test tex tex documentclass article newtheorem theorem theorem begin document begin theorem let f be a function end theorem end document i get with latexml test tex xml document xmlns theorem theorem theorem let f be a function when i convert to jats xml with latexmlc test tex dest test jats xml pmml stylesheet latexml jats xsl xml the element theorem with attributes class ltx theorem theoreminlist thm theorem theoremxml id is currently not supported for the front matter let f be a function the element theorem with attributes class ltx theorem theoreminlist thm theorem theoremxml id is currently not supported for the back matter it seems the conversion should happen here but is not picking up the title xml | 1 |
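To narrow down where the title content is lost, the transform can be replayed outside latexmlc. A small lxml sketch, assuming the two artefacts shown above are on disk as `test.xml` (the LaTeXML output) and a local copy of `LaTeXML-jats.xsl`:

```python
# Replay the JATS stylesheet over the LaTeXML output and inspect titles.
from lxml import etree

doc = etree.parse("test.xml")                       # LaTeXML output
xslt = etree.XSLT(etree.parse("LaTeXML-jats.xsl"))  # the JATS stylesheet
jats = xslt(doc)

for title in jats.iter("title"):
    # Empty output here confirms the template ran but produced no text.
    print(etree.tostring(title, pretty_print=True).decode())
```

An empty `<title/>` in the result while `ltx:title` still has its `ltx:tag` child in the input suggests the tag content is being suppressed by another template rather than by the one quoted above.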
7,063 | 5,831,982,290 | IssuesEvent | 2017-05-08 20:41:31 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Update use of HashAlgorithm to reduce allocations | Area-Analyzers Bug Tenet-Performance | **Version Used**: 15.1
PerfView is showing substantial overhead for the creation of a temporary array in `HashAlgorithm.ComputeHash(Stream)`, originating from Microsoft.CodeAnalysis and Microsoft.CodeAnalysis.Workspaces. On my machine, this accounted for more than 3 seconds of time while Roslyn.sln was opening. Code using this method should be updated to use an alternative that allows the use of a caller-specified buffer from a pool. | True | Update use of HashAlgorithm to reduce allocations - **Version Used**: 15.1
PerfView is showing substantial overhead for the creation of a temporary array in `HashAlgorithm.ComputeHash(Stream)`, originating from Microsoft.CodeAnalysis and Microsoft.CodeAnalysis.Workspaces. On my machine, this accounted for more than 3 seconds of time while Roslyn.sln was opening. Code using this method should be updated to use an alternative that allows the use of a caller-specified buffer from a pool. | non_process | update use of hashalgorithm to reduce allocations version used perfview is showing substantial overhead for the creation of a temporary array in hashalgorithm computehash stream originating from microsoft codeanalysis and microsoft codeanalysis workspaces on my machine this accounted for more than seconds of time while roslyn sln was opening code using this method should be updated to use an alternative that allows the use of a caller specified buffer from a pool | 0 |
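The C# fix itself is out of scope for this record, but the buffer-reuse idea is easy to illustrate. Below is a Python sketch of hashing a stream through one caller-owned buffer, so no per-call temporary array is allocated; the 64 KiB size and the function name are arbitrary choices, not Roslyn's:

```python
# Hash a file through a single reusable, caller-owned buffer.
import hashlib

def hash_file(path: str, buf: bytearray) -> str:
    h = hashlib.sha256()
    mv = memoryview(buf)
    with open(path, "rb", buffering=0) as f:
        while True:
            n = f.readinto(mv)   # fill the pooled buffer in place
            if not n:
                break
            h.update(mv[:n])     # no per-chunk allocation
    return h.hexdigest()

shared_buf = bytearray(64 * 1024)   # a one-buffer "pool"
# e.g. digest = hash_file("some_input.bin", shared_buf)
```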
14,398 | 17,410,358,685 | IssuesEvent | 2021-08-03 11:31:12 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Please consider not throwing an exception when Process.Kill() is called for a process that exited previously | area-System.Diagnostics.Process | This is inspired by an earlier issue https://github.com/dotnet/runtime/issues/16848 If I want to wait for a running process to exit within some short period of time, and terminate it if it fails to exit on its own, then I'm likely to write code like this:
if(!process.WaitForExit(5000))
{
process.Kill();
}
process.Dispose();
however, if the process exits right after `WaitForExit()` returns "it's still running", then the call to `Kill()` will yield an `InvalidOperationException`. This is the documented behavior. It forces a lot of extra code to deal with this race condition.
Is the current behavior really needed? The purpose of `Kill()` is to get rid of the process. If the process exited - okay, less work, just do nothing. Who would want the exception instead of just doing nothing?
Yes, that would change the contract but it looks like a change for the better.
Could you please consider changing this rather inconvenient behavior?
| 1.0 | Please consider not throwing an exception when Process.Kill() is called for a process that exited previously - This is inspired by an earlier issue https://github.com/dotnet/runtime/issues/16848 If I want to wait for a running process to exit within some short period of time, and terminate it if it fails to exit on its own, then I'm likely to write code like this:
if(!process.WaitForExit(5000))
{
process.Kill();
}
process.Dispose();
however, if the process exits right after `WaitForExit()` returns "it's still running", then the call to `Kill()` will yield an `InvalidOperationException`. This is the documented behavior. It forces a lot of extra code to deal with this race condition.
Is the current behavior really needed? The purpose of `Kill()` is to get rid of the process. If the process exited - okay, less work, just do nothing. Who would want the exception instead of just doing nothing?
Yes, that would change the contract but it looks like a change for the better.
Could you please consider changing this rather inconvenient behavior?
| process | please consider not throwing an exception when process kill is called for a process that exited previously this is inspired by an earlier issue if i want to wait for a running process to exit in some short period of time and terminate it if it fails to exits on itself then i m likely to write code like this if process waitforexit process kill process dispose however if the process exits right after waitforexit returns it s still running then call to kill will yield an invalidoperationexception this is the documented behavior this causes a lot of extra code to be necessary to deal with this race condition is the current behavior really needed the purpose of kill is to get rid of the process if the process exited okay less work just do nothing who would want the exception instead of just doing nothing yes that would change the contract but it looks like a change for the better could you please consider changing this rather inconvenient behavior | 1 |
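For comparison, Python's `subprocess` module already behaves the way this record asks .NET to: `Popen.kill()` polls the child first and is effectively a no-op for an already-exited child (modulo a much narrower race), so the wait-then-kill pattern needs no extra guard. A minimal sketch, using the POSIX `sleep` command as a stand-in child process:

```python
import subprocess

proc = subprocess.Popen(["sleep", "10"])   # POSIX-only stand-in child
try:
    proc.wait(timeout=5)
except subprocess.TimeoutExpired:
    proc.kill()   # safe even if the child exits between wait() and here
    proc.wait()   # reap the terminated child
```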
258,662 | 19,568,050,721 | IssuesEvent | 2022-01-04 05:24:30 | hrushikeshrv/mjxgui | https://api.github.com/repos/hrushikeshrv/mjxgui | closed | Add anchors to headings in docs | documentation enhancement good first issue hacktoberfest | We need to add anchor tags to all headings in the documentation, and change the font family to a monospace font for certain headings in the API section. | 1.0 | Add anchors to headings in docs - We need to add anchor tags to all headings in the documentation, and change the font family to a monospace font for certain headings in the API section. | non_process | add anchors to headings in docs we need to add anchor tags to all headings in the documentation and change the font family to a monospace font for certain headings in the api section | 0 |
262,088 | 27,850,888,736 | IssuesEvent | 2023-03-20 18:36:12 | jgeraigery/dynatrace-service-broker | https://api.github.com/repos/jgeraigery/dynatrace-service-broker | opened | json-simple-1.1.1.jar: 1 vulnerabilities (highest severity is: 5.5) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-simple-1.1.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.11/junit-4.11.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/dynatrace-service-broker/commit/075c652078643180fb05751cdbc793df371d6844">075c652078643180fb05751cdbc793df371d6844</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (json-simple version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2020-15250](https://www.mend.io/vulnerability-database/CVE-2020-15250) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | junit-4.11.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-15250</summary>
### Vulnerable Library - <b>junit-4.11.jar</b></p>
<p>JUnit is a regression testing framework written by Erich Gamma and Kent Beck.
It is used by the developer who implements unit tests in Java.</p>
<p>Library home page: <a href="http://junit.org">http://junit.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.11/junit-4.11.jar</p>
<p>
Dependency Hierarchy:
- json-simple-1.1.1.jar (Root Library)
- :x: **junit-4.11.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/dynatrace-service-broker/commit/075c652078643180fb05751cdbc793df371d6844">075c652078643180fb05751cdbc793df371d6844</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix-like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory.
<p>Publish Date: 2020-10-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-15250>CVE-2020-15250</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp">https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp</a></p>
<p>Release Date: 2020-10-12</p>
<p>Fix Resolution: junit:junit:4.13.1</p>
</p>
<p></p>
</details> | True | json-simple-1.1.1.jar: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-simple-1.1.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.11/junit-4.11.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/dynatrace-service-broker/commit/075c652078643180fb05751cdbc793df371d6844">075c652078643180fb05751cdbc793df371d6844</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (json-simple version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2020-15250](https://www.mend.io/vulnerability-database/CVE-2020-15250) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | junit-4.11.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-15250</summary>
### Vulnerable Library - <b>junit-4.11.jar</b></p>
<p>JUnit is a regression testing framework written by Erich Gamma and Kent Beck.
It is used by the developer who implements unit tests in Java.</p>
<p>Library home page: <a href="http://junit.org">http://junit.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.11/junit-4.11.jar</p>
<p>
Dependency Hierarchy:
- json-simple-1.1.1.jar (Root Library)
- :x: **junit-4.11.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/dynatrace-service-broker/commit/075c652078643180fb05751cdbc793df371d6844">075c652078643180fb05751cdbc793df371d6844</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix-like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory.
<p>Publish Date: 2020-10-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-15250>CVE-2020-15250</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp">https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp</a></p>
<p>Release Date: 2020-10-12</p>
<p>Fix Resolution: junit:junit:4.13.1</p>
</p>
<p></p>
</details> | non_process | json simple jar vulnerabilities highest severity is vulnerable library json simple jar path to dependency file pom xml path to vulnerable library home wss scanner repository junit junit junit jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in json simple version remediation available medium junit jar transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library junit jar junit is a regression testing framework written by erich gamma and kent beck it is used by the developer who implements unit tests in java library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository junit junit junit jar dependency hierarchy json simple jar root library x junit jar vulnerable library found in head commit a href found in base branch master vulnerability details in from version and before the test rule temporaryfolder contains a local information disclosure vulnerability on unix like systems the system s temporary directory is shared between all users on that system because of this when files and directories are written into this directory they are by default readable by other users on that same system this vulnerability does not allow other users to overwrite the contents of these directories or files this is purely an information disclosure vulnerability this vulnerability impacts you if the junit tests write sensitive information like api keys or passwords into the temporary folder and the junit tests execute in an environment where the os has other untrusted users because certain jdk file system apis were only added in jdk this this fix is dependent upon the version of the jdk you are using for java and higher users this vulnerability is fixed in for java and lower users no patch is available you must use the workaround below if you are unable to patch or are stuck running on java specifying the java io tmpdir system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability for more information including an example of vulnerable code see the referenced github security advisory publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution junit junit | 0 |
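The workaround in the advisory text above boils down to giving the test process a temporary directory only the executing user can read. The same idea has a direct Python analogue, shown here purely to illustrate the mitigation (it is not the Java fix): `tempfile.mkdtemp()` creates the directory with mode 0o700 on POSIX, so other local users cannot read what tests write there.

```python
import os
import stat
import tempfile

# mkdtemp creates a directory readable, writable, and searchable only
# by the creating user (mode 0o700 on POSIX); the prefix is arbitrary.
private_tmp = tempfile.mkdtemp(prefix="junit-like-")
mode = stat.S_IMODE(os.stat(private_tmp).st_mode)
print(oct(mode))   # 0o700: owner-only access
```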
399,307 | 27,236,161,737 | IssuesEvent | 2023-02-21 16:28:11 | mindsdb/mindsdb | https://api.github.com/repos/mindsdb/mindsdb | closed | [Docs] Add a community tutorial link to the `Using MindsDB via Mongo API -> Machine Learning Examples -> Regression` page | help wanted good first issue documentation first-timers-only | ## Instructions :page_facing_up:
Here are the step-by-step instructions:
1. Go to the `/docs/using-mongo-api/regression.mdx` file.
2. Go to the end of this file and add another item to the list, as follows:
```
- [Tutorial to Predict the Energy Usage using MindsDB and MongoDB](https://dev.to/dohrisalim/tutorial-to-predict-the-energy-usage-using-mindsdb-and-mongodb-g60)
by [Salim Dohri](https://github.com/dohrisalim)
```
3. Save the changes and create a PR.
## Hackathon Issue :loudspeaker:
MindsDB has organized a hackathon to let in more contributors to the in-database ML world!
Each hackathon issue is worth a certain amount of points that will bring you prizes by the end of the MindsDB Hackathon.
Stay tuned for the detailed rules of the MindsDB Hackathon!
## The https://github.com/mindsdb/mindsdb/labels/first-timers-only Label
We are happy to welcome you on board! Please take a look at the rules below for first-time contributors.
1. You can solve only one issue labeled as https://github.com/mindsdb/mindsdb/labels/first-timers-only. After that, please look at other issues labeled as https://github.com/mindsdb/mindsdb/labels/good%20first%20issue, https://github.com/mindsdb/mindsdb/labels/help%20wanted, or https://github.com/mindsdb/mindsdb/labels/integration.
2. After you create your first PR in the MindsDB repository, please sign our CLA to become a MindsDB contributor. You can do that by leaving a comment that contains the following: `I have read the CLA Document and I hereby sign the CLA`
Thank you for contributing to MindsDB! | 1.0 | [Docs] Add a community tutorial link to the `Using MindsDB via Mongo API -> Machine Learning Examples -> Regression` page - ## Instructions :page_facing_up:
Here are the step-by-step instructions:
1. Go to the `/docs/using-mongo-api/regression.mdx` file.
2. Go to the end of this file and add another item to the list, as follows:
```
- [Tutorial to Predict the Energy Usage using MindsDB and MongoDB](https://dev.to/dohrisalim/tutorial-to-predict-the-energy-usage-using-mindsdb-and-mongodb-g60)
by [Salim Dohri](https://github.com/dohrisalim)
```
3. Save the changes and create a PR.
## Hackathon Issue :loudspeaker:
MindsDB has organized a hackathon to let in more contributors to the in-database ML world!
Each hackathon issue is worth a certain amount of points that will bring you prizes by the end of the MindsDB Hackathon.
Stay tuned for the detailed rules of the MindsDB Hackathon!
## The https://github.com/mindsdb/mindsdb/labels/first-timers-only Label
We are happy to welcome you on board! Please take a look at the rules below for first-time contributors.
1. You can solve only one issue labeled as https://github.com/mindsdb/mindsdb/labels/first-timers-only. After that, please look at other issues labeled as https://github.com/mindsdb/mindsdb/labels/good%20first%20issue, https://github.com/mindsdb/mindsdb/labels/help%20wanted, or https://github.com/mindsdb/mindsdb/labels/integration.
2. After you create your first PR in the MindsDB repository, please sign our CLA to become a MindsDB contributor. You can do that by leaving a comment that contains the following: `I have read the CLA Document and I hereby sign the CLA`
Thank you for contributing to MindsDB! | non_process | add a community tutorial link to the using mindsdb via mongo api machine learning examples regression page instructions page facing up here are the step by step instructions go to the docs using mongo api regression mdx file go to the end of this file and add another item to the list as follows by save the changes and create a pr hackathon issue loudspeaker mindsdb has organized a hackathon to let in more contributors to the in database ml world each hackathon issue is worth a certain amount of points that will bring you prizes by the end of the mindsdb hackathon stay tuned for the detailed rules of the mindsdb hackathon the label we are happy to welcome you on board please take a look at the rules below for first time contributors you can solve only one issue labeled as after that please look at other issues labeled as or after you create your first pr in the mindsdb repository please sign our cla to become a mindsdb contributor you can do that by leaving a comment that contains the following i have read the cla document and i hereby sign the cla thank you for contributing to mindsdb | 0 |
1,448 | 4,020,060,313 | IssuesEvent | 2016-05-16 17:03:28 | emergence-lab/emergence-lab | https://api.github.com/repos/emergence-lab/emergence-lab | closed | Run Process on Sample | backend bug process | Using the Run Process action from the sample detail page gives an error. | 1.0 | Run Process on Sample - Using the Run Process action from the sample detail page gives an error. | process | run process on sample using the run process action from the sample detail page gives an error | 1 |
2,576 | 5,332,395,989 | IssuesEvent | 2017-02-15 21:58:18 | MikePopoloski/slang | https://api.github.com/repos/MikePopoloski/slang | closed | Make macro stringification more robust | area-lexing area-preprocessor cleanup medium | Implemented in Lexer::stringify. It's not always clear what kind of spacing should be in the resulting string; the standard doesn't have much to say about it. Probably we should take a look at what existing Verilog compilers do and try to match them. | 1.0 | Make macro stringification more robust - Implemented in Lexer::stringify. It's not always clear what kind of spacing should be in the resulting string; the standard doesn't have much to say about it. Probably we should take a look at what existing Verilog compilers do and try to match them. | process | make macro stringification more robust implemented in lexer stringify it s not always clear what kind of spacing should be in the resulting string the standard doesn t have much to say about it probably we should take a look at what existing verilog compilers do and try to match them | 1 |
17,337 | 23,155,452,350 | IssuesEvent | 2022-07-29 12:35:51 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [processing] Add FORCE_RASTER (Fix #48921) and IMAGE_COMPRESSION parameters to printlayouttopdf, atlaslayouttopdf and atlaslayouttomultiplepdf algorithms (Request in QGIS) | Processing Alg 3.28 | ### Request for documentation
From pull request QGIS/qgis#49122
Author: @agiudiceandrea
QGIS version: 3.28
**[processing] Add FORCE_RASTER (Fix #48921) and IMAGE_COMPRESSION parameters to printlayouttopdf, atlaslayouttopdf and atlaslayouttomultiplepdf algorithms**
### PR Description:
## Description
Adds the `FORCE_RASTER` (Fixes #48921) and `IMAGE_COMPRESSION` parameters to the "Export print layout as PDF" (`native:printlayouttopdf`), "Export atlas layout as PDF (single file)" (`native:atlaslayouttopdf`) and "Export atlas layout as PDF (multiple files)" (`native:atlaslayouttomultiplepdf`) algorithms (https://github.com/qgis/QGIS/pull/36916).
The `FORCE_RASTER` parameter is mutually exclusive with and takes the precedence over the `FORCE_VECTOR` parameter: see the corresponding [`rasterizeWholeImage`](https://api.qgis.org/api/structQgsLayoutExporter_1_1PdfExportSettings.html#a437c1bbd1c1b980dc8f729ce1e84b18a) and [`forceVectorOutput`](https://api.qgis.org/api/structQgsLayoutExporter_1_1PdfExportSettings.html#a4250a34ad62a6d0a40fbcc717c3706f5) attributes of `QgsLayoutExporter::PdfExportSettings`.
The `IMAGE_COMPRESSION` parameter corresponds to the `FlagLosslessImageRendering` flag of [`QgsLayoutRenderContext`](https://api.qgis.org/api/classQgsLayoutRenderContext.html#aae823feceb47451b481ce3be51e37456) and has effect with QGIS builds based on Qt 5.13 or later.
### Commits tagged with [need-docs] or [FEATURE] | 1.0 | [processing] Add FORCE_RASTER (Fix #48921) and IMAGE_COMPRESSION parameters to printlayouttopdf, atlaslayouttopdf and atlaslayouttomultiplepdf algorithms (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#49122
Author: @agiudiceandrea
QGIS version: 3.28
**[processing] Add FORCE_RASTER (Fix #48921) and IMAGE_COMPRESSION parameters to printlayouttopdf, atlaslayouttopdf and atlaslayouttomultiplepdf algorithms**
### PR Description:
## Description
Adds the `FORCE_RASTER` (Fixes #48921) and `IMAGE_COMPRESSION` parameters to the "Export print layout as PDF" (`native:printlayouttopdf`), "Export atlas layout as PDF (single file)" (`native:atlaslayouttopdf`) and "Export atlas layout as PDF (multiple files)" (`native:atlaslayouttomultiplepdf`) algorithms (https://github.com/qgis/QGIS/pull/36916).
The `FORCE_RASTER` parameter is mutually exclusive with and takes the precedence over the `FORCE_VECTOR` parameter: see the corresponding [`rasterizeWholeImage`](https://api.qgis.org/api/structQgsLayoutExporter_1_1PdfExportSettings.html#a437c1bbd1c1b980dc8f729ce1e84b18a) and [`forceVectorOutput`](https://api.qgis.org/api/structQgsLayoutExporter_1_1PdfExportSettings.html#a4250a34ad62a6d0a40fbcc717c3706f5) attributes of `QgsLayoutExporter::PdfExportSettings`.
The `IMAGE_COMPRESSION` parameter corresponds to the `FlagLosslessImageRendering` flag of [`QgsLayoutRenderContext`](https://api.qgis.org/api/classQgsLayoutRenderContext.html#aae823feceb47451b481ce3be51e37456) and has effect with QGIS builds based on Qt 5.13 or later.
### Commits tagged with [need-docs] or [FEATURE] | process | add force raster fix and image compression parameters to printlayouttopdf atlaslayouttopdf and atlaslayouttomultiplepdf algorithms request in qgis request for documentation from pull request qgis qgis author agiudiceandrea qgis version add force raster fix and image compression parameters to printlayouttopdf atlaslayouttopdf and atlaslayouttomultiplepdf algorithms pr description description adds the force raster fixes and image compression parameters to the export print layout as pdf native printlayouttopdf export atlas layout as pdf single file native atlaslayouttopdf and export atlas layout as pdf multiple files native atlaslayouttomultiplepdf algorithms the force raster parameter is mutually exclusive with and takes the precedence over the force vector parameter see the corresponding and attributes of qgslayoutexporter pdfexportsettings the image compression parameter corresponds to the flaglosslessimagerendering flag of and has effect with qgis builds based on qt or later before hitting submit please build and test your changes thoroughly this is your responsibility do not rely on the qgis code maintainers to do this for you important notes for first time contributors congratulations you are about to make a pull request to qgis to make this as easy and pleasurable for everyone please take the time to read these lines before opening the pull request include a few sentences describing the overall goals for this pull request pr if applicable also add screenshots or even better screencasts include both what you changed and why you changed it if this is a pull request that adds new functionality which needs documentation give an especially detailed explanation in this case start with a short abstract and then write some text that can be copied to the documentation in the best case also mention if you think this pr needs to be backported and list relevant or fixed issues reviewing is a process done by project maintainers mostly on a volunteer basis we try to keep the overhead as small as possible and appreciate if you help us to do so by checking the following list feel free to ask in a comment if you have troubles with any of them commit messages are descriptive and explain the rationale for changes commits which fix bugs include fixes at the bottom of the commit message if this is your first pull request and you forgot to do this write the same statement into this text field with the pull request description new unit tests have been added for relevant changes you have run the scripts prepare commit sh script before each commit if you didn t do this you can also run scripts astyle all sh from your source folder you have read the qgis coding standards and this pr complies with them commits tagged with or | 1 |
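A hypothetical call shape for the new parameters from the QGIS Python console. `LAYOUT` and `OUTPUT` are the algorithm's existing parameters, while `FORCE_RASTER` and `IMAGE_COMPRESSION` are the two added by this PR; the exact value type accepted for `IMAGE_COMPRESSION` is an assumption and should be checked against the generated algorithm help:

```python
# Sketch only: parameter names follow the PR description; the value
# for IMAGE_COMPRESSION is assumed to select lossless rendering and
# takes effect only with Qt >= 5.13 builds.
import processing

processing.run('native:printlayouttopdf', {
    'LAYOUT': 'my layout',        # layout name in the open project
    'FORCE_RASTER': True,         # takes precedence over FORCE_VECTOR
    'IMAGE_COMPRESSION': 1,       # assumed: 0 = lossy (JPEG), 1 = lossless
    'OUTPUT': '/tmp/layout.pdf',
})
```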
47,861 | 13,066,295,012 | IssuesEvent | 2020-07-30 21:23:50 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | ipdf - docs are out of date and incomplete (Trac #1298) | Migrated from Trac combo simulation defect | rst docs are light, and barely gloss over IPDF usage. links are dead.
doxygen docs are incomplete (several "Comming soon"'s)
doxygen docs are out of date (refer to the old Makefile build system, "plan to use the icetray unit system")
Migrated from https://code.icecube.wisc.edu/ticket/1298
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "rst docs are light, and barely gloss over IPDF usage. links are dead.\n\ndoxygen docs are incomplete (several \"Comming soon\"'s)\ndoxygen docs are out of date (refer to the old Makefile build system, \"plan to use the icetray unit system\")",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo simulation",
"summary": "ipdf - docs are out of date and incomplete",
"priority": "blocker",
"keywords": "documentation",
"time": "2015-08-28T20:21:09",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
| 1.0 | ipdf - docs are out of date and incomplete (Trac #1298) - rst docs are light, and barely gloss over IPDF usage. links are dead.
doxygen docs are incomplete (several "Comming soon"'s)
doxygen docs are out of date (refer to the old Makefile build system, "plan to use the icetray unit system")
Migrated from https://code.icecube.wisc.edu/ticket/1298
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "rst docs are light, and barely gloss over IPDF usage. links are dead.\n\ndoxygen docs are incomplete (several \"Comming soon\"'s)\ndoxygen docs are out of date (refer to the old Makefile build system, \"plan to use the icetray unit system\")",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo simulation",
"summary": "ipdf - docs are out of date and incomplete",
"priority": "blocker",
"keywords": "documentation",
"time": "2015-08-28T20:21:09",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
| non_process | ipdf docs are out of date and incomplete trac rst docs are light and barely gloss over ipdf usage links are dead doxygen docs are incomplete several comming soon s doxygen docs are out of date refer to the old makefile build system plan to use the icetray unit system migrated from json status closed changetime description rst docs are light and barely gloss over ipdf usage links are dead n ndoxygen docs are incomplete several comming soon s ndoxygen docs are out of date refer to the old makefile build system plan to use the icetray unit system reporter nega cc resolution fixed ts component combo simulation summary ipdf docs are out of date and incomplete priority blocker keywords documentation time milestone owner kjmeagher type defect | 0 |
1,978 | 4,108,651,261 | IssuesEvent | 2016-06-06 16:48:48 | bcgov/aspb-planning | https://api.github.com/repos/bcgov/aspb-planning | closed | Prepare Retrospective presentation | New Service Deliver Bi-Modal TS Governance TS Roadmap / Integration | describe the Product/Service Design efforts from October to March, with timeline, significant events, lessons learned. | 1.0 | Prepare Retrospective presentation - describe the Product/Service Design efforts from October to March, with timeline, significant events, lessons learned. | non_process | prepare retrospective presentation describe the product service design efforts from october to march with timeline significant events lessons learned | 0 |
193,943 | 15,392,951,848 | IssuesEvent | 2021-03-03 16:10:50 | udistrital/financiera_documentacion | https://api.github.com/repos/udistrital/financiera_documentacion | opened | Reconciliations Training | Documentation | A working session with the engineer in charge was arranged on March 1, and the corresponding minutes were drawn up; they are recorded at the following link
https://docs.google.com/document/d/1QYmdSUPeNIgsqQqjTMlFd5WOyWc0Il4uPcmaQwy7OOc/edit?usp=sharing | 1.0 | Reconciliations Training - A working session with the engineer in charge was arranged on March 1, and the corresponding minutes were drawn up; they are recorded at the following link
https://docs.google.com/document/d/1QYmdSUPeNIgsqQqjTMlFd5WOyWc0Il4uPcmaQwy7OOc/edit?usp=sharing | non_process | reconciliations training a working session with the engineer in charge was arranged on march and the corresponding minutes were drawn up they are recorded at the following link | 0 |
326,854 | 9,961,656,177 | IssuesEvent | 2019-07-07 07:15:14 | dhis2/d2-ui | https://api.github.com/repos/dhis2/d2-ui | closed | Use runtime configurable DHIS 2 server for examples | priority:low stale usability wontfix | Since developers (us included) will be running the examples locally, we can't rely on any external servers for the examples. It would be best to let the user choose what DHIS 2 server to use at run time, including entering username and password for basic auth.
| 1.0 | Use runtime configurable DHIS 2 server for examples - Since developers (us included) will be running the examples locally, we can't rely on any external servers for the examples. It would be best to let the user choose what DHIS 2 server to use at run time, including entering username and password for basic auth.
| non_process | use runtime configurable dhis server for examples since developers us included will be running the examples locally we can t rely on any external servers for the examples it would be best to let the user choose what dhis server to use at run time including entering username and password for basic auth | 0 |
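The d2-ui request above reduces to a small pattern worth spelling out: read the DHIS 2 server URL and credentials at run time and attach them as an HTTP Basic auth header. A minimal TypeScript sketch, assuming a browser context; the `ServerConfig` shape is hypothetical, while the `Authorization: Basic ...` header format and the DHIS 2 `/api/me` endpoint are standard:
```typescript
interface ServerConfig {
  baseUrl: string;   // chosen by the user at run time, e.g. a demo instance
  username: string;
  password: string;
}

// Build the HTTP Basic auth header from runtime-entered credentials.
function basicAuthHeader(cfg: ServerConfig): string {
  // btoa is a browser global; in Node use Buffer.from(...).toString("base64").
  return "Basic " + btoa(`${cfg.username}:${cfg.password}`);
}

// Probe the chosen server before running the examples against it.
async function checkConnection(cfg: ServerConfig): Promise<boolean> {
  const res = await fetch(`${cfg.baseUrl}/api/me`, {
    headers: { Authorization: basicAuthHeader(cfg) },
  });
  return res.ok;
}
```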
747,088 | 26,073,029,277 | IssuesEvent | 2022-12-24 04:01:36 | pilot-light/pilotlight | https://api.github.com/repos/pilot-light/pilotlight | closed | [FEATURE]: Add Clipping/Scissoring to Drawing API | priority: Normal type: feature system: draw backend: All | ## Description
Add clipping & scissoring to the drawing API. One pull request per OS.
Progress:
* [x] Windows
* [x] Linux
* [x] MacOS | 1.0 | [FEATURE]: Add Clipping/Scissoring to Drawing API - ## Description
Add clipping & scissoring to the drawing API. One pull request per OS.
Progress:
* [x] Windows
* [x] Linux
* [x] MacOS | non_process | add clipping scissoring to drawing api description add clipping scissoring to the drawing api one pull request per os progress windows linux macos | 0 |
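For context on the feature tracked above: clipping (scissoring) restricts all subsequent draw calls to a rectangle, and the clip is then popped so later drawing is unaffected. Pilot Light's C drawing API is not shown in this issue, so as a language-neutral illustration here is the same idea expressed with the HTML canvas API in TypeScript:
```typescript
// Run a batch of draw calls with rasterization restricted to a scissor rect.
function withClipRect(
  ctx: CanvasRenderingContext2D,
  x: number, y: number, w: number, h: number,
  draw: () => void,
): void {
  ctx.save();            // remember the current clip state
  ctx.beginPath();
  ctx.rect(x, y, w, h);  // the scissor rectangle
  ctx.clip();            // everything drawn from here on is clipped to it
  draw();
  ctx.restore();         // pop the clip so later drawing is unaffected
}
```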
169,929 | 26,876,106,284 | IssuesEvent | 2023-02-05 02:54:56 | patternfly/patternfly-elements | https://api.github.com/repos/patternfly/patternfly-elements | closed | [fix] pfe-tabs | vertically align tab text | good first issue design system needs: prioritization fix | If tabs have text which is stacking to two lines, currently the text is top-aligned.

The tab text should be vertically aligned:

```
.product-tabs pfe-tab {
  display: flex;
  align-items: center;
}
``` | 1.0 | [fix] pfe-tabs | vertically align tab text - If tabs have text which is stacking to two lines, currently the text is top-aligned.

The tab text should be vertically aligned:

```
.product-tabs pfe-tab {
  display: flex;
  align-items: center;
}
``` | non_process | pfe tabs vertically align tab text if tabs have text which is stacking to two lines currently the text is top aligned the tab text should be vertically aligned product tabs pfe tab display flex align items center | 0 |
15,148 | 18,904,035,893 | IssuesEvent | 2021-11-16 06:41:39 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | reopened | Add auto-compatibility with WordFence for our background processes | type: enhancement 3rd party compatibility tool: background process priority: high effort: [M] | **Describe the bug**
We have seen cases where WordFence is preventing our background processes from running correctly.
WordFence has a learning mode that can be used to allow the background processes. But it seems to be possible to programmatically enable them by using some compatibility code. We need to explore this option to simplify our users' experience.
**Backlog Grooming (for WP Media dev team use only)**
- [x] Reproduce the problem
- [x] Identify the root cause
- [x] Scope a solution
- [x] Estimate the effort
| 1.0 | Add auto-compatibility with WordFence for our background processes - **Describe the bug**
We have seen cases where WordFence is preventing our background processes from running correctly.
WordFence has a learning mode that can be used to allow the background processes. But it seems to be possible to programmatically enable them by using some compatibility code. We need to explore this option to simplify our users' experience.
**Backlog Grooming (for WP Media dev team use only)**
- [x] Reproduce the problem
- [x] Identify the root cause
- [x] Scope a solution
- [x] Estimate the effort
| process | add auto compatibility with wordfence for our background processes describe the bug we have seen cases where wordfence is preventing our background processes from running correctly wordfence has a learning mode that can be used to allow the background processes but it seems to be possible to programatically enable them by using some compatibility code we need to explore this option to simplify our users experience backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort | 1 |
14,444 | 17,499,674,019 | IssuesEvent | 2021-08-10 07:51:40 | hexonet/whmcs-ispapi-registrar | https://api.github.com/repos/hexonet/whmcs-ispapi-registrar | closed | X-NICSE-IDNUMBER not required | improvement Technical Process Management | Same issue as in this issue: https://github.com/hexonet/whmcs-ispapi-registrar/issues/200
Client can register a .se domain without entering the ID number. Would be great if that field could be required :-) | 1.0 | X-NICSE-IDNUMBER not required - Same issue as in this issue: https://github.com/hexonet/whmcs-ispapi-registrar/issues/200
Client can register a .se domain without entering the ID number. Would be great if that field could be required :-) | process | x nicse idnumber not required same issue as in this issue client can register a se domain without entering the id number would be great if that field could be required | 1 |
209,196 | 16,178,132,429 | IssuesEvent | 2021-05-03 10:17:15 | kubewarden/docs | https://api.github.com/repos/kubewarden/docs | closed | Update the architecture docs: talk about OCI registries | documentation | The charts on the page have been updated to also show the involvement of the OCI registry. The text should be updated to reflect that. | 1.0 | Update the architecture docs: talk about OCI registries - The charts on the page have been updated to also show the involvement of the OCI registry. The text should be updated to reflect that. | non_process | update the architecture docs talk about oci registries the charts on the page have been updated to also show the involvement of the oci registry the text should be updated to reflect that | 0 |
50,362 | 21,082,032,046 | IssuesEvent | 2022-04-03 03:04:14 | openstreetmap/operations | https://api.github.com/repos/openstreetmap/operations | reopened | Wiki pages with many Wikimedia Commons images often return HTTP 504 error | service:wiki | Over the past couple weeks, I’ve noticed that any wiki page with many images from Wikimedia Commons (a couple dozen or more?) returns a 504 Gateway Timeout error the first time you try to access it but returns the expected response if you retry the request shortly after. I’m not sure if the problem is on our end or Wikimedia’s.
Some examples of affected pages:
https://wiki.openstreetmap.org/wiki/Key:maxweight
https://wiki.openstreetmap.org/wiki/Ohio/Map_features
https://wiki.openstreetmap.org/wiki/United_States_roads_tagging/Routes
https://wiki.openstreetmap.org/wiki/Tag:boundary%3Dadministrative (spotted by another mapper [on OSMUS Slack](https://osmus.slack.com/archives/C029HV951/p1603304299013800?thread_ts=1603245763.006600&cid=C029HV951))
Some of these pages could probably stand to use fewer images – the flag icons on the `maxweight` page don’t make the page much more navigable. But even those icons weren’t causing any timeouts in the past. | 1.0 | Wiki pages with many Wikimedia Commons images often return HTTP 504 error - Over the past couple weeks, I’ve noticed that any wiki page with many images from Wikimedia Commons (a couple dozen or more?) returns a 504 Gateway Timeout error the first time you try to access it but returns the expected response if you retry the request shortly after. I’m not sure if the problem is on our end or Wikimedia’s.
Some examples of affected pages:
https://wiki.openstreetmap.org/wiki/Key:maxweight
https://wiki.openstreetmap.org/wiki/Ohio/Map_features
https://wiki.openstreetmap.org/wiki/United_States_roads_tagging/Routes
https://wiki.openstreetmap.org/wiki/Tag:boundary%3Dadministrative (spotted by another mapper [on OSMUS Slack](https://osmus.slack.com/archives/C029HV951/p1603304299013800?thread_ts=1603245763.006600&cid=C029HV951))
Some of these pages could probably stand to use fewer images – the flag icons on the `maxweight` page don’t make the page much more navigable. But even those icons weren’t causing any timeouts in the past. | non_process | wiki pages with many wikimedia commons images often return http error over the past couple weeks i’ve noticed that any wiki page with many images from wikimedia commons a couple dozen or more returns a gateway timeout error the first time you try to access it but returns the expected response if you retry the request shortly after i’m not sure if the problem is on our end or wikimedia’s some examples of affected pages spotted by another mapper some of these pages could probably stand to use fewer images – the flag icons on the maxweight page don’t make the page much more navigable but even those icons weren’t causing any timeouts in the past | 0 |
8,678 | 11,810,629,390 | IssuesEvent | 2020-03-19 16:47:48 | MHRA/products | https://api.github.com/repos/MHRA/products | opened | Sane error handling & routing | EPIC - Auto Batch Process :oncoming_automobile: Enhancement 💫 STORY :book: | ## User story
As a _user_
I want _to see sensible errors_
So that _I understand what's going wrong_
## Acceptance Criteria
- [ ] Rewrite our routing so that we can implement a rejection handler for both JSON & XML requests;
- [ ] Implement custom Rejections where it makes sense to give the user greater visibility of what went wrong;
- [ ] Implement a rejection handler which understands all of the Rejections it could get and how to convert them to a meaningful error & status code;
- [ ] Use that rejection handler (`.recover(...)` in warp) to handle all rejections.
### Customer Acceptance Criteria
- [ ] Customer should get useful error messages;
- [ ] Customer should get meaningful status codes.
### Technical acceptance criteria
- [ ] Our routing should be optimized so that if, for example, `Content-Type` is set to `application/xml`, we don't try the JSON routes; the same applies if the Auth header is empty;
- [ ] We should have observability of how failures are happening.
### Data acceptance criteria
- [ ] N/A
### Testing acceptance criteria
- [ ] Needs Discovery
## Data - Potential impact
**Size**
**Value**
**Effort**
### Exit Criteria met
- [ ] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
| 1.0 | Sane error handling & routing - ## User story
As a _user_
I want _to see sensible errors_
So that _I understand what's going wrong_
## Acceptance Criteria
- [ ] Rewrite our routing so that we can implement a rejection handler for both JSON & XML requests;
- [ ] Implement custom Rejections where it makes sense to give the user greater visibility of what went wrong;
- [ ] Implement a rejection handler which understands all of the Rejections it could get and how to convert them to a meaningful error & status code;
- [ ] Use that rejection handler (`.recover(...)` in warp) to handle all rejections.
### Customer Acceptance Criteria
- [ ] Customer should get useful error messages;
- [ ] Customer should get meaningful status codes.
### Technical acceptance criteria
- [ ] Our routing should be optimized so that if, for example, `Content-Type` is set to `application/xml`, we don't try the JSON routes; the same applies if the Auth header is empty;
- [ ] We should have observability of how failures are happening.
### Data acceptance criteria
- [ ] N/A
### Testing acceptance criteria
- [ ] Needs Discovery
## Data - Potential impact
**Size**
**Value**
**Effort**
### Exit Criteria met
- [ ] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
| process | sane error handling routing user want as a user i want to see sensible errors so that i understand what s going wrong acceptance criteria rewrite our routing so that we can implement a rejection handler for both json xml requests implement custom rejections where it makes sense to give the user greater visibility of what went wrong implement a rejection handler which understands all of the rejections it could get and how to convert them to a meaningful error status code use that rejection handler recover in warp to handle all rejections customer acceptance criteria customer should get useful error messages customer should get meaningful status codes technical acceptance criteria our routing should be optimized so that if for example content type is set to application xml we don t try the json routes same for if the auth header is empty we should have observability of how failures are happening data acceptance criteria n a testing acceptance criteria needs discovery data potential impact size value effort exit criteria met backlog discovery duxd development quality assurance release and validate | 1 |
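The acceptance criteria above concern warp's Rust `Rejection` type and its `.recover(...)` combinator; as a framework-agnostic sketch of the same pattern — every rejection a route can produce is translated in one central handler into a meaningful status code and error body — here is the idea in TypeScript (the rejection class names are illustrative, not warp's or MHRA's):
```typescript
// Custom rejections that carry enough context for a useful error message.
class NotFound extends Error {}
class InvalidXml extends Error {}
class Unauthorized extends Error {}

interface HttpError { status: number; body: { error: string } }

// Central "recover" step: the one place that knows every rejection the
// routes can produce and how to map it to a status code and message.
function recover(err: unknown): HttpError {
  if (err instanceof NotFound)     return { status: 404, body: { error: err.message || "not found" } };
  if (err instanceof InvalidXml)   return { status: 400, body: { error: `invalid XML: ${err.message}` } };
  if (err instanceof Unauthorized) return { status: 401, body: { error: "missing or invalid credentials" } };
  return { status: 500, body: { error: "internal server error" } }; // unknown rejection
}
```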
152,787 | 5,868,557,878 | IssuesEvent | 2017-05-14 13:51:04 | DV8FromTheWorld/JDA | https://api.github.com/repos/DV8FromTheWorld/JDA | opened | Build 194 is not working | bug priority | When JDA tries to chunk members of a guild or apply guild sync, the startup sequence breaks as websocket messages are not sent.
This breaks client login and all bots that are in guilds requiring member chunking.
This can be seen by only having 2 log messages from JDA:
```
[13:32:40] [Info] [JDA]: Login Successful!
[13:32:41] [Info] [JDASocket]: Connected to WebSocket
```
And never receiving
```
[13:32:55] [Info] [JDA]: Finished Loading!
```
As a temporary fix you can downgrade to build 193 if you encounter this issue. | 1.0 | Build 194 is not working - When JDA tries to chunk members of a guild or apply guild sync, the startup sequence breaks as websocket messages are not sent.
This breaks client login and all bots that are in guilds requiring member chunking.
This can be seen by only having 2 log messages from JDA:
```
[13:32:40] [Info] [JDA]: Login Successful!
[13:32:41] [Info] [JDASocket]: Connected to WebSocket
```
And never receiving
```
[13:32:55] [Info] [JDA]: Finished Loading!
```
As a temporary fix you can downgrade to build 193 if you encounter this issue. | non_process | build is not working when jda tries to chunk members of a guild or apply guild sync the startup sequence breaks as websocket messages are not sent this breaks client login and all bots that are in guild requiring member chunking this can be seen by only having log messages from jda login successful connected to websocket and never receiving finished loading as a temporary fix you can downgrade to build if you encounter this issue | 0 |
19,361 | 25,491,529,376 | IssuesEvent | 2022-11-27 05:31:42 | hsmusic/hsmusic-wiki | https://api.github.com/repos/hsmusic/hsmusic-wiki | closed | Rereleased tracks shouldn't count for total duration by an artist | type: bug (user-facing) scope: data processing thing: artists | Good example: https://hsmusic.wiki/preview-en/artist/joseph-aylsworth/
Technically not a "bug" but definitely not as it should be.
Thanks for the catch, Niklink! | 1.0 | Rereleased tracks shouldn't count for total duration by an artist - Good example: https://hsmusic.wiki/preview-en/artist/joseph-aylsworth/
Technically not a "bug" but definitely not as it should be.
Thanks for the catch, Niklink! | process | rereleased tracks shouldn t count for total duration by an artist good example technically not a bug but definitely not as it should be thanks for the catch niklink | 1 |
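The fix the hsmusic issue above describes is a one-line filter once rereleases are identifiable. A sketch in TypeScript, assuming a `rereleaseOf` marker field — hypothetical here; hsmusic's actual data model may name it differently:
```typescript
interface Track {
  name: string;
  duration: number;      // seconds
  rereleaseOf?: string;  // set when this track is a rerelease of an original
}

// Only original releases contribute to an artist's total duration.
function totalDuration(tracks: Track[]): number {
  return tracks
    .filter((t) => !t.rereleaseOf)
    .reduce((sum, t) => sum + t.duration, 0);
}
```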
43,615 | 7,055,784,029 | IssuesEvent | 2018-01-04 09:54:33 | chartjs/chartjs-plugin-datalabels | https://api.github.com/repos/chartjs/chartjs-plugin-datalabels | closed | Apply datalabels to specific datasets | documentation question resolved | Hi, thanks for this awesome plugin for chartjs. I've read the docs but can't seem to find it even in the samples. I have a mixed chart with bar graphs and a line graph; what I want is to add the datalabels to specific datasets and not to the whole chart. Can you give me an example, as I have no idea what to do. BTW, thanks! | 1.0 | Apply datalabels to specific datasets - Hi, thanks for this awesome plugin for chartjs. I've read the docs but can't seem to find it even in the samples. I have a mixed chart with bar graphs and a line graph; what I want is to add the datalabels to specific datasets and not to the whole chart. Can you give me an example, as I have no idea what to do. BTW, thanks! | non_process | apply datalabels to specific datasets hi thanks for this awesome plugin for chartjs i ve read the docs but can t seem to find it even in the samples i have a mixed chart with bar graphs and a line graph what i want is to add the datalabels to specific datasets and not to the whole chart can you give me an example as i have no idea what to do btw thanks | 0 |
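The usual answer to the question above is that chartjs-plugin-datalabels reads options at several levels, including per dataset: disable labels chart-wide under `options.plugins.datalabels`, then re-enable them on the datasets that should show them via each dataset's `datalabels` key. A sketch in TypeScript (option paths should be verified against the plugin docs for the version in use):
```typescript
const config = {
  type: "bar",
  data: {
    labels: ["Q1", "Q2", "Q3"],
    datasets: [
      {
        type: "line",
        label: "Trend",
        data: [3, 5, 4],
        // Per-dataset plugin options: labels on for this dataset only.
        datalabels: { display: true },
      },
      {
        label: "Sales",
        data: [2, 4, 6],
        // No dataset-level override, so this inherits the chart-wide default.
      },
    ],
  },
  options: {
    // Chart-wide default: hide datalabels everywhere.
    plugins: { datalabels: { display: false } },
  },
};
```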
17,028 | 22,406,802,246 | IssuesEvent | 2022-06-18 04:41:50 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | Update example configuration for metrics transform processor | bug good first issue proc: metricstransformprocessor | The configuration on https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/metricstransformprocessor/README.md#configuration appears to be missing `metricstransform:` before ` transforms:`. This is confusing to customers.
| 1.0 | Update example configuration for metrics transform processor - The configuration on https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/metricstransformprocessor/README.md#configuration appears to be missing `metricstransform:` before ` transforms:`. This is confusing to customers.
| process | update example configuration for metrics transform processor the configuration on appears to be missing metricstransform before transforms this is confusing to customers | 1 |
13,405 | 15,878,143,743 | IssuesEvent | 2021-04-09 10:35:15 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] App participant registry > Text change | Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev | App participant registry > Text change > Status 'Pending Activation' should be 'Pending activation'

| 3.0 | [PM] App participant registry > Text change - App participant registry > Text change > Status 'Pending Activation' should be 'Pending activation'

| process | app participant registry text change app participant registry text change status pending activation should be pending activation | 1 |
18,101 | 24,126,989,809 | IssuesEvent | 2022-09-21 02:04:25 | bitPogo/kmock | https://api.github.com/repos/bitPogo/kmock | closed | Decouple BuildIns from Names | enhancement kmock-processor | ## Description
Currently BuildIns are resolved by their name. Overloaded variants should nevertheless be resolved even if no BuildIns are set. However, this is not the case at the moment. | 1.0 | Decouple BuildIns from Names - ## Description
Currently BuildIns are resolved by their name. Overloaded variants should nevertheless be resolved even if no BuildIns are set. However, this is not the case at the moment. | process | decouple buildins from names description currently buildins are resolved by their name overloaded variants should nevertheless be resolved even if no buildins are set however this is not the case at the moment | 1 |
21,127 | 28,094,719,907 | IssuesEvent | 2023-03-30 15:04:52 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | `queueMicrotask()` is called before `process.nextTick()` in ESM | process | ### Version
v20.0.0-pre
### Platform
Linux deokjinkim-MS-7885 5.15.0-67-generic #74-Ubuntu SMP Wed Feb 22 14:14:39 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
### Subsystem
process
### What steps will reproduce the bug?
When I run the ESM example from the documentation, the actual result is different from the expected result.
https://nodejs.org/dist/latest-v19.x/docs/api/process.html#when-to-use-queuemicrotask-vs-processnexttick
```
import { nextTick } from 'node:process';
Promise.resolve().then(() => console.log(2));
queueMicrotask(() => console.log(3));
nextTick(() => console.log(1));
// Output:
// 1
// 2
// 3
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
1
2
3
### What do you see instead?
2
3
1
### Additional information
In the CJS example, the actual result is the same as the expected result.
```
const { nextTick } = require('node:process');
Promise.resolve().then(() => console.log(2));
queueMicrotask(() => console.log(3));
nextTick(() => console.log(1));
// Output:
// 1
// 2
// 3
``` | 1.0 | `queueMicrotask()` is called before `process.nextTick()` in ESM - ### Version
v20.0.0-pre
### Platform
Linux deokjinkim-MS-7885 5.15.0-67-generic #74-Ubuntu SMP Wed Feb 22 14:14:39 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
### Subsystem
process
### What steps will reproduce the bug?
When I run the ESM example from the documentation, the actual result is different from the expected result.
https://nodejs.org/dist/latest-v19.x/docs/api/process.html#when-to-use-queuemicrotask-vs-processnexttick
```
import { nextTick } from 'node:process';
Promise.resolve().then(() => console.log(2));
queueMicrotask(() => console.log(3));
nextTick(() => console.log(1));
// Output:
// 1
// 2
// 3
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
1
2
3
### What do you see instead?
2
3
1
### Additional information
In the CJS example, the actual result is the same as the expected result.
```
const { nextTick } = require('node:process');
Promise.resolve().then(() => console.log(2));
queueMicrotask(() => console.log(3));
nextTick(() => console.log(1));
// Output:
// 1
// 2
// 3
``` | process | queuemicrotask is called before process nexttick in esm version pre platform linux deokjinkim ms generic ubuntu smp wed feb utc gnu linux subsystem process what steps will reproduce the bug when i run example of esm in document actual result is different from expected result import nexttick from node process promise resolve then console log queuemicrotask console log nexttick console log output how often does it reproduce is there a required condition always what is the expected behavior why is that the expected behavior what do you see instead additional information in example of cjs actual result is same to expected result const nexttick require node process promise resolve then console log queuemicrotask console log nexttick console log output | 1 |
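A portable takeaway from the report above: since the relative drain order of the `process.nextTick()` queue and the microtask queue differs between the CJS and ESM entry points, code that needs a fixed order should schedule each step from inside its predecessor instead of racing the two queues. A small sketch:
```typescript
import { nextTick } from "node:process";

// Prints 1, 2, 3 regardless of queue drain order, because each callback
// is only scheduled after the previous one has actually run.
nextTick(() => {
  console.log(1);
  queueMicrotask(() => {
    console.log(2);
    queueMicrotask(() => console.log(3));
  });
});
```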
29,023 | 11,706,182,749 | IssuesEvent | 2020-03-07 20:27:25 | vlaship/hadoop-wc | https://api.github.com/repos/vlaship/hadoop-wc | opened | CVE-2018-14718 (High) detected in jackson-databind-2.9.5.jar | security vulnerability | ## CVE-2018-14718 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- hadoop-client-3.2.0.jar (Root Library)
- hadoop-common-3.2.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/hadoop-wc/commit/f1363bd417f4ca7591b0fef369881a3acd4cdeb5">f1363bd417f4ca7591b0fef369881a3acd4cdeb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic deserialization.
<p>Publish Date: 2019-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718>CVE-2018-14718</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14718">https://nvd.nist.gov/vuln/detail/CVE-2018-14718</a></p>
<p>Release Date: 2019-01-02</p>
<p>Fix Resolution: 2.9.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-14718 (High) detected in jackson-databind-2.9.5.jar - ## CVE-2018-14718 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- hadoop-client-3.2.0.jar (Root Library)
- hadoop-common-3.2.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/hadoop-wc/commit/f1363bd417f4ca7591b0fef369881a3acd4cdeb5">f1363bd417f4ca7591b0fef369881a3acd4cdeb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic deserialization.
<p>Publish Date: 2019-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718>CVE-2018-14718</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14718">https://nvd.nist.gov/vuln/detail/CVE-2018-14718</a></p>
<p>Release Date: 2019-01-02</p>
<p>Fix Resolution: 2.9.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy hadoop client jar root library hadoop common jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before might allow remote attackers to execute arbitrary code by leveraging failure to block the ext class from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
381,514 | 26,456,841,598 | IssuesEvent | 2023-01-16 14:54:41 | CryptoBlades/cryptoblades | https://api.github.com/repos/CryptoBlades/cryptoblades | closed | [Feature] - Website Team Image Updates | documentation enhancement | ### Prerequisites
- [X] I checked to make sure that this feature has not already been filed
- [X] I'm reporting this information to the correct repository
- [X] I understand enough about this issue to complete a comprehensive document
### Describe the feature and its requirements
Please remove David Diebels from the team sections of the following websites as he is no longer a part of the team:
- [x] CryptoBlades - https://www.cryptoblades.io/

- [x] CryptoBlades: Kingdoms - https://cryptobladeskingdoms.io/#team

- [x] Riveted Games - https://rivetedtechnology.com/

- [x] Riveted Games >Services > Consulting - https://rivetedtechnology.com/consulting

### Is your feature request related to an existing issue? Please describe.
No
### Is there anything stopping this feature being completed?
No
### Describe alternatives you've considered
N/A
### Additional context
David should be removed from all websites. If there are any that were missed in the listing above, it was not intentional | 1.0 | [Feature] - Website Team Image Updates - ### Prerequisites
- [X] I checked to make sure that this feature has not already been filed
- [X] I'm reporting this information to the correct repository
- [X] I understand enough about this issue to complete a comprehensive document
### Describe the feature and its requirements
Please remove David Diebels from the team sections of the following websites as he is no longer a part of the team:
- [x] CryptoBlades - https://www.cryptoblades.io/

- [x] CryptoBlades: Kingdoms - https://cryptobladeskingdoms.io/#team

- [x] Riveted Games - https://rivetedtechnology.com/

- [x] Riveted Games >Services > Consulting - https://rivetedtechnology.com/consulting

### Is your feature request related to an existing issue? Please describe.
No
### Is there anything stopping this feature being completed?
No
### Describe alternatives you've considered
N/A
### Additional context
David should be removed from all websites. If there are any that were missed in the listing above, it was not intentional | non_process | website team image updates prerequisites i checked to make sure that this feature has not already been filed i m reporting this information to the correct repository i understand enough about this issue to complete a comprehensive document describe the feature and its requirements please remove david diebels from the team sections of the following websites as he is no longer a part of the team cryptoblades cryptoblades kingdoms riveted games riveted games services consulting is your feature request related to an existing issue please describe no is there anything stopping this feature being completed no describe alternatives you ve considered n a additional context david should be removed from all websites if there are any that were missed in the listing above it was not intentional | 0 |
12,871 | 5,257,900,935 | IssuesEvent | 2017-02-02 21:48:38 | quicklisp/quicklisp-projects | https://api.github.com/repos/quicklisp/quicklisp-projects | closed | Please Add Lichat-TCP-Server | canbuild | This is a simple, threaded, TCP-based server for the Lichat Protocol.
Author: Nicolas Hafner
Source: https://github.com/Shirakumo/lichat-tcp-server.git
Documentation: https://shirakumo.github.io/lichat-tcp-server/ | 1.0 | Please Add Lichat-TCP-Server - This is a simple, threaded, TCP-based server for the Lichat Protocol.
Author: Nicolas Hafner
Source: https://github.com/Shirakumo/lichat-tcp-server.git
Documentation: https://shirakumo.github.io/lichat-tcp-server/ | non_process | please add lichat tcp server this is a simple threaded tcp based server for the lichat protocol author nicolas hafner source documentation | 0 |
26,466 | 7,840,649,544 | IssuesEvent | 2018-06-18 17:01:14 | wps-2017-2018-apcs/whs | https://api.github.com/repos/wps-2017-2018-apcs/whs | closed | ¡BROKEN BUILD! | broken-build | By removing `public enum State {BOMB, FLAG, DEFAULT};` from Tile.java, the build is now broken, as Minesweeper.java relies on them. Use git status to see what other files you must commit to fix the build. | 1.0 | ¡BROKEN BUILD! - By removing `public enum State {BOMB, FLAG, DEFAULT};` from Tile.java, the build is now broken, as Minesweeper.java relies on them. Use git status to see what other files you must commit to fix the build. | non_process | ¡broken build by removing public enum state bomb flag default from tile java the build is now broken as minesweeper java relies on them use git status to see what other files you must commit to fix the build | 0 |
118,546 | 25,332,789,847 | IssuesEvent | 2022-11-18 14:32:05 | arduino/arduino-serial-plotter-webapp | https://api.github.com/repos/arduino/arduino-serial-plotter-webapp | closed | Do a display range adjustable X-axis, Add repeatable options when displaying the same contents | type: enhancement topic: code | When I use the Serial Plotter, I want to extend the display range of the horizontal axis time so that the latest data acquisition and the previous data acquisition can be displayed on the same graph.
Second, when the x-coordinate data is a cyclic change in a range, I want the x-coordinate data to be displayed within the range instead of forward. | 1.0 | Do a display range adjustable X-axis, Add repeatable options when displaying the same contents - When I use the Serial Plotter, I want to extend the display range of the horizontal axis time so that the latest data acquisition and the previous data acquisition can be displayed on the same graph.
Second, when the x-coordinate data is a cyclic change in a range, I want the x-coordinate data to be displayed within the range instead of forward. | non_process | do a display range adjustable x axis add repeatable options when displaying the same contents when i use the serial plotter i want to extend the display range of the horizontal axis time so that the latest data acquisition and the previous data acquisition can be displayed on the same graph second when the x coordinate data is a cyclic change in a range i want the x coordinate data to be displayed within the range instead of forward | 0 |
363,274 | 25,415,852,046 | IssuesEvent | 2022-11-22 23:56:17 | opensim-org/opensim-core | https://api.github.com/repos/opensim-org/opensim-core | closed | Update Confluence wiki based on V&V paper guidelines | Documentation | The V&V paper provides guidelines for what reasonable errors in simulations are. We should update the Confluence wiki to reflect these new guidelines.
| 1.0 | Update Confluence wiki based on V&V paper guidelines - The V&V paper provides guidelines for what reasonable errors in simulations are. We should update the Confluence wiki to reflect these new guidelines.
| non_process | update confluence wiki based on v v paper guidelines the v v paper provides guidelines for what reasonable errors in simulations are we should update the confluence wiki to reflect these new guidelines | 0 |
6,134 | 8,998,465,460 | IssuesEvent | 2019-02-02 22:03:16 | leg2015/Aagos | https://api.github.com/repos/leg2015/Aagos | closed | Fix Data node overlap entry | Data Tracking Data visualization data processing | Currently, data for each overlap section is added as `1_overlap` etc, but in python a variable can't start with a number so I have to go in and manually change all column headers. Come up with a versatile, streamlined way to overcome this issue. | 1.0 | Fix Data node overlap entry - Currently, data for each overlap section is added as `1_overlap` etc, but in python a variable can't start with a number so I have to go in and manually change all column headers. Come up with a versatile, streamlined way to overcome this issue. | process | fix data node overlap entry currently data for each overlap section is added as overlap etc but in python a variable can t start with a number so i have to go in and manually change all column headers come up with a versatile streamlined way to overcome this issue | 1 |
29,768 | 13,169,588,216 | IssuesEvent | 2020-08-11 13:58:19 | dockstore/dockstore | https://api.github.com/repos/dockstore/dockstore | closed | Improve addUserToDockstoreWorkflows performance | enhancement review web-service | **Is your feature request related to a problem? Please describe.**
The [endpoint](https://github.com/dockstore/dockstore/blob/2caf914821431764f13228fad1539bf0d1588656/dockstore-webservice/src/main/java/io/dockstore/webservice/resources/UserResource.java#L805) to discover a user's workflows is inefficient and has the potential to consume a lot of memory.
The current implementation:
1. Fetches all source control orgs the user belongs to, except hosted.
2. Gets all workflows for each org
3. Adds the user to each workflow from step 2
4. Fetches all of the user's workflows
This ends up with all of the user's workflows loaded into memory. Twice, although maybe Hibernate optimizes that.
**Describe the solution you'd like**
Some of this may be rendered moot by our upcoming and ongoing optimizations, but:
* Step 4 could arguably be skipped; we've already fetched all of the users workflows in steps 1-3, and we could just return that, adding the hosted workflows. However, there is a edge case where the user belongs to a workflow but no longer belongs to the source control org, so I'm not sure.
* Depending on what we do for the previous step, steps 2 and 3 could be changed in a couple of ways
* Only fetch workflows for each org that don't already have the user
* Don't even fetch the workflows at all, and instead do a JPQL update.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-1462)
┆Issue Type: Story
┆Fix Versions: Dockstore 1.9.X
┆Sprint: Sprint 39 Narwhal
┆Issue Number: DOCK-1462
| 1.0 | Improve addUserToDockstoreWorkflows performance - **Is your feature request related to a problem? Please describe.**
The [endpoint](https://github.com/dockstore/dockstore/blob/2caf914821431764f13228fad1539bf0d1588656/dockstore-webservice/src/main/java/io/dockstore/webservice/resources/UserResource.java#L805) to discover a user's workflows is inefficient and has the potential to consume a lot of memory.
The current implementation:
1. Fetches all source control orgs the user belongs to, except hosted.
2. Gets all workflows for each org
3. Adds the user to each workflow from step 2
4. Fetches all of the user's workflows
This ends up with all of the user's workflows loaded into memory. Twice, although maybe Hibernate optimizes that.
**Describe the solution you'd like**
Some of this may be rendered moot by our upcoming and ongoing optimizations, but:
* Step 4 could arguably be skipped; we've already fetched all of the users workflows in steps 1-3, and we could just return that, adding the hosted workflows. However, there is a edge case where the user belongs to a workflow but no longer belongs to the source control org, so I'm not sure.
* Depending on what we do for the previous step, steps 2 and 3 could be changed in a couple of ways
* Only fetch workflows for each org that don't already have the user
* Don't even fetch the workflows at all, and instead do a JPQL update.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-1462)
┆Issue Type: Story
┆Fix Versions: Dockstore 1.9.X
┆Sprint: Sprint 39 Narwhal
┆Issue Number: DOCK-1462
| non_process | improve addusertodockstoreworkflows performance is your feature request related to a problem please describe the to discover a user s workflows is inefficient and has the potential to consume a lot of memory the current implementation fetches all source control orgs the user belongs to except hosted gets all workflows for each org adds the user to each workflow from step fetches all of the user s workflows this ends up with all of the user s workflows loaded into memory twice although maybe hibernate optimizes that describe the solution you d like some of this may be rendered moot by our upcoming and ongoing optimizations but step could arguably be skipped we ve already fetched all of the users workflows in steps and we could just return that adding the hosted workflows however there is a edge case where the user belongs to a workflow but no longer belongs to the source control org so i m not sure depending on what we do for the previous step steps and could be changed in a couple of ways only fetch workflows for each org that don t already have the user don t even fetch the workflows at all and instead do a jpql update ┆issue is synchronized with this ┆issue type story ┆fix versions dockstore x ┆sprint sprint narwhal ┆issue number dock | 0 |
10,209 | 13,067,981,162 | IssuesEvent | 2020-07-31 02:10:53 | kevinhenneigh/ECommerceSite | https://api.github.com/repos/kevinhenneigh/ECommerceSite | closed | Add CI Pipeline | developer process | Add continuous integration pipeline that will check to make sure code in a pull request compiles successfully | 1.0 | Add CI Pipeline - Add continuous integration pipeline that will check to make sure code in a pull request compiles successfully | process | add ci pipeline add continuous integration pipeline that will check to make sure code in a pull request compiles successfully | 1 |
32,001 | 6,676,598,252 | IssuesEvent | 2017-10-05 06:47:38 | proarc/proarc | https://api.github.com/repos/proarc/proarc | closed | Mandatory fields - monograph supplement | auto-migrated Priority-Low Type-Defect | ```
Please fix:
1. type of resource - value names in English
2. subject - name - the field is missing
3. identifier - type - the values oclc and sysno are missing
```
Original issue reported on code.google.com by `daneck...@knav.cz` on 25 Jun 2014 at 3:20
| 1.0 | Mandatory fields - monograph supplement - ```
Please fix:
1. type of resource - value names in English
2. subject - name - the field is missing
3. identifier - type - the values oclc and sysno are missing
```
Original issue reported on code.google.com by `daneck...@knav.cz` on 25 Jun 2014 at 3:20
| non_process | mandatory fields monograph supplement please fix type of resource value names in english subject name the field is missing identifier type the values oclc and sysno are missing original issue reported on code google com by daneck knav cz on jun at | 0 |
154,411 | 5,918,603,865 | IssuesEvent | 2017-05-22 15:43:04 | ZooeyMiller/anna-freud-hackday-makatalk | https://api.github.com/repos/ZooeyMiller/anna-freud-hackday-makatalk | opened | user story: as a clinician, I would like to give service users the ability to replay makaton answer options | priority-1 user-story | so that they are able to play again the ones they missed or didn't fully understand | 1.0 | user story: as a clinician, I would like to give service users the ability to replay makaton answer options - so that they are able to play again the ones they missed or didn't fully understand | non_process | user story as a clinician i would like to give service users the ability to replay makaton answer options so that they are able to play again the ones they missed or didn t fully understand | 0 |
14,704 | 17,874,742,050 | IssuesEvent | 2021-09-07 00:31:40 | lynnandtonic/nestflix.fun | https://api.github.com/repos/lynnandtonic/nestflix.fun | closed | Add Mister Parker's Cul-De-Sac | suggested title in process | Please add as much of the following info as you can:
Title: Mister Parker's Cul-De-Sac
Type (film/tv show): TV Show
Film or show in which it appears: _Legends of Tomorrow_, Season 5, Episode 6 "Mister Parker's Cul-De-Sac"
Is the parent film/show streaming anywhere? [Netflix](https://www.netflix.com/search?q=Legends%20of%20Tomorrow&jbv=80066080), [The CW](https://www.cwtv.com/shows/dcs-legends-of-tomorrow/) (most recent season only)
About when in the parent film/show does it appear? Throughout
Actual footage of the film/show can be seen (yes/no)? Yes, there were also shorts from the show that had been released on YouTube at one point.
| 1.0 | Add Mister Parker's Cul-De-Sac - Please add as much of the following info as you can:
Title: Mister Parker's Cul-De-Sac
Type (film/tv show): TV Show
Film or show in which it appears: _Legends of Tomorrow_, Season 5, Episode 6 "Mister Parker's Cul-De-Sac"
Is the parent film/show streaming anywhere? [Netflix](https://www.netflix.com/search?q=Legends%20of%20Tomorrow&jbv=80066080), [The CW](https://www.cwtv.com/shows/dcs-legends-of-tomorrow/) (most recent season only)
About when in the parent film/show does it appear? Throughout
Actual footage of the film/show can be seen (yes/no)? Yes, there were also shorts from the show that had been released on YouTube at one point.
| process | add mister parker s cul de sac please add as much of the following info as you can title mister parker s cul de sac type film tv show tv show film or show in which it appears legends of tomorrow season episode mister parker s cul de sac is the parent film show streaming anywhere most recent season only about when in the parent film show does it appear throughout actual footage of the film show can be seen yes no yes there were also shorts from the show that had been released on youtube at one point | 1 |
48,333 | 13,325,321,539 | IssuesEvent | 2020-08-27 09:47:14 | solidify/fitbit-api-demo | https://api.github.com/repos/solidify/fitbit-api-demo | opened | CVE-2018-15756 (High) detected in spring-web-4.3.2.RELEASE.jar | security vulnerability | ## CVE-2018-15756 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-4.3.2.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /tmp/ws-scm/fitbit-api-demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/4.3.2.RELEASE/spring-web-4.3.2.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **spring-web-4.3.2.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/solidify/fitbit-api-demo/commit/0b2ec4ebe69f408230a396984bb6ebaef3b6b3ab">0b2ec4ebe69f408230a396984bb6ebaef3b6b3ab</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework, version 5.1, versions 5.0.x prior to 5.0.10, versions 4.3.x prior to 4.3.20, and older unsupported versions on the 4.2.x branch provide support for range requests when serving static resources through the ResourceHttpRequestHandler, or starting in 5.0 when an annotated controller returns an org.springframework.core.io.Resource. A malicious user (or attacker) can add a range header with a high number of ranges, or with wide ranges that overlap, or both, for a denial of service attack. This vulnerability affects applications that depend on either spring-webmvc or spring-webflux. Such applications must also have a registration for serving static resources (e.g. JS, CSS, images, and others), or have an annotated controller that returns an org.springframework.core.io.Resource. Spring Boot applications that depend on spring-boot-starter-web or spring-boot-starter-webflux are ready to serve static resources out of the box and are therefore vulnerable.
<p>Publish Date: 2018-10-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-15756>CVE-2018-15756</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pivotal.io/security/cve-2018-15756">https://pivotal.io/security/cve-2018-15756</a></p>
<p>Release Date: 2018-10-18</p>
<p>Fix Resolution: 4.3.20,5.0.10,5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_process | 0
21,046 | 27,992,170,789 | IssuesEvent | 2023-03-27 05:17:29 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | [attributeprocessor] Support applying an action conditionally on the existence of another attribute | Stale processor/attributes closed as inactive | **Is your feature request related to a problem? Please describe.**
We would like to be able to apply an insert/update/upsert action only if another specified attribute exists.
**Describe the solution you'd like**
Implement this change in the AttrProc whose logic is shared by both the Attribute Processor and Resource Processor.
Add the ability to configure an attribute key which must exist for the action to be applied.
An example configuration would look like
```yaml
processors:
attributes:
actions:
- action: upsert
key: k8s.cluster.name
from_attribute: k8s-cluster
dependent_on_attribute: k8s-pod-name
```
In this case, the attribute k8s.cluster.name will be added with the value from k8s-cluster only if the attribute k8s-pod-name exists.
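Since the collector itself is written in Go, the following is only a language-agnostic sketch (rendered in Java for consistency with the other examples in this document) of the gating logic being proposed; the method name and the flat map model of attributes are assumptions, not the real AttrProc API:
```java
import java.util.HashMap;
import java.util.Map;

public class ConditionalUpsertSketch {
    // Apply an upsert of `key` from `fromAttribute` only when the attribute
    // named `dependentOnAttribute` is present on the record.
    static void upsertIfDependentExists(Map<String, String> attrs,
                                        String key,
                                        String fromAttribute,
                                        String dependentOnAttribute) {
        if (!attrs.containsKey(dependentOnAttribute)) {
            return; // dependency missing: the action is skipped entirely
        }
        String value = attrs.get(fromAttribute);
        if (value != null) {
            attrs.put(key, value); // upsert: insert or overwrite
        }
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("k8s-cluster", "prod-cluster");
        attrs.put("k8s-pod-name", "api-7f9c");
        upsertIfDependentExists(attrs, "k8s.cluster.name", "k8s-cluster", "k8s-pod-name");
        System.out.println(attrs); // k8s.cluster.name is now present
    }
}
```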
We have a custom processor that implements this feature which we are willing to contribute. | 1.0 | process | 1
256,479 | 27,561,678,457 | IssuesEvent | 2023-03-07 22:39:34 | samqws-marketing/amzn-ion-hive-serde | https://api.github.com/repos/samqws-marketing/amzn-ion-hive-serde | closed | CVE-2019-14540 (High) detected in jackson-databind-2.6.5.jar - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-14540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /integration-test/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar</p>
<p>
Dependency Hierarchy:
- hive-serde-2.3.9.jar (Root Library)
- hive-common-2.3.9.jar
- :x: **jackson-databind-2.6.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/amzn-ion-hive-serde/commit/ffb6641ebb10aac58bb7eec412635e91e79fac24">ffb6641ebb10aac58bb7eec412635e91e79fac24</a></p>
<p>Found in base branch: <b>0.3.0</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariConfig.
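For context, the gadget only fires in applications that enable Jackson's polymorphic default typing and deserialize untrusted input on an affected version with the gadget class on the classpath. A minimal, illustrative sketch of that risky configuration (not code from this repository) is:
```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashMap;
import java.util.Map;

public class DefaultTypingSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Global default typing embeds Java class names in the JSON and, on
        // deserialization, instantiates whatever class the payload names.
        // Combined with a gadget such as com.zaxxer.hikari.HikariConfig on
        // the classpath, this is what CVE-2019-14540 exploits.
        mapper.enableDefaultTyping(); // deprecated and unsafe on affected versions
        Map<String, Object> data = new HashMap<>();
        data.put("answer", 42L);
        // Note how the emitted JSON carries a type name an attacker could swap.
        System.out.println(mapper.writeValueAsString(data));
    }
}
```
Upgrading as suggested below blocks this particular gadget class, though avoiding global default typing on untrusted input is the safer posture in general.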
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-14540>CVE-2019-14540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14540">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14540</a></p>
<p>Release Date: 2019-09-15</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.6.7.4</p>
<p>Direct dependency fix Resolution (org.apache.hive:hive-serde): 3.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | non_process | 0
348,503 | 10,443,491,598 | IssuesEvent | 2019-09-18 14:57:32 | alphagov/govuk-frontend | https://api.github.com/repos/alphagov/govuk-frontend | closed | Investigate flaky banner test [TIMEBOX: 1 hour] | Effort: hours Priority: low | Maybe to do with cookies being left around; I'm not sure, but it is skipped for now: https://github.com/alphagov/govuk-frontend/pull/1383/commits/c44602d578a8a4a8933028823d1516d5ad6a99f0#diff-bac8f4ef40104d619366dd4794cacd26L32 | 1.0 | non_process | 0
9,342 | 12,343,255,265 | IssuesEvent | 2020-05-15 03:24:05 | google/mtail | https://api.github.com/repos/google/mtail | closed | Release builds for power pc users | process |
Hi,
is it fine to open a pull request to also enable release builds for PPC users?
It's just a one-line change in https://github.com/google/mtail/blob/master/Makefile#L143.
Best,
Tobias | 1.0 | process | 1
10,360 | 13,183,319,726 | IssuesEvent | 2020-08-12 17:17:58 | department-of-veterans-affairs/notification-api | https://api.github.com/repos/department-of-veterans-affairs/notification-api | opened | Internal DNS entry | Needs Prioritization Process Task | Filip to add details
This story depends on the Infra - need the Load Balancer | 1.0 | process | 1
5,002 | 7,836,109,850 | IssuesEvent | 2018-06-17 15:28:41 | GoogleCloudPlatform/google-cloud-cpp | https://api.github.com/repos/GoogleCloudPlatform/google-cloud-cpp | closed | Troubleshoot flaky polling_policy_test | bigtable priority: p0 type: process | The policy test is failing from time to time:
```
44/60 Test #44: polling_policy_test .........................................***Failed 0.41 sec
Running main() from gtest_main.cc
[==========] Running 3 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 3 tests from GenericPollingPolicy
[ RUN ] GenericPollingPolicy.Simple
/v/google/cloud/bigtable/polling_policy_test.cc:59: Failure
Value of: actual
Actual: false
Expected: true
```
| 1.0 | process | 1
287,310 | 21,649,387,103 | IssuesEvent | 2022-05-06 07:43:13 | baobabsoluciones/cornflow | https://api.github.com/repos/baobabsoluciones/cornflow | opened | Update all documentation with core library and move documentation inside monorepo | documentation | The documentation should live in the main folder, not in cornflow-server, and should include the documentation of cornflow-core as well. | 1.0 | non_process | 0
22,471 | 31,387,985,186 | IssuesEvent | 2023-08-26 01:56:54 | Warzone2100/map-submission | https://api.github.com/repos/Warzone2100/map-submission | opened | [MAP]: Crop_Circles | map unprocessed | ### Upload Map
[10c-Crop_Circles.zip](https://github.com/Warzone2100/map-submission/files/12444374/10c-Crop_Circles.zip)
### Authorship
Mine: I am the author of this map
### Map Description (optional)
```text
A concept built around varied siding and flanking, with room for reversals and surprise offensives. While all bases are grouped in the same boxed area, victory will demand a single, determined and powerful push from one side to force the final decision. Any snowball effect here is really minimal.
```
### Notes for Reviewers (optional)
_No response_ | 1.0 | process | 1
7,274 | 10,428,326,762 | IssuesEvent | 2019-09-16 22:13:20 | DO-CV/sara | https://api.github.com/repos/DO-CV/sara | closed | [MAINT] Improve unit testing of Gaussian pyramid computation. | ImageProcessing easy | See if we can clean the code as well.
| 1.0 | process | 1
56,045 | 6,499,508,223 | IssuesEvent | 2017-08-22 21:51:00 | openid/OpenYOLO-Android | https://api.github.com/repos/openid/OpenYOLO-Android | closed | Test app: do not allow save credential until all mandatory fields populated | bug testapp | You can press the "save credential" button in the test app when mandatory fields are not set:

This results in a crash, due to the RequireViolation thrown from the Credential builder. Either catch and handle this gracefully, or disable the button when mandatory fields do not have values. | 1.0 | non_process | 0
12,462 | 14,937,001,038 | IssuesEvent | 2021-01-25 14:06:47 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Android] Study Activities > Remove 50% completion pop-up | Android Bug P2 Process: Tested dev | Steps:
1. Log in and enroll in a study
2. Complete 50% of the study activities
3. Observe the pop-up
Actual: 50% completion pop-up is displayed
Expected: Remove 50% completion pop-up | 1.0 | process | 1
20,375 | 27,029,508,200 | IssuesEvent | 2023-02-12 02:00:06 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Fri, 10 Feb 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
### Optimized Hybrid Focal Margin Loss for Crack Segmentation
- **Authors:** Jiajie Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2302.04395
- **Pdf link:** https://arxiv.org/pdf/2302.04395
- **Abstract**
Many loss functions have been derived from cross-entropy loss functions such as large-margin softmax loss and focal loss. The large-margin softmax loss makes the classification more rigorous and prevents overfitting. The focal loss alleviates class imbalance in object detection by down-weighting the loss of well-classified examples. Recent research has shown that these two loss functions derived from cross entropy have valuable applications in the field of image segmentation. However, to the best of our knowledge, there is no unified formulation that combines these two loss functions so that they can not only be transformed mutually, but can also be used to simultaneously address class imbalance and overfitting. To this end, we subdivide the entropy-based loss into the regularizer-based entropy loss and the focal-based entropy loss, and propose a novel optimized hybrid focal loss to handle extreme class imbalance and prevent overfitting for crack segmentation. We have evaluated our proposal in comparison with three crack segmentation datasets (DeepCrack-DB, CRACK500 and our private PanelCrack dataset). Our experiments demonstrate that the focal margin component can significantly increase the IoU of cracks by 0.43 on DeepCrack-DB and 0.44 on our PanelCrack dataset, respectively.
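For readers unfamiliar with the ingredients the abstract combines, the standard focal loss it builds on (Lin et al.) is commonly written as a down-weighted cross entropy, where p_t is the predicted probability of the true class, alpha_t is a class-balancing weight, and gamma >= 0 controls how strongly well-classified examples are suppressed:
```latex
\mathrm{FL}(p_t) = -\,\alpha_t \,(1 - p_t)^{\gamma}\,\log(p_t)
```
With gamma = 0 this reduces to the (weighted) cross-entropy loss; the paper's hybrid adds a margin-based component on top of this form.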
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Diverse Human Motion Prediction Guided by Multi-Level Spatial-Temporal Anchors
- **Authors:** Sirui Xu, Yu-Xiong Wang, Liang-Yan Gui
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.04860
- **Pdf link:** https://arxiv.org/pdf/2302.04860
- **Abstract**
Predicting diverse human motions given a sequence of historical poses has received increasing attention. Despite rapid progress, existing work captures the multi-modal nature of human motions primarily through likelihood-based sampling, where the mode collapse has been widely observed. In this paper, we propose a simple yet effective approach that disentangles randomly sampled codes with a deterministic learnable component named anchors to promote sample precision and diversity. Anchors are further factorized into spatial anchors and temporal anchors, which provide attractively interpretable control over spatial-temporal disparity. In principle, our spatial-temporal anchor-based sampling (STARS) can be applied to different motion predictors. Here we propose an interaction-enhanced spatial-temporal graph convolutional network (IE-STGCN) that encodes prior knowledge of human motions (e.g., spatial locality), and incorporate the anchors into it. Extensive experiments demonstrate that our approach outperforms state of the art in both stochastic and deterministic prediction, suggesting it as a unified framework for modeling human motions. Our code and pretrained models are available at https://github.com/Sirui-Xu/STARS.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Weakly Supervised Human Skin Segmentation using Guidance Attention Mechanisms
- **Authors:** Kooshan Hashemifard, Pau Climent-Perez, Francisco Florez-Revuelta
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.04625
- **Pdf link:** https://arxiv.org/pdf/2302.04625
- **Abstract**
Human skin segmentation is a crucial task in computer vision and biometric systems, yet it poses several challenges such as variability in skin color, pose, and illumination. This paper presents a robust data-driven skin segmentation method for a single image that addresses these challenges through the integration of contextual information and efficient network design. In addition to robustness and accuracy, the integration into real-time systems requires a careful balance between computational power, speed, and performance. The proposed method incorporates two attention modules, Body Attention and Skin Attention, that utilize contextual information to improve segmentation results. These modules draw attention to the desired areas, focusing on the body boundaries and skin pixels, respectively. Additionally, an efficient network architecture is employed in the encoder part to minimize computational power while retaining high performance. To handle the issue of noisy labels in skin datasets, the proposed method uses a weakly supervised training strategy, relying on the Skin Attention module. The results of this study demonstrate that the proposed method is comparable to, or outperforms, state-of-the-art methods on benchmark datasets.
## Keyword: raw image
There is no result
| 2.0 | process | 1
859 | 2,517,803,847 | IssuesEvent | 2015-01-16 17:17:06 | mozilla/webmaker-app | https://api.github.com/repos/mozilla/webmaker-app | closed | Redesign blog template | design: needs visual design | We'd like to try a first-run experience that involves pre-populating the main screen with some "ready-made" apps. The first three we were thinking of trying are:
* store (vendor)
* blog
* survey
| 2.0 | non_process | 0
346,866 | 24,887,308,436 | IssuesEvent | 2022-10-28 08:53:40 | seanmanik/ped | https://api.github.com/repos/seanmanik/ped | opened | Lack of documentation for interview | type.DocumentationBug severity.Low | Currently, the summary of CinternS talks about helping users with their internship applications, but does not mention much about how interviews play a part in this context.
Users might thus be confused by the availability of the interview command, as well as the availability of an interview tag.
The User Guide is well documented thus far, but will benefit significantly from adding an introduction to how interviews are a crucial aspect of internship applications.
| 1.0 | non_process | 0
20,980 | 27,843,541,850 | IssuesEvent | 2023-03-20 14:13:22 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | opened | `IllegalArgumentException`s in Instrumented process for `MockReturnObjectExample` | ctg-bug comp-instrumented-process | **Description**
There are error messages in the `utbot-engine-current.log`.
```java
InstrumentedProcessError: RdFault: InvocationPhase:
IllegalArgumentException: signature=calculateFromArray()I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
...
IllegalArgumentException: signature=calculate(I)I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
```
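Both messages point at instance methods being invoked without a receiver. UTBot's instrumentation raises its own `IllegalArgumentException` for this, but the underlying contract is the same one plain Java reflection enforces; a minimal standalone illustration (class and method names here are hypothetical, not UTBot code):
```java
import java.lang.reflect.Method;

public class ReceiverRequiredSketch {
    int calculate(int x) { return x * 2; }

    public static void main(String[] args) throws Exception {
        Method m = ReceiverRequiredSketch.class.getDeclaredMethod("calculate", int.class);
        try {
            // No receiver for a non-static method: plain reflection fails too
            // (with NullPointerException), mirroring "expecting this, but
            // provided argument list is empty".
            m.invoke(null, 5);
        } catch (NullPointerException e) {
            System.out.println("missing receiver: " + e);
        }
        System.out.println(m.invoke(new ReceiverRequiredSketch(), 5)); // prints 10
    }
}
```
That the errors come in pairs for `calculate(I)I` and `calculateFromArray()I` is consistent with the state-setup phase failing to supply the receiver object for these instance methods.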
**To Reproduce**
1. Run the 'utbot' project in IntelliJ IDEA 2022.2.4 Ultimate
2. [Install plugin built from unit-test-bot/rc3102023 branch](https://github.com/UnitTestBot/UTBotJava/actions/runs/4448325159)
3. Generate tests for `utbot-sample/src/test/java/org/utbot/mock/MockReturnObjectExample`
with default settings: Symbolic + Fuzzing
**Expected behavior**
Instrumented process should be executed correctly.
Test cases generated by Fuzzing are expected.
**Actual behavior**
There are Errors reports for `calculate` and `calculateFromArray`.
There are no test methods generated by Fuzzing.
**Visual proofs (screenshots, logs, images)**
~~~java
16:55:11.653 | ERROR | ConcreteExecutor | executeAsync, response(ERROR)
org.utbot.instrumentation.util.InstrumentedProcessError: Error in the instrumented process |> com.jetbrains.rd.util.reactive.RdFault: InvocationPhase, reason: org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseError: InvocationPhase
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.wrapError(InvocationPhase.kt:22)
at org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseKt.start(ExecutionPhase.kt:30)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.executePhaseInTimeout(PhasesController.kt:55)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:48)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:152)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:149)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: signature=calculateFromArray()I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke(InvokeInstrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.Instrumentation$DefaultImpls.invoke$default(Instrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.invoke-0E7RQCE(InvocationPhase.kt:31)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke-IoAF18A(UtExecutionInstrumentation.kt:109)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:61)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$2.invoke(ThreadUtil.kt:56)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:47)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:43)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
at com.jetbrains.rd.framework.RdTaskResult$Companion.read(TaskInterfaces.kt:30)
at com.jetbrains.rd.framework.impl.CallSiteWiredRdTask.onWireReceived(RdTask.kt:104)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$1.invoke(RdTask.kt:278)
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$2.invokeSuspend(RdTask.kt:287)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
at org.utbot.common.ConcurrencyKt.runBlockingWithCancellationPredicate(Concurrency.kt:38)
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:156)
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:155)
at org.utbot.common.ConcurrencyKt.runIgnoringCancellationException(Concurrency.kt:47)
at org.utbot.framework.plugin.api.TestCaseGenerator.generate(TestCaseGenerator.kt:155)
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:111)
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:97)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.utbot.instrumentation.ConcreteExecutor.withProcess(ConcreteExecutor.kt:218) ~[utbot-instrumentation-2023.3.jar:?]
at org.utbot.instrumentation.ConcreteExecutor$withProcess$1.invokeSuspend(ConcreteExecutor.kt) ~[utbot-instrumentation-2023.3.jar:?]
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) [kotlin-stdlib-1.8.0.jar:1.8.0-release-345(1.8.0)]
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source) [utbot-instrumentation-2023.3.jar:?]
at org.utbot.common.ConcurrencyKt.runBlockingWithCancellationPredicate(Concurrency.kt:38) [utbot-core-2023.3.jar:?]
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:156) [utbot-framework-2023.3.jar:?]
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:155) [utbot-framework-2023.3.jar:?]
at org.utbot.common.ConcurrencyKt.runIgnoringCancellationException(Concurrency.kt:47) [utbot-core-2023.3.jar:?]
at org.utbot.framework.plugin.api.TestCaseGenerator.generate(TestCaseGenerator.kt:155) [utbot-framework-2023.3.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:111) [utbot-framework-2023.3.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:97) [utbot-framework-2023.3.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113) [utbot-rd-2023.3.jar:?]
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86) [utbot-rd-2023.3.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112) [utbot-rd-2023.3.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41) [rd-core-2022.2.1.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.jetbrains.rd.util.reactive.RdFault: InvocationPhase, reason: org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseError: InvocationPhase
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.wrapError(InvocationPhase.kt:22)
at org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseKt.start(ExecutionPhase.kt:30)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.executePhaseInTimeout(PhasesController.kt:55)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:48)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:152)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:149)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: signature=calculateFromArray()I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke(InvokeInstrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.Instrumentation$DefaultImpls.invoke$default(Instrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.invoke-0E7RQCE(InvocationPhase.kt:31)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke-IoAF18A(UtExecutionInstrumentation.kt:109)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:61)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$2.invoke(ThreadUtil.kt:56)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:47)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:43)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
at com.jetbrains.rd.framework.RdTaskResult$Companion.read(TaskInterfaces.kt:30) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.CallSiteWiredRdTask.onWireReceived(RdTask.kt:104) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$1.invoke(RdTask.kt:278) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$2.invokeSuspend(RdTask.kt:287) ~[rd-framework-2022.2.1.jar:?]
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) ~[kotlin-stdlib-1.8.0.jar:1.8.0-release-345(1.8.0)]
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) ~[utbot-instrumentation-2023.3.jar:?]
... 28 more
~~~
**Environment**
Windows 10 Pro
IntelliJ IDEA 2022.2.4 Ultimate
**Additional context**
When running this code in a separate project, there were 1000+ exceptions of this kind.
| 1.0 | `IllegalArgumentException`s in Instrumented process for `MockReturnObjectExample` - **Description**
There are error messages in the `utbot-engine-current.log`.
```java
InstrumentedProcessError: RdFault: InvocationPhase:
IllegalArgumentException: signature=calculateFromArray()I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
...
IllegalArgumentException: signature=calculate(I)I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
```
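To make the failure mode concrete: the message comes from UtBot's `InvokeInstrumentation`, which refuses to invoke an instance method when the concrete argument list does not start with the receiver (`this`). The snippet below is only an analogy in Python, not UtBot code — it shows the same class of error, an instance method invoked with an empty argument list:

```python
class Example:
    def calculate_from_array(self) -> int:
        return 42

# Instance methods need a receiver; calling through the class with an
# empty argument list fails, mirroring UtBot's "expecting this, but
# provided argument list is empty" check.
try:
    Example.calculate_from_array()  # no instance supplied
except TypeError as err:
    print(err)  # ...missing 1 required positional argument: 'self'
```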
**To Reproduce**
1. Run the 'utbot' project in IntelliJ IDEA 2022.2.4 Ultimate
2. [Install plugin built from unit-test-bot/rc3102023 branch](https://github.com/UnitTestBot/UTBotJava/actions/runs/4448325159)
3. Generate tests for `utbot-sample/src/test/java/org/utbot/mock/MockReturnObjectExample`
with default settings: Symbolic + Fuzzing
**Expected behavior**
The instrumented process should be executed correctly.
Test cases generated by Fuzzing are expected.
**Actual behavior**
There are error reports for `calculate` and `calculateFromArray`.
There are no test methods generated by Fuzzing.
**Visual proofs (screenshots, logs, images)**
~~~java
16:55:11.653 | ERROR | ConcreteExecutor | executeAsync, response(ERROR)
org.utbot.instrumentation.util.InstrumentedProcessError: Error in the instrumented process |> com.jetbrains.rd.util.reactive.RdFault: InvocationPhase, reason: org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseError: InvocationPhase
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.wrapError(InvocationPhase.kt:22)
at org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseKt.start(ExecutionPhase.kt:30)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.executePhaseInTimeout(PhasesController.kt:55)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:48)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:152)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:149)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: signature=calculateFromArray()I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke(InvokeInstrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.Instrumentation$DefaultImpls.invoke$default(Instrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.invoke-0E7RQCE(InvocationPhase.kt:31)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke-IoAF18A(UtExecutionInstrumentation.kt:109)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:61)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$2.invoke(ThreadUtil.kt:56)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:47)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:43)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
at com.jetbrains.rd.framework.RdTaskResult$Companion.read(TaskInterfaces.kt:30)
at com.jetbrains.rd.framework.impl.CallSiteWiredRdTask.onWireReceived(RdTask.kt:104)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$1.invoke(RdTask.kt:278)
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$2.invokeSuspend(RdTask.kt:287)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
at org.utbot.common.ConcurrencyKt.runBlockingWithCancellationPredicate(Concurrency.kt:38)
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:156)
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:155)
at org.utbot.common.ConcurrencyKt.runIgnoringCancellationException(Concurrency.kt:47)
at org.utbot.framework.plugin.api.TestCaseGenerator.generate(TestCaseGenerator.kt:155)
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:111)
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:97)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.utbot.instrumentation.ConcreteExecutor.withProcess(ConcreteExecutor.kt:218) ~[utbot-instrumentation-2023.3.jar:?]
at org.utbot.instrumentation.ConcreteExecutor$withProcess$1.invokeSuspend(ConcreteExecutor.kt) ~[utbot-instrumentation-2023.3.jar:?]
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) [kotlin-stdlib-1.8.0.jar:1.8.0-release-345(1.8.0)]
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38) [utbot-instrumentation-2023.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source) [utbot-instrumentation-2023.3.jar:?]
at org.utbot.common.ConcurrencyKt.runBlockingWithCancellationPredicate(Concurrency.kt:38) [utbot-core-2023.3.jar:?]
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:156) [utbot-framework-2023.3.jar:?]
at org.utbot.framework.plugin.api.TestCaseGenerator$generate$3.invoke(TestCaseGenerator.kt:155) [utbot-framework-2023.3.jar:?]
at org.utbot.common.ConcurrencyKt.runIgnoringCancellationException(Concurrency.kt:47) [utbot-core-2023.3.jar:?]
at org.utbot.framework.plugin.api.TestCaseGenerator.generate(TestCaseGenerator.kt:155) [utbot-framework-2023.3.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:111) [utbot-framework-2023.3.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$3.invoke(EngineProcessMain.kt:97) [utbot-framework-2023.3.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113) [utbot-rd-2023.3.jar:?]
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86) [utbot-rd-2023.3.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112) [utbot-rd-2023.3.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12) [rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41) [rd-core-2022.2.1.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.jetbrains.rd.util.reactive.RdFault: InvocationPhase, reason: org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseError: InvocationPhase
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.wrapError(InvocationPhase.kt:22)
at org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseKt.start(ExecutionPhase.kt:30)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.executePhaseInTimeout(PhasesController.kt:55)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation.invoke(UtExecutionInstrumentation.kt:48)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:152)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:149)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:113)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:86)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:112)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:173)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:360)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase$queue$1.run(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: signature=calculateFromArray()I
expecting this, but provided argument list is empty
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke-BWLJW6A(InvokeInstrumentation.kt:49)
at org.utbot.instrumentation.instrumentation.InvokeInstrumentation.invoke(InvokeInstrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.Instrumentation$DefaultImpls.invoke$default(Instrumentation.kt:21)
at org.utbot.instrumentation.instrumentation.execution.phases.InvocationPhase.invoke-0E7RQCE(InvocationPhase.kt:31)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke-IoAF18A(UtExecutionInstrumentation.kt:109)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1$concreteResult$1.invoke(UtExecutionInstrumentation.kt:108)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:61)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$2.invoke(ThreadUtil.kt:56)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:47)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:43)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
at com.jetbrains.rd.framework.RdTaskResult$Companion.read(TaskInterfaces.kt:30) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.CallSiteWiredRdTask.onWireReceived(RdTask.kt:104) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:12) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:12) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$1.invoke(RdTask.kt:278) ~[rd-framework-2022.2.1.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$2.invokeSuspend(RdTask.kt:287) ~[rd-framework-2022.2.1.jar:?]
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) ~[kotlin-stdlib-1.8.0.jar:1.8.0-release-345(1.8.0)]
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) ~[utbot-instrumentation-2023.3.jar:?]
... 28 more
~~~
**Environment**
Windows 10 Pro
IntelliJ IDEA 2022.2.4 Ultimate
**Additional context**
When running this code in a separate project, there were 1000+ exceptions of this kind.
| process | illegalargumentexception s in instrumented process for mockreturnobjectexample description there are error messages in the utbot engine current log java instrumentedprocesserror rdfault invocationphase illegalargumentexception signature calculatefromarray i expecting this but provided argument list is empty at org utbot instrumentation instrumentation invokeinstrumentation invoke invokeinstrumentation kt illegalargumentexception signature calculate i i expecting this but provided argument list is empty at org utbot instrumentation instrumentation invokeinstrumentation invoke invokeinstrumentation kt to reproduce run the utbot project in intellij idea ultimate generate tests for utbot sample src test java org utbot mock mockreturnobjectexample with default settings symbolic fuzzing expected behavior instrumented process should be executed correctly test cases generated by fuzzing are expected actual behavior there are errors reports for calculate and calculatefromarray there are no test methods generated by fuzzing visual proofs screenshots logs images java error concreteexecutor executeasync response error org utbot instrumentation util instrumentedprocesserror error in the instrumented process com jetbrains rd util reactive rdfault invocationphase reason org utbot instrumentation instrumentation execution phases executionphaseerror invocationphase at org utbot instrumentation instrumentation execution phases invocationphase wraperror invocationphase kt at org utbot instrumentation instrumentation execution phases executionphasekt start executionphase kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout phasescontroller kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke utexecutioninstrumentation kt at org utbot instrumentation process instrumentedprocessmainkt setup invoke instrumentedprocessmain kt at org utbot instrumentation process instrumentedprocessmainkt setup invoke instrumentedprocessmain kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at org utbot rd idlewatchdog wrapactive clientprocessutil kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework impl rdcall onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd util threading singlethreadschedulerbase queue run singlethreadscheduler kt at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by java lang illegalargumentexception signature calculatefromarray i expecting this but provided argument list is empty at org utbot instrumentation instrumentation invokeinstrumentation invoke invokeinstrumentation kt at org utbot 
instrumentation instrumentation invokeinstrumentation invoke invokeinstrumentation kt at org utbot instrumentation instrumentation instrumentation defaultimpls invoke default instrumentation kt at org utbot instrumentation instrumentation execution phases invocationphase invoke invocationphase kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke concreteresult invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke concreteresult invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout result invoke phasescontroller kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at kotlin concurrent threadskt thread thread run thread kt at com jetbrains rd framework rdtaskresult companion read taskinterfaces kt at com jetbrains rd framework impl callsitewiredrdtask onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl rdcall createresponsescheduler queue invoke rdtask kt at com jetbrains rd framework impl rdcall createresponsescheduler queue invokesuspend rdtask kt at kotlin coroutines jvm internal basecontinuationimpl resumewith continuationimpl kt at kotlinx coroutines dispatchedtask run dispatchedtask kt at kotlinx coroutines eventloopimplbase processnextevent eventloop common kt at kotlinx coroutines blockingcoroutine joinblocking builders kt at kotlinx coroutines builderskt builderskt runblocking builders kt at kotlinx coroutines builderskt runblocking unknown source at kotlinx coroutines builderskt builderskt runblocking default builders kt at kotlinx coroutines builderskt runblocking default unknown source at org utbot common concurrencykt runblockingwithcancellationpredicate concurrency kt at org utbot framework plugin api testcasegenerator generate invoke testcasegenerator kt at org utbot framework plugin api testcasegenerator generate invoke testcasegenerator kt at org utbot common concurrencykt runignoringcancellationexception concurrency kt at org utbot framework plugin api testcasegenerator generate testcasegenerator kt at org utbot framework process engineprocessmainkt setup invoke engineprocessmain kt at org utbot framework process engineprocessmainkt setup invoke engineprocessmain kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at org utbot rd idlewatchdog wrapactive clientprocessutil kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework impl rdcall onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts 
readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd util threading singlethreadschedulerbase queue run singlethreadscheduler kt at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java at org utbot instrumentation concreteexecutor withprocess concreteexecutor kt at org utbot instrumentation concreteexecutor withprocess invokesuspend concreteexecutor kt at kotlin coroutines jvm internal basecontinuationimpl resumewith continuationimpl kt at kotlinx coroutines dispatchedtask run dispatchedtask kt at kotlinx coroutines eventloopimplbase processnextevent eventloop common kt at kotlinx coroutines blockingcoroutine joinblocking builders kt at kotlinx coroutines builderskt builderskt runblocking builders kt at kotlinx coroutines builderskt runblocking unknown source at kotlinx coroutines builderskt builderskt runblocking default builders kt at kotlinx coroutines builderskt runblocking default unknown source at org utbot common concurrencykt runblockingwithcancellationpredicate concurrency kt at org utbot framework plugin api testcasegenerator generate invoke testcasegenerator kt at org utbot framework plugin api testcasegenerator generate invoke testcasegenerator kt at org utbot common concurrencykt runignoringcancellationexception concurrency kt at org utbot framework plugin api testcasegenerator generate testcasegenerator kt at org utbot framework process engineprocessmainkt setup invoke engineprocessmain kt at org utbot framework process engineprocessmainkt setup invoke engineprocessmain kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at org utbot rd idlewatchdog wrapactive clientprocessutil kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework impl rdcall onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd util threading singlethreadschedulerbase queue run singlethreadscheduler kt at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by com jetbrains rd util reactive rdfault invocationphase reason org utbot instrumentation instrumentation execution phases executionphaseerror invocationphase at org utbot instrumentation instrumentation execution phases invocationphase wraperror invocationphase kt at org utbot instrumentation instrumentation execution phases executionphasekt start executionphase kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout phasescontroller kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke 
utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke utexecutioninstrumentation kt at org utbot instrumentation process instrumentedprocessmainkt setup invoke instrumentedprocessmain kt at org utbot instrumentation process instrumentedprocessmainkt setup invoke instrumentedprocessmain kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at org utbot rd idlewatchdog wrapactive clientprocessutil kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework impl rdcall onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd util threading singlethreadschedulerbase queue run singlethreadscheduler kt at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by java lang illegalargumentexception signature calculatefromarray i expecting this but provided argument list is empty at org utbot instrumentation instrumentation invokeinstrumentation invoke invokeinstrumentation kt at org utbot instrumentation instrumentation invokeinstrumentation invoke invokeinstrumentation kt at org utbot instrumentation instrumentation instrumentation defaultimpls invoke default instrumentation kt at org utbot instrumentation instrumentation execution phases invocationphase invoke invocationphase kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke concreteresult invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke concreteresult invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout result invoke phasescontroller kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at kotlin concurrent threadskt thread thread run thread kt at com jetbrains rd framework rdtaskresult companion read taskinterfaces kt at com jetbrains rd framework impl callsitewiredrdtask onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl rdcall createresponsescheduler queue invoke rdtask kt at com jetbrains rd framework impl rdcall createresponsescheduler queue invokesuspend rdtask kt at kotlin coroutines jvm internal basecontinuationimpl 
resumewith continuationimpl kt at kotlinx coroutines dispatchedtask run dispatchedtask kt more environment windows pro intellij idea ultimate additional context when running this code in a separate project there were exceptions of this kind | 1 |
13,005 | 8,066,791,709 | IssuesEvent | 2018-08-04 20:17:26 | rixed/ramen | https://api.github.com/repos/rixed/ramen | opened | Improve quantile performances | performance | - Reservoir sampling to avoid collecting more than 10 times the required resolution;
- Selection of the requested element rather than sorting;
- Compute several results out of a single percentile operation (see the sketch after this record). | True | Improve quantile performances - - Reservoir sampling to avoid collecting more than 10 times the required resolution;
- Selection of the requested element rather than sorting;
- Compute several results out of a single percentile operation. | non_process | improve quantile performences reservoir sampling to avoid collecting more than times the required resolution selection of the requested element rather than sorting compute several results out of a single perce tile operation | 0
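As background on the two techniques this record proposes, here is a minimal Python sketch — illustrative only, not ramen's implementation: reservoir sampling (Algorithm R) bounds how many values a quantile estimate must keep, and quickselect finds the requested element without sorting the whole sample.

```python
import random

def reservoir_sample(stream, k, rng=random):
    # Algorithm R: keep a uniform random sample of k items from a
    # stream of unknown length, so no more than k values are stored.
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = rng.randrange(i + 1)  # uniform in [0, i]
            if j < k:
                sample[j] = x
    return sample

def quickselect(values, k):
    # Return the k-th smallest value (0-based) by partitioning,
    # avoiding a full sort of the sample.
    pivot = random.choice(values)
    below = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    if k < len(below):
        return quickselect(below, k)
    if k < len(below) + len(equal):
        return pivot
    above = [v for v in values if v > pivot]
    return quickselect(above, k - len(below) - len(equal))
```

Several percentiles can then be served from one pass: fill the reservoir once and call `quickselect` on it for each requested rank.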
324,969 | 27,835,485,207 | IssuesEvent | 2023-03-20 09:10:49 | rhinstaller/kickstart-tests | https://api.github.com/repos/rhinstaller/kickstart-tests | opened | rhel9 flake: "Anaconda.Modules.Security:gi.repository.GLib.GError: g-io-error-quark: Timeout was reached (24)" | test flake | 03-19-2023
```
02:21:27,841 WARNING org.fedoraproject.Anaconda.Boss:DEBUG:anaconda.modules.boss.module_manager.start_modules:org.fedoraproject.Anaconda.Addons.Kdump is available.
02:21:27,842 WARNING org.fedoraproject.Anaconda.Addons.Kdump:DEBUG:anaconda.modules.common.base.base:Start the loop.
02:21:51,638 EMERG kernel:watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/u2:1:11]
02:21:51,640 WARNING kernel:Modules linked in: intel_rapl_msr intel_rapl_common isst_if_common nfit libnvdimm rapl i2c_i801 i2c_smbus pcspkr lpc_ich virtio_balloon joydev fuse zram ext4 mbcache jbd2 loop nls_utf8 isofs sr_mod cdrom sg virtio_gpu virtio_dma_buf drm_shmem_helper drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ahci libahci drm libata 8021q garp mrp stp llc virtio_net crct10dif_pclmul crc32_pclmul net_failover ghash_clmulni_intel virtio_console failover virtio_blk serio_raw rfkill sunrpc lrw dm_crypt dm_round_robin dm_multipath dm_snapshot dm_bufio dm_mirror dm_region_hash dm_log dm_zero dm_mod linear raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid6_pq libcrc32c crc32c_intel raid1 raid0 iscsi_ibft squashfs be2iscsi bnx2i cnic uio cxgb4i cxgb4 tls libcxgbi libcxgb qla4xxx iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi edd
02:21:51,645 WARNING kernel:CPU: 0 PID: 11 Comm: kworker/u2:1 Not tainted 5.14.0-284.2.1.el9_2.x86_64 #1
02:21:51,645 WARNING kernel:Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.1-2.fc36 04/01/2014
02:21:51,645 WARNING kernel:Workqueue: loop1 loop_rootcg_workfn [loop]
02:21:51,645 WARNING kernel:RIP: 0010:_copy_to_iter+0x48b/0x6d0
02:21:51,645 WARNING kernel:Code: e0 06 49 03 01 48 2b 05 73 c0 f6 00 48 c1 f8 06 48 c1 e0 0c 48 03 05 74 c0 f6 00 48 01 c8 83 fa 08 0f 82 6f ff ff ff 48 8b 0e <48> 89 08 89 d1 48 8b 7c 0e f8 48 89 7c 08 f8 48 8d 78 08 48 83 e7
02:21:51,645 WARNING kernel:RSP: 0000:ffffa6ab40063b40 EFLAGS: 00010212
02:21:51,645 WARNING kernel:RAX: ffff9682c8806000 RBX: 000000002048b000 RCX: fffb40bd8b48ffff
02:21:51,645 WARNING kernel:RDX: 0000000000001000 RSI: ffff9682a158e000 RDI: ffff9682a158e000
02:21:51,645 WARNING kernel:RBP: 0000000000000000 R08: 0000000000000000 R09: ffffa6ab40063e20
02:21:51,645 WARNING kernel:R10: 0000000000001000 R11: ffff9682a158e000 R12: 0000000000001000
02:21:51,645 WARNING kernel:R13: 0000000000001000 R14: 0000000000000000 R15: ffffa6ab40063e30
02:21:51,646 WARNING kernel:FS: 0000000000000000(0000) GS:ffff9682f7e00000(0000) knlGS:0000000000000000
02:21:51,646 WARNING kernel:CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
02:21:51,646 WARNING kernel:CR2: 00007f8b3d2b1650 CR3: 000000007b636002 CR4: 0000000000770ef0
02:21:51,646 WARNING kernel:DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
02:21:51,646 WARNING kernel:DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
02:21:51,646 WARNING kernel:PKRU: 55555554
02:21:51,646 WARNING kernel:Call Trace:
02:21:51,646 WARNING kernel: <TASK>
02:21:51,646 WARNING kernel: ? filemap_get_pages+0x94/0x340
02:21:51,646 WARNING kernel: copy_page_to_iter+0x26f/0x490
02:21:51,647 WARNING kernel: filemap_read+0x18a/0x320
02:21:51,647 WARNING kernel: ? avc_has_perm+0x8f/0x1b0
02:21:51,647 WARNING kernel: do_iter_readv_writev+0x16c/0x190
02:21:51,647 WARNING kernel: do_iter_read+0xe9/0x170
02:21:51,647 WARNING kernel: loop_process_work+0x487/0x5f0 [loop]
02:21:51,647 WARNING kernel: process_one_work+0x1e5/0x3c0
02:21:51,647 WARNING kernel: worker_thread+0x50/0x3b0
02:21:51,647 WARNING kernel: ? rescuer_thread+0x3a0/0x3a0
02:21:51,647 WARNING kernel: kthread+0xd6/0x100
02:21:51,647 WARNING kernel: ? kthread_complete_and_exit+0x20/0x20
02:21:51,647 WARNING kernel: ret_from_fork+0x1f/0x30
02:21:51,647 WARNING kernel: </TASK>
02:21:53,173 INFO systemd:systemd-hostnamed.service: Deactivated successfully.
02:21:53,595 WARNING org.fedoraproject.Anaconda.Modules.Security:Traceback (most recent call last):
02:21:53,597 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
02:21:53,597 WARNING org.fedoraproject.Anaconda.Modules.Security: return _run_code(code, main_globals, None,
02:21:53,597 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: exec(code, run_globals)
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/security/__main__.py", line 25, in <module>
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: service.run()
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/common/base/base.py", line 90, in run
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: self.publish()
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/security/security.py", line 66, in publish
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: DBus.publish_object(SECURITY.object_path, SecurityInterface(self))
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/connection.py", line 287, in publish_object
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: object_handler.connect_object()
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/server/handler.py", line 324, in connect_object
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: self._register_object()
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/server/handler.py", line 339, in _register_object
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: self._message_bus.connection,
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/connection.py", line 169, in connection
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: self._connection = self._get_connection()
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/core/dbus.py", line 46, in _get_connection
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security: return self._provider.get_addressed_bus_connection(bus_address)
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/connection.py", line 76, in get_addressed_bus_connection
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security: return Gio.DBusConnection.new_for_address_sync(
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security:gi.repository.GLib.GError: g-io-error-quark: Timeout was reached (24)
```
[virt-install.log](https://github.com/rhinstaller/kickstart-tests/files/11015986/virt-install.log)
| 1.0 | rhel9 flake: "Anaconda.Modules.Security:gi.repository.GLib.GError: g-io-error-quark: Timeout was reached (24)" - 03-19-2023
```
02:21:27,841 WARNING org.fedoraproject.Anaconda.Boss:DEBUG:anaconda.modules.boss.module_manager.start_modules:org.fedoraproject.Anaconda.Addons.Kdump is available.
02:21:27,842 WARNING org.fedoraproject.Anaconda.Addons.Kdump:DEBUG:anaconda.modules.common.base.base:Start the loop.
02:21:51,638 EMERG kernel:watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/u2:1:11]
02:21:51,640 WARNING kernel:Modules linked in: intel_rapl_msr intel_rapl_common isst_if_common nfit libnvdimm rapl i2c_i801 i2c_smbus pcspkr lpc_ich virtio_balloon joydev fuse zram ext4 mbcache jbd2 loop nls_utf8 isofs sr_mod cdrom sg virtio_gpu virtio_dma_buf drm_shmem_helper drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ahci libahci drm libata 8021q garp mrp stp llc virtio_net crct10dif_pclmul crc32_pclmul net_failover ghash_clmulni_intel virtio_console failover virtio_blk serio_raw rfkill sunrpc lrw dm_crypt dm_round_robin dm_multipath dm_snapshot dm_bufio dm_mirror dm_region_hash dm_log dm_zero dm_mod linear raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid6_pq libcrc32c crc32c_intel raid1 raid0 iscsi_ibft squashfs be2iscsi bnx2i cnic uio cxgb4i cxgb4 tls libcxgbi libcxgb qla4xxx iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi edd
02:21:51,645 WARNING kernel:CPU: 0 PID: 11 Comm: kworker/u2:1 Not tainted 5.14.0-284.2.1.el9_2.x86_64 #1
02:21:51,645 WARNING kernel:Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.1-2.fc36 04/01/2014
02:21:51,645 WARNING kernel:Workqueue: loop1 loop_rootcg_workfn [loop]
02:21:51,645 WARNING kernel:RIP: 0010:_copy_to_iter+0x48b/0x6d0
02:21:51,645 WARNING kernel:Code: e0 06 49 03 01 48 2b 05 73 c0 f6 00 48 c1 f8 06 48 c1 e0 0c 48 03 05 74 c0 f6 00 48 01 c8 83 fa 08 0f 82 6f ff ff ff 48 8b 0e <48> 89 08 89 d1 48 8b 7c 0e f8 48 89 7c 08 f8 48 8d 78 08 48 83 e7
02:21:51,645 WARNING kernel:RSP: 0000:ffffa6ab40063b40 EFLAGS: 00010212
02:21:51,645 WARNING kernel:RAX: ffff9682c8806000 RBX: 000000002048b000 RCX: fffb40bd8b48ffff
02:21:51,645 WARNING kernel:RDX: 0000000000001000 RSI: ffff9682a158e000 RDI: ffff9682a158e000
02:21:51,645 WARNING kernel:RBP: 0000000000000000 R08: 0000000000000000 R09: ffffa6ab40063e20
02:21:51,645 WARNING kernel:R10: 0000000000001000 R11: ffff9682a158e000 R12: 0000000000001000
02:21:51,645 WARNING kernel:R13: 0000000000001000 R14: 0000000000000000 R15: ffffa6ab40063e30
02:21:51,646 WARNING kernel:FS: 0000000000000000(0000) GS:ffff9682f7e00000(0000) knlGS:0000000000000000
02:21:51,646 WARNING kernel:CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
02:21:51,646 WARNING kernel:CR2: 00007f8b3d2b1650 CR3: 000000007b636002 CR4: 0000000000770ef0
02:21:51,646 WARNING kernel:DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
02:21:51,646 WARNING kernel:DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
02:21:51,646 WARNING kernel:PKRU: 55555554
02:21:51,646 WARNING kernel:Call Trace:
02:21:51,646 WARNING kernel: <TASK>
02:21:51,646 WARNING kernel: ? filemap_get_pages+0x94/0x340
02:21:51,646 WARNING kernel: copy_page_to_iter+0x26f/0x490
02:21:51,647 WARNING kernel: filemap_read+0x18a/0x320
02:21:51,647 WARNING kernel: ? avc_has_perm+0x8f/0x1b0
02:21:51,647 WARNING kernel: do_iter_readv_writev+0x16c/0x190
02:21:51,647 WARNING kernel: do_iter_read+0xe9/0x170
02:21:51,647 WARNING kernel: loop_process_work+0x487/0x5f0 [loop]
02:21:51,647 WARNING kernel: process_one_work+0x1e5/0x3c0
02:21:51,647 WARNING kernel: worker_thread+0x50/0x3b0
02:21:51,647 WARNING kernel: ? rescuer_thread+0x3a0/0x3a0
02:21:51,647 WARNING kernel: kthread+0xd6/0x100
02:21:51,647 WARNING kernel: ? kthread_complete_and_exit+0x20/0x20
02:21:51,647 WARNING kernel: ret_from_fork+0x1f/0x30
02:21:51,647 WARNING kernel: </TASK>
02:21:53,173 INFO systemd:systemd-hostnamed.service: Deactivated successfully.
02:21:53,595 WARNING org.fedoraproject.Anaconda.Modules.Security:Traceback (most recent call last):
02:21:53,597 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
02:21:53,597 WARNING org.fedoraproject.Anaconda.Modules.Security: return _run_code(code, main_globals, None,
02:21:53,597 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: exec(code, run_globals)
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/security/__main__.py", line 25, in <module>
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: service.run()
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/common/base/base.py", line 90, in run
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: self.publish()
02:21:53,598 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/security/security.py", line 66, in publish
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: DBus.publish_object(SECURITY.object_path, SecurityInterface(self))
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/connection.py", line 287, in publish_object
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: object_handler.connect_object()
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/server/handler.py", line 324, in connect_object
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: self._register_object()
02:21:53,599 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/server/handler.py", line 339, in _register_object
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: self._message_bus.connection,
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/connection.py", line 169, in connection
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: self._connection = self._get_connection()
02:21:53,600 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib64/python3.9/site-packages/pyanaconda/core/dbus.py", line 46, in _get_connection
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security: return self._provider.get_addressed_bus_connection(bus_address)
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security: File "/usr/lib/python3.9/site-packages/dasbus/connection.py", line 76, in get_addressed_bus_connection
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security: return Gio.DBusConnection.new_for_address_sync(
02:21:53,601 WARNING org.fedoraproject.Anaconda.Modules.Security:gi.repository.GLib.GError: g-io-error-quark: Timeout was reached (24)
```
[virt-install.log](https://github.com/rhinstaller/kickstart-tests/files/11015986/virt-install.log)
| non_process | flake anaconda modules security gi repository glib gerror g io error quark timeout was reached warning org fedoraproject anaconda boss debug anaconda modules boss module manager start modules org fedoraproject anaconda addons kdump is available warning org fedoraproject anaconda addons kdump debug anaconda modules common base base start the loop emerg kernel watchdog bug soft lockup cpu stuck for warning kernel modules linked in intel rapl msr intel rapl common isst if common nfit libnvdimm rapl smbus pcspkr lpc ich virtio balloon joydev fuse zram mbcache loop nls isofs sr mod cdrom sg virtio gpu virtio dma buf drm shmem helper drm kms helper syscopyarea sysfillrect sysimgblt fb sys fops ahci libahci drm libata garp mrp stp llc virtio net pclmul pclmul net failover ghash clmulni intel virtio console failover virtio blk serio raw rfkill sunrpc lrw dm crypt dm round robin dm multipath dm snapshot dm bufio dm mirror dm region hash dm log dm zero dm mod linear async recov async memcpy async pq async xor async tx pq intel iscsi ibft squashfs cnic uio tls libcxgbi libcxgb iscsi boot sysfs iscsi tcp libiscsi tcp libiscsi scsi transport iscsi edd warning kernel cpu pid comm kworker not tainted warning kernel hardware name qemu standard pc bios warning kernel workqueue loop rootcg workfn warning kernel rip copy to iter warning kernel code fa ff ff ff warning kernel rsp eflags warning kernel rax rbx rcx warning kernel rdx rsi rdi warning kernel rbp warning kernel warning kernel warning kernel fs gs knlgs warning kernel cs ds es warning kernel warning kernel warning kernel warning kernel pkru warning kernel call trace warning kernel warning kernel filemap get pages warning kernel copy page to iter warning kernel filemap read warning kernel avc has perm warning kernel do iter readv writev warning kernel do iter read warning kernel loop process work warning kernel process one work warning kernel worker thread warning kernel rescuer thread warning kernel kthread warning kernel kthread complete and exit warning kernel ret from fork warning kernel info systemd systemd hostnamed service deactivated successfully warning org fedoraproject anaconda modules security traceback most recent call last warning org fedoraproject anaconda modules security file usr runpy py line in run module as main warning org fedoraproject anaconda modules security return run code code main globals none warning org fedoraproject anaconda modules security file usr runpy py line in run code warning org fedoraproject anaconda modules security exec code run globals warning org fedoraproject anaconda modules security file usr site packages pyanaconda modules security main py line in warning org fedoraproject anaconda modules security service run warning org fedoraproject anaconda modules security file usr site packages pyanaconda modules common base base py line in run warning org fedoraproject anaconda modules security self publish warning org fedoraproject anaconda modules security file usr site packages pyanaconda modules security security py line in publish warning org fedoraproject anaconda modules security dbus publish object security object path securityinterface self warning org fedoraproject anaconda modules security file usr lib site packages dasbus connection py line in publish object warning org fedoraproject anaconda modules security object handler connect object warning org fedoraproject anaconda modules security file usr lib site packages dasbus server handler py line in connect object warning org 
fedoraproject anaconda modules security self register object warning org fedoraproject anaconda modules security file usr lib site packages dasbus server handler py line in register object warning org fedoraproject anaconda modules security self message bus connection warning org fedoraproject anaconda modules security file usr lib site packages dasbus connection py line in connection warning org fedoraproject anaconda modules security self connection self get connection warning org fedoraproject anaconda modules security file usr site packages pyanaconda core dbus py line in get connection warning org fedoraproject anaconda modules security return self provider get addressed bus connection bus address warning org fedoraproject anaconda modules security file usr lib site packages dasbus connection py line in get addressed bus connection warning org fedoraproject anaconda modules security return gio dbusconnection new for address sync warning org fedoraproject anaconda modules security gi repository glib gerror g io error quark timeout was reached | 0 |
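For orientation, the traceback in this record ends inside `Gio.DBusConnection.new_for_address_sync`, which raises a `GLib.GError` with the `g-io-error-quark` timeout when the bus does not answer in time (here, plausibly because the kernel soft lockup starved it). The Python sketch below only shows where that error surfaces and how it can be caught; the retry policy is invented for illustration and is not a proposed Anaconda fix:

```python
import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

def connect_with_retry(bus_address: str, attempts: int = 3):
    # new_for_address_sync is the call that times out in the traceback;
    # a transient g-io-error-quark timeout can be caught and retried.
    for attempt in range(attempts):
        try:
            return Gio.DBusConnection.new_for_address_sync(
                bus_address,
                Gio.DBusConnectionFlags.AUTHENTICATION_CLIENT
                | Gio.DBusConnectionFlags.MESSAGE_BUS_CONNECTION,
                None,  # no GDBusAuthObserver
                None,  # no GCancellable
            )
        except GLib.GError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
```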
20,423 | 27,086,366,830 | IssuesEvent | 2023-02-14 17:16:35 | esmero/strawberry_runners | https://api.github.com/repos/esmero/strawberry_runners | closed | Fix Binary detection | bug enhancement Post processor Plugins | # What?
For the new TEXT processor I used a very naive binary-detection approach (`mb_detect`) which, funnily enough, does not work the same in PHP 7+, 8.0, and 8.1.
I'm changing this to a `preg_match` using `//u` as detection. This will work! | 1.0 | Fix Binary detection - # What?
For the new TEXT processor I used a very naive binary-detection approach (`mb_detect`) which, funnily enough, does not work the same in PHP 7+, 8.0, and 8.1.
I'm changing this to a `preg_match` using `//u` as detection. This will work! | process | fix binary detection what for the new text processor i used a very naive binary detection way mb detect which funny enough does not work the same in php and i m changing this to a pregmatch using u as detection this will work | 1
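A note on the `//u` trick mentioned above: with the `u` modifier, PHP's `preg_match` fails on subjects that are not valid UTF-8, which makes an empty pattern plus `u` a cheap binary-versus-text heuristic. The fix itself is PHP; the sketch below is a Python analog of the same idea, for illustration only:

```python
def looks_binary(data: bytes) -> bool:
    # Analog of PHP's preg_match('//u', $s): strict UTF-8 decoding
    # rejects byte sequences that are not valid UTF-8, a common
    # heuristic for distinguishing binary blobs from text.
    try:
        data.decode("utf-8")
        return False
    except UnicodeDecodeError:
        return True

assert looks_binary(b"\xff\xfe\x00\x01")          # invalid UTF-8 bytes
assert not looks_binary("café".encode("utf-8"))   # valid UTF-8 text
```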
21,349 | 29,174,122,480 | IssuesEvent | 2023-05-19 06:16:18 | pycaret/pycaret | https://api.github.com/repos/pycaret/pycaret | closed | [BUG]: "feature_selection_estimator" is not working when using estimator (not string) | bug preprocessing | ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [X] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
Hello!
I set feature_selection_estimator to use LGBMClassifier(importance_type='gain') for feature selection, but an error occurred.
When I checked the code, it seems that the value of feature_selection_estimator that I set was only used for error checking in _feature_selection in "preprocessor.py" and was not used anywhere else.
### Reproducible Example
```python
exp.setup(data=train, target='tg', feature_selection=True, feature_selection_estimator=LGBMClassifier(importance_type='gain'))
```
### Expected Behavior
Any estimator that exposes feature importances can be used to select features.
### Actual Results
```python-traceback
Traceback (most recent call last):
File "C:\xxx\plugins\python-ce\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\xxx\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/xxx/ensemble_learning_2.py", line 158, in <module>
exp.setup(
File "C:\Users\xxx\venv310\lib\site-packages\pycaret\classification\oop.py", line 880, in setup
self._feature_selection(
File "C:\Users\xxx\venv310\lib\site-packages\pycaret\internal\preprocess\preprocessor.py", line 993, in _feature_selection
estimator=fs_estimator,
UnboundLocalError: local variable 'fs_estimator' referenced before assignment
```
### Installed Versions
PyCaret 3.0.1 | 1.0 | [BUG]: "feature_selection_estimator" is not working when using estimator (not string) - ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [X] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
Hello!
I set feature_selection_estimator to use LGBMClassifier(importance_type='gain') for feature selection, but an error occurred.
When I checked the code, it seems that the value of feature_selection_estimator that I set was only used for error checking in _feature_selection in "preprocessor.py" and was not used anywhere else.
### Reproducible Example
```python
exp.setup(data=train, target='tg', feature_selection=True, feature_selection_estimator=LGBMClassifier(importance_type='gain'))
```
### Expected Behavior
Any estimator that can obtain feature_importance can be used to select features
### Actual Results
```python-traceback
Traceback (most recent call last):
File "C:\xxx\plugins\python-ce\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\xxx\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/xxx/ensemble_learning_2.py", line 158, in <module>
exp.setup(
File "C:\Users\xxx\venv310\lib\site-packages\pycaret\classification\oop.py", line 880, in setup
self._feature_selection(
File "C:\Users\xxx\venv310\lib\site-packages\pycaret\internal\preprocess\preprocessor.py", line 993, in _feature_selection
estimator=fs_estimator,
UnboundLocalError: local variable 'fs_estimator' referenced before assignment
```
### Installed Versions
PyCaret 3.0.1 | process | feature selection estimator is not working when using estimater not string pycaret version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pycaret i have confirmed this bug exists on the master branch of pycaret pip install u git issue description hello i set feature selection estimator to use lgbclassifier importance type gain for feature selection but an error occurred when i checked the code it seems that the value of feature selection estimator that i set was only used for error checking in feature selection in preprocessor py and was not used anywhere else reproducible example python exp setup data train target tg feature selection true feature selection estimator lgbmclassifier importance type gain expected behavior any estimator that can obtain feature importance can be used to select features actual results python traceback traceback most recent call last file c xxx plugins python ce helpers pydev pydevd py line in exec pydev imports execfile file globals locals execute the script file c xxx plugins python ce helpers pydev pydev imps pydev execfile py line in execfile exec compile contents n file exec glob loc file c users xxx ensemble learning py line in exp setup file c users xxx lib site packages pycaret classification oop py line in setup self feature selection file c users xxx lib site packages pycaret internal preprocess preprocessor py line in feature selection estimator fs estimator unboundlocalerror local variable fs estimator referenced before assignment installed versions pycaret | 1 |
9,848 | 4,651,629,478 | IssuesEvent | 2016-10-03 10:55:09 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Cannot Build Node.js 6.7.0 on CorePure64 Tinycore Linux | build intl | * **Version**: 6.7.0
* **Platform**: CorePure64 Tinycore Linux
* **Subsystem**:
I'm trying to build node version 6.7.0 on 64 bit Tinycore Linux with g++ version 5.20.
I'm getting an error that the ```-fno-rtti``` flag is being set when the code is using ```typeid``` and ```dynamic_cast```. (In the compile commands below, the ICU build adds `-frtti`, but a trailing `-fno-rtti`, apparently coming from the distro's default compiler flags, appears later on the command line and wins.)
### Steps to reproduce
CorePure64 TinyCore Linux ISO Download http://tinycorelinux.net/7.x/x86_64/release/CorePure64-7.2.iso
```shell
tce-load -wi python-dev compiletc
wget https://nodejs.org/dist/v6.7.0/node-v6.7.0.tar.gz
tar -zvxf node-v6.7.0.tar.gz
cd node-v6.7.0
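# note: in the failing g++ lines below, a trailing "-Os -pipe -fno-exceptions -fno-rtti"
# (which looks like the distro's default CXXFLAGS) comes after ICU's -frtti and wins;
# clearing the environment flags first may avoid the conflict (untested):
# CFLAGS= CXXFLAGS= ./configure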
./configure
make
```
### Make output
```
make[1]: Entering directory '/mnt/vda1/forge/tp_node/node-v6.7.0/out'
g++ '-DU_I18N_IMPLEMENTATION=1' '-DU_ATTRIBUTE_DEPRECATED=' '-D_CRT_SECURE_NO_DEPRECATE=' '-DU_STATIC_IMPLEMENTATION=1' '-DUCONFIG_NO_TRANSLITERATION=1' '-DUCONFIG_NO_SERVICE=1' '-DUCONFIG_NO_REGULAR_EXPRESSIONS=1' '-DU_ENABLE_DYLOAD=0' '-DU_HAVE_STD_STRING=0' '-DUCONFIG_NO_BREAK_ITERATION=0' '-DUCONFIG_NO_LEGACY_CONVERSION=1' '-DUCONFIG_NO_CONVERSION=1' -I../deps/icu-small/source/i18n -I../deps/icu-small/source/common -pthread -Wall -Wextra -Wno-unused-parameter -m64 -Wno-deprecated-declarations -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++0x -frtti -MMD -MF /mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/.deps//mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/rbt_pars.o.d.raw -mtune=generic -Os -pipe -fno-exceptions -fno-rtti -c -o /mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/rbt_pars.o ../deps/icu-small/source/i18n/rbt_pars.cpp
make[1]: Leaving directory '/mnt/vda1/forge/tp_node/node-v6.7.0/out'
make[1]: Entering directory '/mnt/vda1/forge/tp_node/node-v6.7.0/out'
g++ '-DU_I18N_IMPLEMENTATION=1' '-DU_ATTRIBUTE_DEPRECATED=' '-D_CRT_SECURE_NO_DEPRECATE=' '-DU_STATIC_IMPLEMENTATION=1' '-DUCONFIG_NO_TRANSLITERATION=1' '-DUCONFIG_NO_SERVICE=1' '-DUCONFIG_NO_REGULAR_EXPRESSIONS=1' '-DU_ENABLE_DYLOAD=0' '-DU_HAVE_STD_STRING=0' '-DUCONFIG_NO_BREAK_ITERATION=0' '-DUCONFIG_NO_LEGACY_CONVERSION=1' '-DUCONFIG_NO_CONVERSION=1' -I../deps/icu-small/source/i18n -I../deps/icu-small/source/common -pthread -Wall -Wextra -Wno-unused-parameter -m64 -Wno-deprecated-declarations -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++0x -frtti -MMD -MF /mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/.deps//mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/plurfmt.o.d.raw -mtune=generic -Os -pipe -fno-exceptions -fno-rtti -c -o /mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/plurfmt.o ../deps/icu-small/source/i18n/plurfmt.cpp
../deps/icu-small/source/i18n/plurfmt.cpp: In member function 'icu_57::UnicodeString& icu_57::PluralFormat::format(const icu_57::Formattable&, double, icu_57::UnicodeString&, icu_57::FieldPosition&, UErrorCode&) const':
../deps/icu-small/source/i18n/plurfmt.cpp:271:75: error: 'dynamic_cast' not permitted with -fno-rtti
DecimalFormat *decFmt = dynamic_cast<DecimalFormat *>(numberFormat);
^
../deps/icu-small/source/i18n/plurfmt.cpp:284:75: error: 'dynamic_cast' not permitted with -fno-rtti
DecimalFormat *decFmt = dynamic_cast<DecimalFormat *>(numberFormat);
^
tools/icu/icui18n.target.mk:285: recipe for target '/mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/plurfmt.o' failed
make[1]: *** [/mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/plurfmt.o] Error 1
make[1]: Leaving directory '/mnt/vda1/forge/tp_node/node-v6.7.0/out'
make[1]: Entering directory '/mnt/vda1/forge/tp_node/node-v6.7.0/out'
g++ '-DU_I18N_IMPLEMENTATION=1' '-DU_ATTRIBUTE_DEPRECATED=' '-D_CRT_SECURE_NO_DEPRECATE=' '-DU_STATIC_IMPLEMENTATION=1' '-DUCONFIG_NO_TRANSLITERATION=1' '-DUCONFIG_NO_SERVICE=1' '-DUCONFIG_NO_REGULAR_EXPRESSIONS=1' '-DU_ENABLE_DYLOAD=0' '-DU_HAVE_STD_STRING=0' '-DUCONFIG_NO_BREAK_ITERATION=0' '-DUCONFIG_NO_LEGACY_CONVERSION=1' '-DUCONFIG_NO_CONVERSION=1' -I../deps/icu-small/source/i18n -I../deps/icu-small/source/common -pthread -Wall -Wextra -Wno-unused-parameter -m64 -Wno-deprecated-declarations -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++0x -frtti -MMD -MF /mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/.deps//mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/dtfmtsym.o.d.raw -mtune=generic -Os -pipe -fno-exceptions -fno-rtti -c -o /mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/dtfmtsym.o ../deps/icu-small/source/i18n/dtfmtsym.cpp
In file included from ../deps/icu-small/source/i18n/dtfmtsym.cpp:43:0:
../deps/icu-small/source/common/unifiedcache.h: In member function 'virtual int32_t icu_57::CacheKey<T>::hashCode() const':
../deps/icu-small/source/common/unifiedcache.h:107:32: error: cannot use typeid with -fno-rtti
const char *s = typeid(T).name();
^
../deps/icu-small/source/common/unifiedcache.h: In member function 'virtual char* icu_57::CacheKey<T>::writeDescription(char*, int32_t) const':
../deps/icu-small/source/common/unifiedcache.h:115:32: error: cannot use typeid with -fno-rtti
const char *s = typeid(T).name();
^
../deps/icu-small/source/common/unifiedcache.h: In member function 'virtual UBool icu_57::CacheKey<T>::operator==(const icu_57::CacheKeyBase&) const':
../deps/icu-small/source/common/unifiedcache.h:125:23: error: cannot use typeid with -fno-rtti
return typeid(*this) == typeid(other);
^
../deps/icu-small/source/common/unifiedcache.h:125:39: error: cannot use typeid with -fno-rtti
return typeid(*this) == typeid(other);
^
tools/icu/icui18n.target.mk:285: recipe for target '/mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/dtfmtsym.o' failed
make[1]: *** [/mnt/vda1/forge/tp_node/node-v6.7.0/out/Release/obj.target/icui18n/deps/icu-small/source/i18n/dtfmtsym.o] Error 1
make[1]: Leaving directory '/mnt/vda1/forge/tp_node/node-v6.7.0/out'
make[1]: Entering directory '/mnt/vda1/forge/tp_node/node-v6.7.0/out'
``` | 1.0 | non_process | 0
8,226 | 11,413,047,489 | IssuesEvent | 2020-02-01 17:07:18 | koalaverse/homlr | https://api.github.com/repos/koalaverse/homlr | closed | Minor typos in several chapters | 01 Introduction to ML 02 Modeling Process 03 Feature & Target Eng 04 Linear Regression 06 Regularized Regression 07 MARS 09 Decision Trees 10 Bagging 11 Random Forests typo | Hi, thanks for this book. While reading I came across many minor typos in the text. I'll try to reference them here.
The typos and my proposed fixes will be in **bold**; redundant words are <s>struck through</s>
In Chapter 4.8 : The last paragraph of the section :
> gradually decreases for **lessor** important variables.
Correct would be :
> gradually decreases for **lesser** important variables.
In Chapter 6.2 : the paragraph right after the note to the reader.
> Many real-life data sets, like those **cmmon** to text mining
Correct would be :
> Many real-life data sets, like those **common** to text mining
In Chapter 9.5 . The last sentence of the second paragraph
> "Basically, this is telling us that Overall_Qual is an important predictor **os** sales price"
Correct would be :
>Basically, this is telling us that Overall_Qual is an important predictor **of** sales price.
In Chapter 10.2. The last sentence of the "note to the reader"
> you’ll often find that the averaged guesses tends to be a lot closer to the true **numnber**.
Correct would be
> you’ll often find that the averaged guesses tends to be a lot closer to the true **number**
In 10.3. The fourth (4th) line of the first paragraph
> we’re keeping bias low and **avriance ** high
Correct would be:
> we’re keeping bias low and **variance** high
In 10.4 last sentence of the third paragraph
> how the OOB error closely <s>closely</s> approximates the test error.
In Chapter 11, second line
> collection of de-correlated trees to <s>to</s> further improve
In 11.4.3, the fifth line
> if computation time is a concern **than** you can
Correct would be:
> if computation time is a concern **then** you can
In 11.7, second line
> (**witht he** exception of surrogate splits)
Correct would be :
> (**with the** exception of surrogate splits)
I'm still reading the book, so I will keep posting the issues. | 1.0 | process | 1
19,004 | 25,005,955,080 | IssuesEvent | 2022-11-03 11:52:41 | unisonweb/unison | https://api.github.com/repos/unisonweb/unison | opened | `todo` order is wrong | update-process todo-command | Neither working through the `todo` list from the top down nor from the bottom up after a type refactor necessarily gives work in dependency order.
I'm omitting the specific example because it isn't public code, and we imagine it could be minimized further anyway.
The symptom was that working todo 1/112 required us to first complete todo 40/112. Working from the bottom up also didn't work, thus we think the order heuristic must be wrong or broken. | 1.0 | process | 1
294,754 | 9,040,250,745 | IssuesEvent | 2019-02-10 14:50:34 | cs2103-ay1819s2-t12-3/main | https://api.github.com/repos/cs2103-ay1819s2-t12-3/main | opened | [Week 5] Project Milestone: v1.0 | priority.High | # Deliverables:
### User Guide:
- [ ] Draft a user guide in a convenient medium (e.g., a GoogleDoc) to describe what the product would be like when it is at v2.0.
- We recommend that you follow the existing AB4 User Guide in terms of structure and format.
- As this is a very rough draft and the final version will be in a different format altogether (i.e., in asciidoc format), don't waste time in formatting, copy editing etc. It is fine as long as the tutor can get a rough idea of the features from this draft. You can also do just the 'Features' section and omit the other parts.
- Do try to come up with concrete command syntax for feature that you would implement (at least for those that you will implement by v1.4).
- Consider including some UI mock-ups too (they can be hand-drawn or created using a tool such as PowerPoint or Balsamiq).
 - 💡 It is highly recommended that you divide documentation work (in the User Guide and the Developer Guide) among team members based on enhancements/features each person would be adding, e.g., if you are the person planning to add a feature X, you should be the person to describe the feature X in the User Guide and in the Developer Guide. For features that are not planned to be implemented by v1.4, you can divide them based on who will be implementing them if the project were to continue until v2.0 (hypothetically).
- Reason: In the final project evaluation your documentation skills will be graded based on sections of the User/Developer Guide you have written.
- Suggested length: Follow the existing user guide and developer guides in terms of the level of details.
**Submission:** Save your draft as a single pdf file, name it {Your Team ID}.pdf e.g., W09-3.pdf and upload to LumiNUS.
### Project Management:
- [ ] After the v2.0 is conceptualized, decide which features each member will do by v1.4.
- We realize that it will be hard for you to estimate the effort required for each feature as you are not familiar with the code base. Nevertheless, come up with a project plan as per your best estimate; this plan can be revised at later stages. It is better to start with some plan rather than no plan at all. If in doubt, choose to do less than more; we don't expect you to deliver a lot of big features.
- [ ] Divide each of those features into three increments, to be released at v1.1, v1.2, v1.3 (v1.4 omitted deliberately as a buffer). Each increment should deliver an end-user visible enhancement.
- Document the above two items somewhere e.g., in a Google doc/sheet. An example is given below:
* Jake Woo: Profile photo feature
* v1.1: show a place holder for photo, showing a generic default image
* v1.2: can specify photo location if it is in local hard disk,
show photo from local hard disk
* v1.3: auto-copy the photo to app folder, support using online photo
as profile pic, stylize photo e.g., round frame
**Submission:** Include in the pdf file you upload to LumiNUS. | 1.0 | non_process | 0
9,529 | 12,500,621,333 | IssuesEvent | 2020-06-01 22:46:23 | googleapis/gapic-showcase | https://api.github.com/repos/googleapis/gapic-showcase | closed | chore: refactor use of google.com domain in test data | good first issue process | Per this [PR comment](https://github.com/googleapis/gapic-showcase/pull/380#discussion_r425517801) we should replace instances of `google.com` with `example.com` in the test data. | 1.0 | chore: refactor use of google.com domain in test data - Per this [PR comment](https://github.com/googleapis/gapic-showcase/pull/380#discussion_r425517801) we should replace instances of `google.com` with `example.com` in the test data. | process | chore refactor use of google com domain in test data per this we should replace instances of google com with example com in the test data | 1 |
4,881 | 7,758,698,307 | IssuesEvent | 2018-05-31 20:29:04 | hashicorp/packer | https://api.github.com/repos/hashicorp/packer | closed | Add sriov/ena parameters for the Amazon Import Post-Processor | enhancement invalid post-processor/amazon-import wontfix | This is a feature request to add functionality that allows the Amazon Import Post-Processor to enable <tt>ena_support/sriov_support</tt> to imported images. This allows instances launched from the imported images to leverage enhanced networking (with the assumption they have the required packages/drivers installed).
Here are some references:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-image-attribute.html
In short, it's the same functionality that is offered by same respective Amazon EBS Builder's parameters.
 | 1.0 | process | 1
4,547 | 7,375,328,043 | IssuesEvent | 2018-03-13 23:50:59 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Clean up resources: mention stopping the docker container for edge? | assigned-to-author doc-enhancement in-process iot-edge triaged | Might you not also want to mention that readers will still have the IoT Edge container running in Docker on their local machine, and that they should stop it?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 64575e7c-664c-4d6b-a440-24c91e93def5
* Version Independent ID: 17d9481c-a75b-bbfd-1f9a-76fc3d6b2930
* [Content](https://docs.microsoft.com/en-us/azure/iot-edge/quickstart)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/iot-edge/quickstart.md)
* Service: iot-edge | 1.0 | process | 1
16,135 | 20,386,858,074 | IssuesEvent | 2022-02-22 08:01:49 | maticnetwork/miden | https://api.github.com/repos/maticnetwork/miden | closed | Stacking of auxiliary traces | processor | The VM has several components whose traces should be "stacked" so that they all share the same set of execution trace columns. These components are:
* Hasher
* Bitwise processor
* Memory controller
All of these components already implement `fill_trace()` methods which should help with the process. The overall procedure for building a trace is as follows:
First, we need to determine overall length of the execution trace (this would happen when we convert `Process` struct into the `ExecutionTrace` [here](https://github.com/maticnetwork/miden/blob/next/processor/src/trace.rs#L19)). This length should be computed as be the max of the lengths of all segments of the trace. For the purposes of this issue, we care only about two segments: stack trace and auxiliary table trace.
Stack trace can be retrieved via `Stack::trace_length()` (which should probably be renamed into `trace_len()` for consistency). All components of the auxiliary table also expose `trace_len()`. So, the length should be: `max(stack_len, hasher_len + bitwise_len + memory_len)`. And then we need to pad this length to the next power of two.
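A quick Python sketch of that arithmetic (names are illustrative; the actual implementation is in Rust):

```python
def next_power_of_two(n: int) -> int:
    # smallest power of two >= n, for n >= 1
    return 1 << (n - 1).bit_length()

def trace_len(stack_len, hasher_len, bitwise_len, memory_len):
    # the auxiliary components are stacked, so their lengths add up;
    # the stack segment sits alongside, so we take the max of the two
    return next_power_of_two(max(stack_len, hasher_len + bitwise_len + memory_len))
```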
Once we have this length, we should allocate 18 vectors of this length, break the table into fragments corresponding to the lengths of each auxiliary component, and use `fill_trace()` to fill the corresponding segment. We fill the fragments as follows:
1. First we fill the Hasher fragment. For this fragment values in column 0 are set to `ZERO`, and the remaining 17 columns contain the actual hasher trace.
2. Then we fill the Bitwise fragment. For this fragment values in column 0 are set to `ONE` and values in column 1 are set to `ZERO`. The next 13 columns contain the actual bitwise processor trace. The remaining 3 columns are padded with `ZERO`.
3. Then we fill the Memory fragment. For this fragment values in columns 0 and 1 are set to `ONE`, values in column 2 are set to `ZERO`. The next 14 columns contain the actual memory trace, and the last column is padded with `ZERO`.
4. If the above doesn't fill the table completely, we need to pad the remaining rows. For padded rows, values in the first 3 columns are set to `ONE`, and values in the remaining columns are set to `ZERO`.
Even though the above is described sequentially, we can fill all fragments in parallel; the selector pattern is summarized in the sketch below.
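The selector pattern from the steps above, as a sketch (Python dict for brevity; the real code is Rust):

```python
SELECTOR_PREFIX = {
    "hasher":  [0],        # remaining 17 columns hold the hasher trace
    "bitwise": [1, 0],     # next 13 columns hold the bitwise trace, last 3 are ZERO
    "memory":  [1, 1, 0],  # next 14 columns hold the memory trace, last 1 is ZERO
    "padding": [1, 1, 1],  # all remaining columns are ZERO
}
```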
Also, the Bitwise trace is currently missing two columns (so, currently in the code the number of bitwise columns is 11 rather than 13). These are the selector columns that we need in order to identify the bitwise operation being performed. | 1.0 | process | 1
38,389 | 10,191,928,579 | IssuesEvent | 2019-08-12 09:43:08 | digital-asset/daml | https://api.github.com/repos/digital-asset/daml | closed | engineering.da-int.net references | component/build-system | To make daml usable by the public, all the internal engineering.da-int.net references need to be replaced with public-accessible equivalents. See #304 for the epic.
```
dev-env/bin/dade-copyright-headers
17:# http://engineering.da-int.net/docs/engineering-handbook/licenses-copyrights.html
dev-env/bin/da-sdk-head
20: echo "Please visit https://engineering.da-int.net/sdk/summary/releases/latest/doc/packages/sdk-docs-installation/"
dev-env/windows/manifests/java-oraclejdk-8u111.json
8: "url": "https://engineering.da-int.net/nix-vendored/jdk-8u111-windows-x64.exe#/dl.7z",
12: "url": "https://engineering.da-int.net/nix-vendored/jdk-8u111-windows-i586.exe#/dl.7z",
40: "dl https://engineering.da-int.net/nix-vendored/jce_policy-8.zip \"$dir\\tmp\\jce_policy-8.zip\"",
dev-env/windows/manifests/README.md
40:In general, binaries are provided from: https://engineering.da-int.net/nix-vendored/<tool_name>/<file_name>
dev-env/windows/manifests/vcredist-14.0.23026.json
9: "url": "https://engineering.da-int.net/nix-vendored/vc_redist/vc_redist.x64.14.0.23026.exe",
dev-env/windows/README.md
11: iex (new-object net.webclient).downloadstring('https://engineering.da-int.net/download/dadew-installer')
dev-env/windows/dadew-installer.ps1
38:(new-object net.webclient).downloadFile("https://engineering.da-int.net/download/dadew.zip", $zipFile)
ledger/sandbox/README.md
41:The dar files are the archives containing compiled DAML code. We highly recommend generating the dar files using the new DAML packaging, as described in https://engineering.da-int.net/docs/da-all-docs/packages/daml-project/. This will ensure that you're generating the .dar files correctly. The linked page also gives a good overview of what the dar files are, along with other key concepts.
``` | 1.0 | non_process | 0
15,161 | 18,912,322,584 | IssuesEvent | 2021-11-16 15:14:22 | opensafely-core/job-server | https://api.github.com/repos/opensafely-core/job-server | opened | Add url functionality | application-process | It would be good to add url functionality
> As an applicant, I want to add a url to the application, to hyperlink to a paper that supports my rationale for conducting the survey
> As a reviewer, I want to click on a url, to take me directly to the location referenced by a researcher.
I think for the moment we could make it clear in the documentation that applicants should copy and paste the full url into the application, but here are screenshots from current applications where a url would've been useful or was intended, for when we get to it:
- In this example the applicant is intending to put in a url

- In this example I am reviewing this application and I have no idea where Xu et al. is located

- sheer laziness, but as a reviewer I have to copy and paste this link

 | 1.0 | process | 1
23,149 | 11,860,025,541 | IssuesEvent | 2020-03-25 14:17:58 | graalvm/graaljs | https://api.github.com/repos/graalvm/graaljs | closed | GraalJs is very slow | performance | I'm trying to migrate my Nashorn apps to Graal and did some tests. The results are strange; maybe I did something wrong. Graal was downloaded today via Maven; macOS 10.15.3 on a MacBook Pro, Oracle JDK 13 (I tried different versions with the same result).
```
public static void main(String[] args) throws Exception
{
    String src = "\n" +
            "if(!storage)\n" +
            "{\n" +
            "var storage={};\n" +
            "}\n" +
            "for(let key in params)\n" +
            "{\n" +
            " storage[key]='migrated: '+params[key];" +
            "}\n" +
            "v1='newPVal1';\n" +
            "let v='newVal4';\n" +
            "i++;\n" +
            "params.arg1='11111';\n" +
            "params['2']='22222';\n" +
            "\n";
    try
    {
        Context.Builder contextBuilder = Context.newBuilder("js");
        GraalJSScriptEngine engine = GraalJSScriptEngine.create(null, contextBuilder);
        CompiledScript cs;
        Invocable inv;
        org.graalvm.polyglot.Context context = org.graalvm.polyglot.Context.create();
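        // note: on a stock JDK without the GraalVM compiler on the module path,
        // this polyglot context runs in interpreter-only mode (a warning about
        // missing runtime compilation is printed at startup), which likely
        // explains the slow numbers below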
        Value bd = context.getBindings("js");
        bd.putMember("v1", "value1");
        bd.putMember("v2", "value2");
        bd.putMember("v3", "value3");
        bd.putMember("i", "0");
        inv = (Invocable) engine;
        // System.out.println(src);
        // cs = ((Compilable) engine).compile(src);
        src = "function process(params) { " + src + "\n}\n" +
                "print('eval done!');\n";
        System.out.println(src);
        context.eval("js", src);
        //bd = context.getBindings("js");
        Value foo = (Value) bd.getMember("process");
        Map<String, Object> params = new HashMap<>();
        ZonedDateTime start = ZonedDateTime.now();
        for (int i = 0; i < 10000000; i++)
        {
            params.clear();
            params.put("arg1", "arg value1");
            foo.execute(ProxyObject.fromMap(params));
            // Map<String, Object> res = (Map) inv.invokeFunction("__process_data__", jsArgs);
        }
        long time = start.until(ZonedDateTime.now(), ChronoUnit.MILLIS);
        System.out.println("duration " + time);
    } catch (Exception e)
    {
        System.out.println("Exception: " + e.getMessage());
        e.printStackTrace();
    }
}
```
```
/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar=50579:/Applications/IntelliJ IDEA.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath /Users/viktor/Documents/work/java/js/GraalVM/out/production/GraalVM:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/js-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/icu4j-64.2.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/junit-4.12.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-tree-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-util-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/regex-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-commons-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/profiler-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-analysis-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/graal-sdk-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/hamcrest-core-1.3.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/truffle-api-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/chromeinspector-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/js-scriptengine-20.0.0.jar JsTest
function process(params) {
if(!storage)
{
var storage={};
}
for(let key in params)
{
storage[key]='migrated: '+params[key];}
v1='newPVal1';
let v='newVal4';
i++;
params.arg1='11111';
params['2']='22222';
}
print('eval done!');
eval done!
duration 20130
Process finished with exit code 0
```
```
public static void main(String[] args) throws Exception
{
ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine(
new String[]{"-strict", "--no-java", "--no-syntax-extensions", "--optimistic-types=true"},
null,
scr -> false);
CompiledScript cs;
Invocable inv;
Bindings bd;
inv = (Invocable) engine;
boolean showBindings = false;
bd = engine.getBindings(ScriptContext.ENGINE_SCOPE);
bd.remove("load");
bd.remove("loadWithNewGlobal");
bd.remove("exit");
bd.remove("eval");
bd.remove("quit");
bd.put("v1", "value1");
bd.put("v2", "value2");
bd.put("v3", "value3");
bd.put("i", "0");
String src = "if(!storage)\n" +
"{\n" +
"var storage={};\n" +
"}\n" +
"for(var key in params)\n" +
"{\n" +
" storage[key]='migrated: '+params[key];" +
"}\n" +
"v1='newPVal1';\n" +
"var v='newVal4';\n" +
"i++;\n" +
"params.arg1='11111';\n" +
"params['2']='22222';\n" +
"\n";
cs = ((Compilable) engine).compile(src);
src = "function process(params) { " + src + "\n}\n" +
"print('eval done!');\n";
System.out.println(src);
cs = ((Compilable) engine).compile(src);
cs.eval();
Map<String, String> params = new HashMap<>();
ZonedDateTime start = ZonedDateTime.now();
for (int i = 0; i < 10000000; i++)
{
params.clear();
params.put("arg1", "arg value1");
Map<String, Object> res = (Map) inv.invokeFunction("process", params);
}
long time = start.until(ZonedDateTime.now(), ChronoUnit.MILLIS);
System.out.println("duration: " + time);
}
```
```
/Library/Java/JavaVirtualMachines/jdk-10.0.2.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar=50725:/Applications/IntelliJ IDEA.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath /Users/viktor/Documents/work/java/js/nashorn/out/production/js JsTest
function process(params) { if(!storage)
{
var storage={};
}
for(var key in params)
{
storage[key]='migrated: '+params[key];}
v1='newPVal1';
var v='newVal4';
i++;
params.arg1='11111';
params['2']='22222';
}
print('eval done!');
eval done!
duration: 4172
Process finished with exit code 0
```
| True | GraalJs is very slow - I'm trying to migrate my Nashorn apps to Graal and ran some tests. The results are strange, so maybe I did something wrong. Graal was downloaded today via Maven; macOS 10.15.3 on a MacBook Pro, Oracle JDK 13 (I tried different versions with the same result)
```
public static void main(String[] args) throws Exception
{
String src = "\n" +
"if(!storage)\n" +
"{\n" +
"var storage={};\n" +
"}\n" +
"for(let key in params)\n" +
"{\n" +
" storage[key]='migrated: '+params[key];" +
"}\n" +
"v1='newPVal1';\n" +
"let v='newVal4';\n" +
"i++;\n" +
"params.arg1='11111';\n" +
"params['2']='22222';\n" +
"\n";
try
{
Context.Builder contextBuilder = Context.newBuilder("js");
GraalJSScriptEngine engine = GraalJSScriptEngine.create(null, contextBuilder);
CompiledScript cs;
Invocable inv;
org.graalvm.polyglot.Context context = org.graalvm.polyglot.Context.create();
Value bd = context.getBindings("js");
bd.putMember("v1", "value1");
bd.putMember("v2", "value2");
bd.putMember("v3", "value3");
bd.putMember("i", "0");
inv = (Invocable) engine;
// System.out.println(src);
// cs = ((Compilable) engine).compile(src);
src = "function process(params) { " + src + "\n}\n" +
"print('eval done!');\n";
System.out.println(src);
context.eval("js", src);
//bd = context.getBindings("js");
Value foo = (Value) bd.getMember("process");
Map<String, Object> params = new HashMap<>();
ZonedDateTime start = ZonedDateTime.now();
for (int i = 0; i < 10000000; i++)
{
params.clear();
params.put("arg1", "arg value1");
foo.execute(ProxyObject.fromMap(params));
// Map<String, Object> res = (Map) inv.invokeFunction("__process_data__", jsArgs);
}
long time = start.until(ZonedDateTime.now(), ChronoUnit.MILLIS);
System.out.println("duration " + time);
} catch (Exception e)
{
System.out.println("Exception: " + e.getMessage());
e.printStackTrace();
}
}
```
```
/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar=50579:/Applications/IntelliJ IDEA.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath /Users/viktor/Documents/work/java/js/GraalVM/out/production/GraalVM:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/js-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/icu4j-64.2.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/junit-4.12.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-tree-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-util-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/regex-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-commons-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/profiler-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/asm-analysis-7.1.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/graal-sdk-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/hamcrest-core-1.3.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/truffle-api-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/chromeinspector-20.0.0.jar:/Users/viktor/Documents/work/java/js/GraalVM/lib/js-scriptengine-20.0.0.jar JsTest
function process(params) {
if(!storage)
{
var storage={};
}
for(let key in params)
{
storage[key]='migrated: '+params[key];}
v1='newPVal1';
let v='newVal4';
i++;
params.arg1='11111';
params['2']='22222';
}
print('eval done!');
eval done!
duration 20130
Process finished with exit code 0
```
```
public static void main(String[] args) throws Exception
{
ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine(
new String[]{"-strict", "--no-java", "--no-syntax-extensions", "--optimistic-types=true"},
null,
scr -> false);
CompiledScript cs;
Invocable inv;
Bindings bd;
inv = (Invocable) engine;
boolean showBindings = false;
bd = engine.getBindings(ScriptContext.ENGINE_SCOPE);
bd.remove("load");
bd.remove("loadWithNewGlobal");
bd.remove("exit");
bd.remove("eval");
bd.remove("quit");
bd.put("v1", "value1");
bd.put("v2", "value2");
bd.put("v3", "value3");
bd.put("i", "0");
String src = "if(!storage)\n" +
"{\n" +
"var storage={};\n" +
"}\n" +
"for(var key in params)\n" +
"{\n" +
" storage[key]='migrated: '+params[key];" +
"}\n" +
"v1='newPVal1';\n" +
"var v='newVal4';\n" +
"i++;\n" +
"params.arg1='11111';\n" +
"params['2']='22222';\n" +
"\n";
cs = ((Compilable) engine).compile(src);
src = "function process(params) { " + src + "\n}\n" +
"print('eval done!');\n";
System.out.println(src);
cs = ((Compilable) engine).compile(src);
cs.eval();
Map<String, String> params = new HashMap<>();
ZonedDateTime start = ZonedDateTime.now();
for (int i = 0; i < 10000000; i++)
{
params.clear();
params.put("arg1", "arg value1");
Map<String, Object> res = (Map) inv.invokeFunction("process", params);
}
long time = start.until(ZonedDateTime.now(), ChronoUnit.MILLIS);
System.out.println("duration: " + time);
}
```
```
/Library/Java/JavaVirtualMachines/jdk-10.0.2.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar=50725:/Applications/IntelliJ IDEA.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath /Users/viktor/Documents/work/java/js/nashorn/out/production/js JsTest
function process(params) { if(!storage)
{
var storage={};
}
for(var key in params)
{
storage[key]='migrated: '+params[key];}
v1='newPVal1';
var v='newVal4';
i++;
params.arg1='11111';
params['2']='22222';
}
print('eval done!');
eval done!
duration: 4172
Process finished with exit code 0
```
| non_process | graaljs is very slow i m trying to migrate my nashorn apps to graal and did some tests results are strange maybe i did something wrong graal downloaded today by maven macos on macbook pro oracle jdk tried different versions with the same result public static void main string args throws exception string src n if storage n n var storage n n for let key in params n n storage migrated params n n let v n i n params n params n n try context builder contextbuilder context newbuilder js graaljsscriptengine engine graaljsscriptengine create null contextbuilder compiledscript cs invocable inv org graalvm polyglot context context org graalvm polyglot context create value bd context getbindings js bd putmember bd putmember bd putmember bd putmember i inv invocable engine system out println src cs compilable engine compile src src function process params src n n print eval done n system out println src context eval js src bd context getbindings js value foo value bd getmember process map params new hashmap zoneddatetime start zoneddatetime now for int i i i params clear params put arg foo execute proxyobject frommap params map res map inv invokefunction process data jsargs long time start until zoneddatetime now chronounit millis system out println duration time catch exception e system out println exception e getmessage e printstacktrace library java javavirtualmachines jdk jdk contents home bin java javaagent applications intellij idea app contents lib idea rt jar applications intellij idea app contents bin dfile encoding utf classpath users viktor documents work java js graalvm out production graalvm users viktor documents work java js graalvm lib asm jar users viktor documents work java js graalvm lib js jar users viktor documents work java js graalvm lib jar users viktor documents work java js graalvm lib junit jar users viktor documents work java js graalvm lib asm tree jar users viktor documents work java js graalvm lib asm util jar users viktor documents work java js graalvm lib regex jar users viktor documents work java js graalvm lib asm commons jar users viktor documents work java js graalvm lib profiler jar users viktor documents work java js graalvm lib asm analysis jar users viktor documents work java js graalvm lib graal sdk jar users viktor documents work java js graalvm lib hamcrest core jar users viktor documents work java js graalvm lib truffle api jar users viktor documents work java js graalvm lib chromeinspector jar users viktor documents work java js graalvm lib js scriptengine jar jstest function process params if storage var storage for let key in params storage migrated params let v i params params print eval done eval done duration process finished with exit code public static void main string args throws exception scriptengine engine new nashornscriptenginefactory getscriptengine new string strict no java no syntax extensions optimistic types true null scr false compiledscript cs invocable inv bindings bd inv invocable engine boolean showbindings false bd engine getbindings scriptcontext engine scope bd remove load bd remove loadwithnewglobal bd remove exit bd remove eval bd remove quit bd put bd put bd put bd put i string src if storage n n var storage n n for var key in params n n storage migrated params n n var v n i n params n params n n cs compilable engine compile src src function process params src n n print eval done n system out println src cs compilable engine compile src cs eval map params new hashmap zoneddatetime start zoneddatetime now for int 
i i i params clear params put arg map res map inv invokefunction process params long time start until zoneddatetime now chronounit millis system out println duration time library java javavirtualmachines jdk jdk contents home bin java javaagent applications intellij idea app contents lib idea rt jar applications intellij idea app contents bin dfile encoding utf classpath users viktor documents work java js nashorn out production js jstest function process params if storage var storage for var key in params storage migrated params var v i params params print eval done eval done duration process finished with exit code | 0 |
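Two caveats are worth noting about the benchmark above, independent of any graal-js bug: on a stock JDK without the Graal compiler on the module path, Truffle languages run in interpreter mode and are expected to be slow; and allocating a new `ProxyObject` on every iteration adds avoidable overhead, since `ProxyObject.fromMap` returns a live view of the map. A hedged sketch of a tighter loop using the same polyglot API — a benchmarking suggestion only, not official graal-js guidance:

```
import java.util.HashMap;
import java.util.Map;
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;
import org.graalvm.polyglot.proxy.ProxyObject;

public final class GraalBench {
    public static void main(String[] args) {
        try (Context context = Context.create("js")) {
            context.eval("js", "function process(params) { params.arg1 = '11111'; }");
            Value process = context.getBindings("js").getMember("process");

            // Reuse one map and one proxy: ProxyObject.fromMap wraps the map,
            // so mutating the map between calls is visible to the guest code.
            Map<String, Object> params = new HashMap<>();
            ProxyObject proxy = ProxyObject.fromMap(params);

            long start = System.nanoTime();
            for (int i = 0; i < 10_000_000; i++) {
                params.put("arg1", "arg value1");
                process.execute(proxy);
            }
            System.out.println("duration ms: " + (System.nanoTime() - start) / 1_000_000);
        }
    }
}
```

To compare JIT-compiled numbers rather than interpreter numbers, run this on a GraalVM distribution (or with the Graal compiler added via the JVMCI upgrade module path).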
106,726 | 11,496,132,014 | IssuesEvent | 2020-02-12 07:09:03 | onaio/onadata | https://api.github.com/repos/onaio/onadata | closed | Update Contributing Guideline | Documentation | ### Problem description
The current contributing guidelines do not cover the need for commits to be verified. It would be nice if we updated them to cover how one should go about signing their commits.
| 1.0 | Update Contributing Guideline - ### Problem description
The current contributing guidelines do not cover the need for commits to be verified. It would be nice if we updated them to cover how one should go about signing their commits.
| non_process | update contributing guideline problem description the current contributing guidelines do not cover the need for commits to be verified would be nice if we updated them to cover how one should go about signing their commits | 0 |
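For the contributing-guidelines row above, the usual way to make commits show as verified on GitHub is GPG signing; a minimal example of the commands such documentation could show (the key id is a placeholder):

```
# one-time setup; find <KEY_ID> via: gpg --list-secret-keys --keyid-format=long
git config --global user.signingkey <KEY_ID>

# sign a single commit
git commit -S -m "Update contributing guidelines"

# or sign every commit by default
git config --global commit.gpgsign true
```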
542,835 | 15,867,945,310 | IssuesEvent | 2021-04-08 17:33:16 | bounswe/2021SpringGroup3 | https://api.github.com/repos/bounswe/2021SpringGroup3 | closed | Domain Analysis | Priority: High Status: Completed Type: Research | - Perform an analysis of the domain for the project.
- Prepare a table of common and missing features. | 1.0 | Domain Analysis - - Perform an analysis of the domain for the project.
- Prepare a table of common and missing features. | non_process | domain analysis perform an analysis of the domain for the project prepare a table of common and missing features | 0 |
18,684 | 24,594,892,712 | IssuesEvent | 2022-10-14 07:26:29 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [DID] 'linkid' and 'text' are getting de-identified | Bug P1 Response datastore Process: Fixed Process: Tested dev | 'linkid' and 'text' are getting de-identified when the admin configures the activity id and name with the following in the Study builder
Activity id and name: dated, timeff

| 2.0 | [DID] 'linkid' and 'text' are getting de-identified - 'linkid' and 'text' are getting de-identified when the admin configures the activity id and name with the following in the Study builder
Activity id and name: dated, timeff

| process | linkid and text are getting de identified linkid and text are getting de identified when the admin configure the activity id and name with the following in the study builder activity id and name dated timeff | 1 |
7,601 | 10,712,177,270 | IssuesEvent | 2019-10-25 08:25:22 | aiidateam/aiida-core | https://api.github.com/repos/aiidateam/aiida-core | closed | `WorkflowFactory` and `CalculationFactory` should verify type of loaded entry point | priority/important topic/plugin-system topic/processes type/bug | Currently it is possible to add a `CalcJob` or `calcfunction` to the `aiida.workflows` group and load it with the `WorkflowFactory`. It is better to raise an exception, as users should use the `aiida.calculations` group for this. There is no way we can prevent it from being added to the entry point group, unless we perform a generic check on all registered entry points, trying to load them and verifying the type, but this might be unnecessarily heavy. | 1.0 | `WorkflowFactory` and `CalculationFactory` should verify type of loaded entry point - Currently it is possible to add a `CalcJob` or `calcfunction` to the `aiida.workflows` group and load it with the `WorkflowFactory`. It is better to raise an exception, as users should use the `aiida.calculations` group for this. There is no way we can prevent it from being added to the entry point group, unless we perform a generic check on all registered entry points, trying to load them and verifying the type, but this might be unnecessarily heavy. | process | workflowfactory and calculationfactory should verify type of loaded entry point currently it is possible to add a calcjob or calcfunction to the aiida workflows group and load it with the workflowfactory it is better to raise as users should use the aiida calculations group for this there is no way we can prevent it from being add to the entry point unless we perform a generic check on all registered entry points trying to load them and verifying the type but this might be unnecessarily heavy | 1
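For the aiida row above, the requested check can be done lazily at load time instead of scanning every registered entry point up front. aiida itself is Python, so this Java sketch (kept in Java for consistency with the other code in this document) only shows the shape of the check:

```
public final class TypedFactory {
    // Loads a class by name and verifies it is a subtype of the expected base,
    // failing fast instead of letting a mis-registered entry point through.
    public static <T> Class<? extends T> load(String className, Class<T> expectedBase)
            throws ClassNotFoundException {
        Class<?> loaded = Class.forName(className);
        if (!expectedBase.isAssignableFrom(loaded)) {
            throw new IllegalArgumentException(
                    className + " is not a " + expectedBase.getSimpleName()
                    + "; register it under the matching entry point group instead");
        }
        return loaded.asSubclass(expectedBase);
    }
}
```

Doing the check inside the factory keeps it cheap: nothing is loaded or verified until a user actually asks for that entry point.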
4,618 | 7,461,467,406 | IssuesEvent | 2018-03-31 03:18:48 | ChickenKyiv/api-extended-database | https://api.github.com/repos/ChickenKyiv/api-extended-database | closed | raven/sentry configuration | in-process | `Raven.config('https://77aa2ee9a7ce484497f56278982a0809@sentry.io/305339').install()`
Init:
https://github.com/GroceriStar/groceristar/blob/master/server/server.js#L17
How to use this tracer:
https://github.com/GroceriStar/groceristar/blob/master/server/server.js#L60
If you cover your code well with this tracer, then when you break something I'll receive an email notification and will be able to understand more clearly what went wrong.

| 1.0 | raven/sentry configuration - `Raven.config('https://77aa2ee9a7ce484497f56278982a0809@sentry.io/305339').install()`
Init:
https://github.com/GroceriStar/groceristar/blob/master/server/server.js#L17
How to use this tracer:
https://github.com/GroceriStar/groceristar/blob/master/server/server.js#L60
If you cover your code well with this tracer, then when you break something I'll receive an email notification and will be able to understand more clearly what went wrong.

| process | raven sentry configuration raven config init how to use this tracer if you ll cover your code well with this tracer when you ll break something i ll receive an email notification and will be able to understand more clearly what was goes wrong | 1 |
15,462 | 11,520,590,783 | IssuesEvent | 2020-02-14 15:05:13 | dotnet/dotnet-docker | https://api.github.com/repos/dotnet/dotnet-docker | closed | Dev build script should build sample images | area:infrastructure enhancement triaged | On a machine in which the sample images have not been built or pulled, attempting to run the `build-and-test.ps1` script will result in errors similar to `'mcr.microsoft.com/dotnet/core/samples:dotnetapp-buster-slim' could not be found on disk.` This is because when the `run-tests.ps1` script gets invoked, it defaults to not pulling images. And since the build script didn't end up building the sample images, those images will not exist on disk and result in an error when the sample tests run.
The build script should be updated to include sample images in the set of images that are built. | 1.0 | Dev build script should build sample images - On a machine in which the sample images have not been built or pulled, attempting to run the `build-and-test.ps1` script will result in errors similar to `'mcr.microsoft.com/dotnet/core/samples:dotnetapp-buster-slim' could not be found on disk.` This is because when the `run-tests.ps1` script gets invoked, it defaults to not pulling images. And since the build script didn't end up building the sample images, those images will not exist on disk and result in an error when the sample tests run.
The build script should be updated to include sample images in the set of images that are built. | non_process | dev build script should build sample images on a machine in which the sample images have not been built or pulled attempting to run the build and test script will result in errors similar to mcr microsoft com dotnet core samples dotnetapp buster slim could not be found on disk this is because when the run tests script gets invoked it defaults to not pulling images and since the build script didn t end up building the sample images those images will not exist on disk and result in an error when the sample tests run the build script should be updated to include sample images in the set of images that are built | 0 |
84,866 | 24,453,420,888 | IssuesEvent | 2022-10-07 02:59:31 | abhiTronix/deffcode | https://api.github.com/repos/abhiTronix/deffcode | closed | [Idea]: Similar to OpenCV, Index based Camera Device Capture | Enhancement :zap: WIP :building_construction: Idea :bulb: | ### Issue guidelines
- [X] I've read the [Issue Guidelines](https://abhitronix.github.io/deffcode/latest/contribution/issue/#submitting-an-issue-guidelines) and wholeheartedly agree.
### Issue Checklist
- [X] I have searched open or closed [issues](https://github.com/abhiTronix/deffcode/issues) and found nothing related to my idea.
- [X] I have read the [Documentation](https://abhitronix.github.io/deffcode/latest) and it doesn't mention anything about my idea.
- [X] To my best knowledge, my idea wouldn't break something for other users.
### Describe your Idea
Currently, DeFFcode users have to manually assign the device name or its path using the [`source`](https://abhitronix.github.io/deffcode/latest/recipes/reference/sourcer/params/#source) parameter and the demuxer using the [`source_demuxer`](https://abhitronix.github.io/deffcode/latest/recipes/reference/sourcer/params/#source_demuxer) parameter of the respective API for the given input device. This makes the feature less user-friendly and complicated for the everyday user, who might find it extremely difficult to replicate.
This issue will track the **Index based Camera Device Capture feature**, which implements Device Indexing _(similar to OpenCV)_, where the user just has to assign the device index as an integer _(-n to n-1<sup>th</sup>)_ in the `source` parameter of DeFFcode APIs to directly access the given input device, thus making things much simpler.
### Use Cases
The **Index based Camera Device Capture feature** implements Device Indexing _(similar to OpenCV)_, where the user just has to assign the device index as an integer _(-n to n-1<sup>th</sup>)_ in the `source` parameter of DeFFcode APIs to directly access the given input device in a few seconds.
This feature is much better than manually Identifying and Specifying Video Capture Device Name/Path and suitable Demuxer on different OS platforms, thus minimizing perceived complexity of decoding Live Feed Devices.
### Any other Relevant Information?
_No response_ | 1.0 | [Idea]: Similar to OpenCV, Index based Camera Device Capture - ### Issue guidelines
- [X] I've read the [Issue Guidelines](https://abhitronix.github.io/deffcode/latest/contribution/issue/#submitting-an-issue-guidelines) and wholeheartedly agree.
### Issue Checklist
- [X] I have searched open or closed [issues](https://github.com/abhiTronix/deffcode/issues) and found nothing related to my idea.
- [X] I have read the [Documentation](https://abhitronix.github.io/deffcode/latest) and it doesn't mention anything about my idea.
- [X] To my best knowledge, my idea wouldn't break something for other users.
### Describe your Idea
Currently, DeFFcode users have to manually assign the device name or its path using the [`source`](https://abhitronix.github.io/deffcode/latest/recipes/reference/sourcer/params/#source) parameter and the demuxer using the [`source_demuxer`](https://abhitronix.github.io/deffcode/latest/recipes/reference/sourcer/params/#source_demuxer) parameter of the respective API for the given input device. This makes the feature less user-friendly and complicated for the everyday user, who might find it extremely difficult to replicate.
This issue will track the **Index based Camera Device Capture feature**, which implements Device Indexing _(similar to OpenCV)_, where the user just has to assign the device index as an integer _(-n to n-1<sup>th</sup>)_ in the `source` parameter of DeFFcode APIs to directly access the given input device, thus making things much simpler.
### Use Cases
The **Index based Camera Device Capture feature** implements Device Indexing _(similar to OpenCV)_, where the user just has to assign the device index as an integer _(-n to n-1<sup>th</sup>)_ in the `source` parameter of DeFFcode APIs to directly access the given input device in a few seconds.
This feature is much better than manually Identifying and Specifying Video Capture Device Name/Path and suitable Demuxer on different OS platforms, thus minimizing perceived complexity of decoding Live Feed Devices.
### Any other Relevant Information?
_No response_ | non_process | similar to opencv index based camera device capture issue guidelines i ve read the and wholeheartedly agree issue checklist i have searched open or closed and found nothing related to my idea i have read the and it doesn t mention anything about my idea to my best knowledge my idea wouldn t break something for other users describe your idea currently deffcode users have to manually assign device name or its path using parameter and the demuxer using parameter of the respective api for the given input device and thereby makes this feature less user friendly and complicated for everyday user who might find it extremely difficult to replicate this issue will track the index based camera device capture feature which implements device indexing similar to opencv where the user just have to assign device index as integer n to n th in source parameter of deffcode apis to directly access the given input device thus making things much simpler use cases the index based camera device capture feature implements device indexing similar to opencv where the user just have to assign device index as integer n to n th in source parameter of deffcode apis to directly access the given input device in few seconds this feature is much better than manually identifying and specifying video capture device name path and suitable demuxer on different os platforms thus minimizing perceived complexity of decoding live feed devices any other relevant information no response | 0 |
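For reference, the OpenCV behaviour the deffcode idea above mimics — an integer index selecting the n-th capture device — looks like this through OpenCV's Java bindings (illustrative only; deffcode itself is a Python library):

```
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public final class IndexCapture {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        VideoCapture cap = new VideoCapture(0); // device index, not a name or path
        Mat frame = new Mat();
        if (cap.isOpened() && cap.read(frame)) {
            System.out.println("grabbed " + frame.width() + "x" + frame.height());
        }
        cap.release();
    }
}
```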
449,243 | 12,965,644,387 | IssuesEvent | 2020-07-20 22:50:00 | Poobslag/turbofat | https://api.github.com/repos/Poobslag/turbofat | closed | Skip later piece kicks after a failed floor kick | priority-4 | Currently, a failed floor kick results in the rest of the piece kicks still being attempted.
This rewards some very unusual rotation attempts. For example, a spin move like this might fail two times:
```
..... .LLL.
.LL.. .L...
##L.. -> ##...
#.L.. #....
#.##. #.##.
```
This causes a floor kick, when the player wants to get the piece into that narrow gap. However, the third time the rotation is attempted, it results in this:
```
..... .....
.LL.. .....
##L.. -> ##...
#.L.. #LLL.
#.##. #L##.
```
In a way I guess this is maybe "better" because it gives the player flexibility if they REALLY know the intricacies of the kick system. But really it's much worse because no player would expect this behavior; you have to memorize how many floor kicks each piece has and exhaust them just to get the piece where you want.
When the player is out of floor kicks, a failed "you have no floor kicks" kick should result in no kick at all. It should not attempt kicks later in the list. | 1.0 | Skip later piece kicks after a failed floor kick - Currently, a failed floor kick results in the rest of the piece kicks still being attempted.
This rewards some very unusual rotation attempts. For example, a spin move like this might fail two times:
```
..... .LLL.
.LL.. .L...
##L.. -> ##...
#.L.. #....
#.##. #.##.
```
This causes a floor kick, when the player wants to get the piece into that narrow gap. However, the third time the rotation is attempted, it results in this:
```
..... .....
.LL.. .....
##L.. -> ##...
#.L.. #LLL.
#.##. #L##.
```
In a way I guess this is maybe "better" because it gives the player flexibility if they REALLY know the intricacies of the kick system. But really it's much worse because no player would expect this behavior; you have to memorize how many floor kicks each piece has and exhaust them just to get the piece where you want.
When the player is out of floor kicks, a failed "you have no floor kicks" kick should result in no kick at all. It should not attempt kicks later in the list. | non_process | skip later piece kicks after a failed floor kick currently a failed floor kick results in the rest of the piece kicks still being attempted this rewards some very unusual rotation attempts for example a spin move like this might fail two times lll ll l l l this causes a floor kick when the player wants to get the piece into that narrow gap however the third time the rotation is attempted it results in this ll l l lll l in a way i guess this is maybe better because it gives the player flexibility if they really know the intricacies of the kick system but really it s much worse because no player would expect this behavior you have to memorize how many floor kicks each piece has and exhaust them just to get the piece where you want when the player is out of floor kicks a failed you have no floor kicks kick should result in no kick at all it should not attempt kicks later in the list | 0 |
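The rule proposed above — once a floor kick is attempted with no floor kicks remaining, stop consulting later kicks entirely — is easy to state in code. The game itself is not written in Java, so the sketch below is a generic illustration with hypothetical names:

```
import java.util.List;
import java.util.function.Predicate;

final class KickResolver {
    record Kick(int dx, int dy) {
        // A kick that moves the piece up; assumes +y points down.
        boolean isFloorKick() { return dy < 0; }
    }

    /**
     * Returns the first kick that fits, or null. Per the proposal above, once a
     * floor kick is reached while the piece has no floor kicks left, the whole
     * search aborts instead of falling through to later kicks in the list.
     */
    static Kick resolve(List<Kick> kicks, int floorKicksLeft, Predicate<Kick> fits) {
        for (Kick kick : kicks) {
            if (kick.isFloorKick() && floorKicksLeft <= 0) {
                return null; // no kick at all; later kicks are skipped too
            }
            if (fits.test(kick)) {
                return kick;
            }
        }
        return null;
    }
}
```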
234,689 | 18,013,611,710 | IssuesEvent | 2021-09-16 11:31:31 | ehuckriede/Toronto_renting_regulations | https://api.github.com/repos/ehuckriede/Toronto_renting_regulations | opened | Expand the research motivation part | documentation | - Motivate your research question or business problem
- Clearly explain which problem is solved
| 1.0 | Expand the research motivation part - - Motivate your research question or business problem
- Clearly explain which problem is solved
| non_process | expand the research motivation part motivate your research question or business problem clearly explain which problem is solved | 0 |
1,975 | 4,804,048,457 | IssuesEvent | 2016-11-02 12:16:06 | paulkornikov/Pragonas | https://api.github.com/repos/paulkornikov/Pragonas | closed | Stop saving the sg file to the server before reading it | a-enhancement chargement processus workload III | Just an example of how you can read the uploaded file without saving it on the server:
```
// Use the InputStream to get the actual stream sent.
StreamReader csvreader = new StreamReader(UploadedFile.InputStream);
while (!csvreader.EndOfStream)
{
    // Each line is one semicolon-separated record.
    var line = csvreader.ReadLine();
    var values = line.Split(';');
}
```
See also CsvHelper:
https://joshclose.github.io/CsvHelper/#
| 1.0 | Stop saving the sg file to the server before reading it - Just an example of how you can read the uploaded file without saving it on the server:
```
// Use the InputStream to get the actual stream sent.
StreamReader csvreader = new StreamReader(UploadedFile.InputStream);
while (!csvreader.EndOfStream)
{
    // Each line is one semicolon-separated record.
    var line = csvreader.ReadLine();
    var values = line.Split(';');
}
```
See also CsvHelper:
https://joshclose.github.io/CsvHelper/#
| process | stop saving the sg file to the server before reading it just an example of how you can read the uploaded file without saving it on the server use the inputstream to get the actual stream sent streamreader csvreader new streamreader uploadedfile inputstream while csvreader endofstream var line csvreader readline var values line split see also csvhelper | 1