| Column | Dtype | Classes / lengths / range |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 classes |
| text_combine | string | length 96 – 211k |
| label | string | 2 classes (process, non_process) |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
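Read as data, each record below is one labeled GitHub issue event. A minimal loading sketch, assuming the table is available as a CSV export and using pandas; the file name `issues.csv` is a placeholder, not part of this dump:

```python
# Minimal sketch: load the dump and inspect the two label columns.
# Assumes a CSV export of the table above; "issues.csv" is a placeholder name.
import pandas as pd

df = pd.read_csv("issues.csv")

# "label" holds the two string classes; "binary_label" is their 0/1 encoding.
print(df["label"].value_counts())    # process / non_process
print(df["binary_label"].unique())   # [0 1]

# Every row shown below is consistent with: binary_label == 1 iff label == "process".
assert ((df["label"] == "process") == (df["binary_label"] == 1)).all()

# Keep just the fields a process/non_process text classifier needs.
train_df = df[["text", "binary_label"]].dropna()
```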
***
**Unnamed: 0:** 81,484 · **id:** 30,875,180,005 · **type:** IssuesEvent · **created_at:** 2023-08-03 13:53:29
**repo:** vector-im/element-web · **repo_url:** https://api.github.com/repos/vector-im/element-web · **action:** opened
**title:** adding a new space just redisplays the add dialog
**labels:** T-Defect
**body:**
### Steps to reproduce
1. Viewing a space's room list, eg
2. Click the Add button
3. Click Add space
4. Fill out Name and Address fields (and optionally image and Description, same result)
5. Click Add button
### Outcome
#### What did you expect?
For this child space to be created and appear in the parent's room list
#### What happened instead?
The add dialog redisplays, showing no error.
### Operating system
macos ventura
### Application version
Element version: 1.11.37 Olm version: 3.2.14
### How did you install the app?
_No response_
### Homeserver
matrix.org
### Will you send logs?
Yes
**index:** 1.0
**text_combine:**
adding a new space just redisplays the add dialog - ### Steps to reproduce
1. Viewing a space's room list, eg
2. Click the Add button
3. Click Add space
4. Fill out Name and Address fields (and optionally image and Description, same result)
5. Click Add button
### Outcome
#### What did you expect?
For this child space to be created and appear in the parent's room list
#### What happened instead?
The add dialog redisplays, showing no error.
### Operating system
macos ventura
### Application version
Element version: 1.11.37 Olm version: 3.2.14
### How did you install the app?
_No response_
### Homeserver
matrix.org
### Will you send logs?
Yes
**label:** non_process
**text:**
adding a new space just redisplays the add dialog steps to reproduce viewing a space s room list eg click the add button click add space fill out name and address fields and optionally image and description same result click add button outcome what did you expect for this child space to be created and appear in the parent s room list what happened instead the add dialog redisplays showing no error operating system macos ventura application version element version olm version how did you install the app no response homeserver matrix org will you send logs yes
**binary_label:** 0
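The first record also makes the derived columns legible: `text_combine` is just the `title` and `body` joined with " - ", and `text` reads like a lowercased copy with URLs, digits, and ASCII punctuation replaced by spaces (emoji, curly quotes, and accented letters survive in later rows). A rough, hypothetical reconstruction of that normalization follows; the real preprocessing code is not part of this dump and evidently differs in details such as HTML-comment handling:

```python
# Hypothetical reconstruction of the "text" column from "title" and "body",
# inferred from the visible rows rather than from documented pipeline code.
import re
import string

_ASCII_PUNCT = re.compile("[%s]" % re.escape(string.punctuation))

def normalize(title: str, body: str) -> str:
    combined = f"{title} - {body}"                       # mirrors "text_combine"
    combined = re.sub(r"https?://\S+", " ", combined)    # URLs do not appear in "text"
    combined = re.sub(r"[0-9]+", " ", combined.lower())  # digits are dropped too
    combined = _ASCII_PUNCT.sub(" ", combined)           # non-ASCII characters survive
    return " ".join(combined.split())                    # collapse all whitespace

# e.g. normalize("adding a new space just redisplays the add dialog", "### Steps to reproduce ...")
# yields "adding a new space just redisplays the add dialog steps to reproduce ..."
```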
***
**Unnamed: 0:** 179,074 · **id:** 6,621,338,252 · **type:** IssuesEvent · **created_at:** 2017-09-21 18:47:01
**repo:** coreos/bugs · **repo_url:** https://api.github.com/repos/coreos/bugs · **action:** opened
**title:** Consider using the NOOP IO scheduler in virtualized environments
**labels:** area/performance component/kernel kind/enhancement priority/P2 team/os
**body:**
# Issue Report #
## Feature Request ##
### Environment ###
All virtualized environments.
### Desired Feature ###
Use `elevator=noop` to use the NOOP IO scheduler. This allows the hypervisor to make scheduling decisions instead. This may not be applicable to all environments, but it's at least [recommended on Hyper-V](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/Best-Practices-for-running-Linux-on-Hyper-V).
**index:** 1.0
**text_combine:**
Consider using the NOOP IO scheduler in virtualized environments - # Issue Report #
## Feature Request ##
### Environment ###
All virtualized environments.
### Desired Feature ###
Use `elevator=noop` to use the NOOP IO scheduler. This allows the hypervisor to make scheduling decisions instead. This may not be applicable to all environments, but it's at least [recommended on Hyper-V](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/Best-Practices-for-running-Linux-on-Hyper-V).
**label:** non_process
**text:**
consider using the noop io scheduler in virtualized environments issue report feature request environment all virtualized environments desired feature use elevator noop to use the noop io scheduler this allows the hypervisor to make scheduling decisions instead this may not be applicable to all environments but it s at least
**binary_label:** 0
***
**Unnamed: 0:** 21,949 · **id:** 30,451,926,309 · **type:** IssuesEvent · **created_at:** 2023-07-16 12:10:45
**repo:** tokio-rs/tokio · **repo_url:** https://api.github.com/repos/tokio-rs/tokio · **action:** closed
**title:** Make `Command::raw_arg` show up in docs
**labels:** C-bug T-docs A-tokio M-process
**body:**
This method was added in #5704, but it does not show up in the documentation because it is windows-only. We need to fix that.
**index:** 1.0
**text_combine:**
Make `Command::raw_arg` show up in docs - This method was added in #5704, but it does not show up in the documentation because it is windows-only. We need to fix that.
**label:** process
**text:**
make command raw arg show up in docs this method was added in but it does not show up in the documentation because it is windows only we need to fix that
**binary_label:** 1
***
**Unnamed: 0:** 9,539 · **id:** 12,507,169,878 · **type:** IssuesEvent · **created_at:** 2020-06-02 13:44:29
**repo:** tikv/tikv · **repo_url:** https://api.github.com/repos/tikv/tikv · **action:** closed
**title:** UCP: Migrate scalar function `MakeSet` from TiDB
**labels:** challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
**body:**
## Description
Port the scalar function `MakeSet` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
**index:** 2.0
**text_combine:**
UCP: Migrate scalar function `MakeSet` from TiDB -
## Description
Port the scalar function `MakeSet` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
**label:** process
**text:**
ucp migrate scalar function makeset from tidb description port the scalar function makeset from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
**binary_label:** 1
***
**Unnamed: 0:** 10,704 · **id:** 13,501,851,654 · **type:** IssuesEvent · **created_at:** 2020-09-13 04:58:54
**repo:** amor71/LiuAlgoTrader · **repo_url:** https://api.github.com/repos/amor71/LiuAlgoTrader · **action:** closed
**title:** expose trade & quote events to strategies
**labels:** in-process
**body:**
including imbalance calculations (which are already done..)
**index:** 1.0
**text_combine:**
expose trade & quote events to strategies - including imbalance calculations (which are already done..)
**label:** process
**text:**
expose trade quote events to strategies including imbalance calculations which are already done
**binary_label:** 1
***
**Unnamed: 0:** 11,973 · **id:** 14,737,017,004 · **type:** IssuesEvent · **created_at:** 2021-01-07 00:37:59
**repo:** kdjstudios/SABillingGitlab · **repo_url:** https://api.github.com/repos/kdjstudios/SABillingGitlab · **action:** closed
**title:** Keener - not able to run reports
**labels:** anc-external anc-ops anc-process anc-report anp-important ant-bug ant-support
**body:**
In GitLab by @kdjstudios on Apr 10, 2018, 08:09
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-10-62175/conversation
**Server:** Hosted
**Client/Site:** Keener
**Account:** NA
**Issue:**
I just tried to run the accounts receivables report and I received the error. We’re sorry, but something went wrong.
**index:** 1.0
**text_combine:**
Keener - not able to run reports - In GitLab by @kdjstudios on Apr 10, 2018, 08:09
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-10-62175/conversation
**Server:** Hosted
**Client/Site:** Keener
**Account:** NA
**Issue:**
I just tried to run the accounts receivables report and I received the error. We’re sorry, but something went wrong.
**label:** process
**text:**
keener not able to run reports in gitlab by kdjstudios on apr submitted by gaylan garrett helpdesk server hosted client site keener account na issue i just tried to run the accounts receivables report and i received the error we’re sorry but something went wrong
**binary_label:** 1
***
**Unnamed: 0:** 19,274 · **id:** 25,463,974,583 · **type:** IssuesEvent · **created_at:** 2022-11-25 00:38:21
**repo:** devssa/onde-codar-em-salvador · **repo_url:** https://api.github.com/repos/devssa/onde-codar-em-salvador · **action:** closed
**title:** Analista de Implantação ERP na [SANKHYA]
**labels:** SALVADOR COMERCIAL INFRAESTRUTURA SUPORTE TÉCNICO PROCESSOS Stale
**body:**
<!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Analista de Implantação ERP
Se você gosta de trabalhar com processos bem estruturados, a Sankhya é o seu lugar!
Venha fazer parte de uma empresa presente no mercado ERP há 30 anos, em constante evolução e com expectativa de crescimento de 40% em 2019.
**Responsabilidades**
- Realizar levantamento de processos nos clientes e sugestão de melhorias quando cabíveis
- Parametrizar o sistema conforme a definição de processos
- Simular e homologar as rotinas implantadas com os usuários nos clientes
- Conduzir treinamentos aos usuários dos clientes
- Acompanhar a colocação do sistema em produção e do desempenho após implantação
- Contribuir com a aplicação das melhores práticas de gestão aplicadas no mercado/segmento de atuação do cliente, com o objetivo de apoiá-lo na evolução do seu negócio, garantindo que os benefícios da solução sejam perceptíveis
## Local
Salvador - Bahia
## Benefícios
- Plano de Saúde
- Plano Odontológico
- Universidade Corporativa
- Vale Alimentação ou Refeição
- Reembolso de KM / Despesas de viagens
#### Diferenciais
Na Sankhya você trabalhará com um dos produtos mais inovadores do mercado ERP! Terá oportunidades de desenvolvimento profissional e experiência abrangente em diversas áreas de negócio, atendendo clientes de destaque em diferentes segmentos.
## Requisitos
**Obrigatórios:**
- Ensino superior completo em Administração, Ciências Contábeis, Sistemas de Informações ou áreas afins;
- Experiência com implantação e parametrização de Software;
- Conhecimentos de processos administrativos, financeiros, contábeis, regras fiscais, tributárias, entre outros;
- Veículo próprio e disponibilidade para viagens;
- Residir em Salvador/BA.
## SANKHYA
Atuando em todo o mercado nacional desde 1989, a Sankhya Gestão de Negócios é uma das maiores empresas provedoras de soluções integradas de gestão corporativa (ERP) do Brasil. Veja alguns números:
- 29 unidades de negócio
- 10.000 clientes corporativos
- Mais de 100 mil usuários
- Mais de 1000 colaboradores diretos
As soluções da Sankhya preparam sua empresa para o futuro, transformando dados operacionais em informações gerenciais para uma tomada de decisão mais segura e precisa.
Por meio de intensa análise, conhecimento e identificação da real necessidade do cliente, a Sankhya dimensiona o conjunto de sistemas e serviços ideais para a gestão de cada empresa.
As soluções Sankhya foram desenvolvidas com estruturas modulares, flexíveis, customizáveis, totalmente WEB e mobile para facilitar a tomada de decisão e resultar em ganhos de produtividade e rentabilidade para a sua empresa.
## Como se candidatar
https://sankhya.gupy.io/jobs/55598?cid=16
**index:** 1.0
**text_combine:**
Analista de Implantação ERP na [SANKHYA] - <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Analista de Implantação ERP
Se você gosta de trabalhar com processos bem estruturados, a Sankhya é o seu lugar!
Venha fazer parte de uma empresa presente no mercado ERP há 30 anos, em constante evolução e com expectativa de crescimento de 40% em 2019.
**Responsabilidades**
- Realizar levantamento de processos nos clientes e sugestão de melhorias quando cabíveis
- Parametrizar o sistema conforme a definição de processos
- Simular e homologar as rotinas implantadas com os usuários nos clientes
- Conduzir treinamentos aos usuários dos clientes
- Acompanhar a colocação do sistema em produção e do desempenho após implantação
- Contribuir com a aplicação das melhores práticas de gestão aplicadas no mercado/segmento de atuação do cliente, com o objetivo de apoiá-lo na evolução do seu negócio, garantindo que os benefícios da solução sejam perceptíveis
## Local
Salvador - Bahia
## Benefícios
- Plano de Saúde
- Plano Odontológico
- Universidade Corporativa
- Vale Alimentação ou Refeição
- Reembolso de KM / Despesas de viagens
#### Diferenciais
Na Sankhya você trabalhará com um dos produtos mais inovadores do mercado ERP! Terá oportunidades de desenvolvimento profissional e experiência abrangente em diversas áreas de negócio, atendendo clientes de destaque em diferentes segmentos.
## Requisitos
**Obrigatórios:**
- Ensino superior completo em Administração, Ciências Contábeis, Sistemas de Informações ou áreas afins;
- Experiência com implantação e parametrização de Software;
- Conhecimentos de processos administrativos, financeiros, contábeis, regras fiscais, tributárias, entre outros;
- Veículo próprio e disponibilidade para viagens;
- Residir em Salvador/BA.
## SANKHYA
Atuando em todo o mercado nacional desde 1989, a Sankhya Gestão de Negócios é uma das maiores empresas provedoras de soluções integradas de gestão corporativa (ERP) do Brasil. Veja alguns números:
- 29 unidades de negócio
- 10.000 clientes corporativos
- Mais de 100 mil usuários
- Mais de 1000 colaboradores diretos
As soluções da Sankhya preparam sua empresa para o futuro, transformando dados operacionais em informações gerenciais para uma tomada de decisão mais segura e precisa.
Por meio de intensa análise, conhecimento e identificação da real necessidade do cliente, a Sankhya dimensiona o conjunto de sistemas e serviços ideais para a gestão de cada empresa.
As soluções Sankhya foram desenvolvidas com estruturas modulares, flexíveis, customizáveis, totalmente WEB e mobile para facilitar a tomada de decisão e resultar em ganhos de produtividade e rentabilidade para a sua empresa.
## Como se candidatar
https://sankhya.gupy.io/jobs/55598?cid=16
**label:** process
**text:**
analista de implantação erp na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na analista de implantação erp se você gosta de trabalhar com processos bem estruturados a sankhya é o seu lugar venha fazer parte de uma empresa presente no mercado erp há anos em constante evolução e com expectativa de crescimento de em responsabilidades realizar levantamento de processos nos clientes e sugestão de melhorias quando cabíveis parametrizar o sistema conforme a definição de processos simular e homologar as rotinas implantadas com os usuários nos clientes conduzir treinamentos aos usuários dos clientes acompanhar a colocação do sistema em produção e do desempenho após implantação contribuir com a aplicação das melhores práticas de gestão aplicadas no mercado segmento de atuação do cliente com o objetivo de apoiá lo na evolução do seu negócio garantindo que os benefícios da solução sejam perceptíveis local salvador bahia benefícios plano de saúde plano odontológico universidade corporativa vale alimentação ou refeição reembolso de km despesas de viagens diferenciais na sankhya você trabalhará com um dos produtos mais inovadores do mercado erp terá oportunidades de desenvolvimento profissional e experiência abrangente em diversas áreas de negócio atendendo clientes de destaque em diferentes segmentos requisitos obrigatórios ensino superior completo em administração ciências contábeis sistemas de informações ou áreas afins experiência com implantação e parametrização de software conhecimentos de processos administrativos financeiros contábeis regras fiscais tributárias entre outros veículo próprio e disponibilidade para viagens residir em salvador ba sankhya atuando em todo o mercado nacional desde a sankhya gestão de negócios é uma das maiores empresas provedoras de soluções integradas de gestão corporativa erp do brasil veja alguns números unidades de negócio clientes corporativos mais de mil usuários mais de colaboradores diretos as soluções da sankhya preparam sua empresa para o futuro transformando dados operacionais em informações gerenciais para uma tomada de decisão mais segura e precisa por meio de intensa análise conhecimento e identificação da real necessidade do cliente a sankhya dimensiona o conjunto de sistemas e serviços ideais para a gestão de cada empresa as soluções sankhya foram desenvolvidas com estruturas modulares flexíveis customizáveis totalmente web e mobile para facilitar a tomada de decisão e resultar em ganhos de produtividade e rentabilidade para a sua empresa como se candidatar
**binary_label:** 1
***
**Unnamed: 0:** 7,674 · **id:** 10,760,747,784 · **type:** IssuesEvent · **created_at:** 2019-10-31 19:15:46
**repo:** microsoft/ptvsd · **repo_url:** https://api.github.com/repos/microsoft/ptvsd · **action:** closed
**title:** "pydevdSystemInfo" does not report "ppid" on Python 2.7 on Windows
**labels:** Bug Upstream-pydevd area:Multiprocessing
**body:**
`os.getppid()` was not available on Windows until Python 3.2. Pydevd catches any exceptions from that call, and omits the property from JSON, so this doesn't crash the debugger. But because the adapter doesn't receive "ppid" for the incoming server connection, it doesn't know which of the existing debug sessions is the parent, and doesn't send the subprocess notification to any of them. This effectively breaks multiproc debugging on 2.7.
**index:** 1.0
**text_combine:**
"pydevdSystemInfo" does not report "ppid" on Python 2.7 on Windows - `os.getppid()` was not available on Windows until Python 3.2. Pydevd catches any exceptions from that call, and omits the property from JSON, so this doesn't crash the debugger. But because the adapter doesn't receive "ppid" for the incoming server connection, it doesn't know which of the existing debug sessions is the parent, and doesn't send the subprocess notification to any of them. This effectively breaks multiproc debugging on 2.7.
**label:** process
**text:**
pydevdsysteminfo does not report ppid on python on windows os getppid was not available on windows until python pydevd catches any exceptions from that call and omits the property from json so this doesn t crash the debugger but because the adapter doesn t receive ppid for the incoming server connection it doesn t know which of the existing debug sessions is the parent and doesn t send the subprocess notification to any of them this effectively breaks multiproc debugging on
**binary_label:** 1
***
**Unnamed: 0:** 725,598 · **id:** 24,967,681,082 · **type:** IssuesEvent · **created_at:** 2022-11-01 20:56:45
**repo:** FlutterFlow/flutterflow-issues · **repo_url:** https://api.github.com/repos/FlutterFlow/flutterflow-issues · **action:** opened
**title:** AudioPlayer not working on web with deep linking enabled for asset audios
**labels:** status: confirmed priority: medium
**body:**
Issue tracker is **ONLY** used for reporting bugs. New feature suggestions and questions should be discussed on Community or submitted through our user feedback form.
Your issue may already be reported! Please search in the [issue tracker](../) before creating one.
Please **thumbs up** this issue if you have also experienced it. You may also add more information if there is something relevant that was not mentioned. However, please refrain from comments that are not constructive, like "I have this problem too", etc.
## Expected behavior (required)
<!-- A clear and concise description of what you expected to happen. -->
Audio player loads "asset" audios.
## Current behavior (required)
<!-- What happens instead of the expected behavior. -->
Audio player can't play asset audios on web in the scenario below.
## To Reproduce (required)
<!-- Please be detailed as possible here so we can help diagnose the issue. Issues cannot be accepted if they are too vague. For example, "project fails to build" could be better reported as:
1. Create new page
2. Add container widget
3. Set width = 123
4. Click Run
5. Observe that project doesn’t build
Code can be included in this section if it is relevant to reproducing the bug.
-->
Steps to reproduce the behavior:
1. Create new project
2. Add new page, e.g. Page2
3. Add link from HomePage to Page2.
4. Add an audio to project assets.
5. On Page2, create an audio player.
6. Select audio type = Asset and the uploaded audio as Asset Audio.
7. Publish to web.
8. Open the published link.
9. Navigate from HomePage to Page2.
10. Observe that the audio widget does not show.
11. Open browser dev tools/network.
12. Observe that it tried to load the audio from `https://<project>/page2/assets/assets/audios/<filename>.mp3`. Expected: `https://<project>.flutterflow.app/assets/assets/audios/<filename>.mp3`
Project: https://app.flutterflow.io/project/audio-asset-63344x
## Context (required)
<!-- How has this issue affected you? What are you trying to accomplish? -->
Can't use audio player on web.
## Screenshots / recordings
<!-- If applicable, add screenshots to help explain your problem. -->
N/A
## Your environment
<!--- Include relevant details about the environment you experienced the bug in -->
* **Bug Report Code:**
* Version of FlutterFlow used: FlutterFlow v3.0, released October 28, 2022
* Platform (e.g. Web, MacOS Desktop): Web
* Browser name and version: Chrome, Version 107.0.5304.87 (Official Build) (x86_64)
* Operating system and version (desktop or mobile): MacOS 12.5.1 Monterey.
**index:** 1.0
**text_combine:**
AudioPlayer not working on web with deep linking enabled for asset audios - Issue tracker is **ONLY** used for reporting bugs. New feature suggestions and questions should be discussed on Community or submitted through our user feedback form.
Your issue may already be reported! Please search in the [issue tracker](../) before creating one.
Please **thumbs up** this issue if you have also experienced it. You may also add more information if there is something relevant that was not mentioned. However, please refrain from comments that are not constructive, like "I have this problem too", etc.
## Expected behavior (required)
<!-- A clear and concise description of what you expected to happen. -->
Audio player loads "asset" audios.
## Current behavior (required)
<!-- What happens instead of the expected behavior. -->
Audio player can't play asset audios on web in the scenario below.
## To Reproduce (required)
<!-- Please be detailed as possible here so we can help diagnose the issue. Issues cannot be accepted if they are too vague. For example, "project fails to build" could be better reported as:
1. Create new page
2. Add container widget
3. Set width = 123
4. Click Run
5. Observe that project doesn’t build
Code can be included in this section if it is relevant to reproducing the bug.
-->
Steps to reproduce the behavior:
1. Create new project
2. Add new page, e.g. Page2
3. Add link from HomePage to Page2.
4. Add an audio to project assets.
5. On Page2, create an audio player.
6. Select audio type = Asset and the uploaded audio as Asset Audio.
7. Publish to web.
8. Open the published link.
9. Navigate from HomePage to Page2.
10. Observe that the audio widget does not show.
11. Open browser dev tools/network.
12. Observe that it tried to load the audio from `https://<project>/page2/assets/assets/audios/<filename>.mp3`. Expected: `https://<project>.flutterflow.app/assets/assets/audios/<filename>.mp3`
Project: https://app.flutterflow.io/project/audio-asset-63344x
## Context (required)
<!-- How has this issue affected you? What are you trying to accomplish? -->
Can't use audio player on web.
## Screenshots / recordings
<!-- If applicable, add screenshots to help explain your problem. -->
N/A
## Your environment
<!--- Include relevant details about the environment you experienced the bug in -->
* **Bug Report Code:**
* Version of FlutterFlow used: FlutterFlow v3.0, released October 28, 2022
* Platform (e.g. Web, MacOS Desktop): Web
* Browser name and version: Chrome, Version 107.0.5304.87 (Official Build) (x86_64)
* Operating system and version (desktop or mobile): MacOS 12.5.1 Monterey.
**label:** non_process
**text:**
audioplayer not working on web with deep linking enabled for asset audios issue tracker is only used for reporting bugs new feature suggestions and questions should be discussed on community or submitted through our user feedback form your issue may already be reported please search in the before creating one please thumbs up this issue if you have also experienced it you may also add more information if there is something relevant that was not mentioned however please refrain from comments that are not constructive like i have this problem too etc expected behavior required audio player loads asset audios current behavior required audio player can t play asset audios on web in the scenario below to reproduce required please be detailed as possible here so we can help diagnose the issue issues cannot be accepted if they are too vague for example project fails to build could be better reported as create new page add container widget set width click run observe that project doesn’t build code can be included in this section if it is relevant to reproducing the bug steps to reproduce the behavior create new project add new page e g add link from homepage to add an audio to project assets on create an audio player select audio type asset and the uploaded audio as asset audio publish to web open the published link navigate from homepage to observe that the audio widget does not show open browser dev tools network observe that it tried to load the audio from expected project context required can t use audio player on web screenshots recordings n a your environment bug report code version of flutterflow used flutterflow released october platform e g web macos desktop web browser name and version chrome version official build operating system and version desktop or mobile macos monterey
**binary_label:** 0
***
**Unnamed: 0:** 15,331 · **id:** 19,457,021,531 · **type:** IssuesEvent · **created_at:** 2021-12-23 00:55:22
**repo:** googleapis/release-please · **repo_url:** https://api.github.com/repos/googleapis/release-please · **action:** closed
**title:** process: add integration test for regression in #594
**labels:** type: process
**body:**
## Problem
We broke pagination logic when merging #587.
A fix has already been landed [here](https://github.com/googleapis/release-please/pull/594).
## Why this is less than ideal
We should add an integration test that actually exercises our graphql queries (unit tests did not catch this issue, because it's an exception thrown by GitHub).
**index:** 1.0
**text_combine:**
process: add integration test for regression in #594 - ## Problem
We broke pagination logic when merging #587.
A fix has already been landed [here](https://github.com/googleapis/release-please/pull/594).
## Why this is less than ideal
We should add an integration test that actually exercises our graphql queries (unit tests did not catch this issue, because it's an exception thrown by GitHub).
**label:** process
**text:**
process add integration test for regression in problem we broke pagination logic when merging a fix has already been landed why this is less than ideal we should add an integration test that actually exercises our graphql queries unit tests did not catch this issue because it s an exception thrown by github
**binary_label:** 1
***
**Unnamed: 0:** 173,505 · **id:** 6,525,791,908 · **type:** IssuesEvent · **created_at:** 2017-08-29 17:09:53
**repo:** wordpress-mobile/AztecEditor-Android · **repo_url:** https://api.github.com/repos/wordpress-mobile/AztecEditor-Android · **action:** closed
**title:** Non-latin characters are copied as HTML-encoded entities
**labels:** bug high priority
**body:**
Reported by @rachelmcr:
> When copying non-Latin text from the beta editor and pasting it elsewhere, the text is encoded. I reproduced this by copying Japanese text from Aztec and pasting it into Keep; the HTML encoded text is what appears in Keep.
The same happens for emoji, in both the main text editor and the title field.
For example:
> Êtest 😊😃
> 测试一个
becomes:
`Êtest 😊😃<br>测试一个`
**index:** 1.0
**text_combine:**
Non-latin characters are copied as HTML-encoded entities - Reported by @rachelmcr:
> When copying non-Latin text from the beta editor and pasting it elsewhere, the text is encoded. I reproduced this by copying Japanese text from Aztec and pasting it into Keep; the HTML encoded text is what appears in Keep.
The same happens for emoji, in both the main text editor and the title field.
For example:
> Êtest 😊😃
> 测试一个
becomes:
`Êtest 😊😃<br>测试一个`
**label:** non_process
**text:**
non latin characters are copied as html encoded entities reported by rachelmcr when copying non latin text from the beta editor and pasting it elsewhere the text is encoded i reproduced this by copying japanese text from aztec and pasting it into keep the html encoded text is what appears in keep the same happens for emoji in both the main text editor and the title field for example êtest 😊😃 测试一个 becomes test
**binary_label:** 0
***
**Unnamed: 0:** 9,912 · **id:** 12,952,554,155 · **type:** IssuesEvent · **created_at:** 2020-07-19 20:59:28
**repo:** pb866/TrackMatcher.jl · **repo_url:** https://api.github.com/repos/pb866/TrackMatcher.jl · **action:** closed
**title:** Inconsistencies in FlightAware archive
**labels:** check data processing issue
**body:**
Column names vary slightly in different FlightAware archive files for different year. Make sure `TrackMatcher` works for all cases.
**index:** 1.0
**text_combine:**
Inconsistencies in FlightAware archive - Column names vary slightly in different FlightAware archive files for different year. Make sure `TrackMatcher` works for all cases.
**label:** process
**text:**
inconsistencies in flightaware archive column names vary slightly in different flightaware archive files for different year make sure trackmatcher works for all cases
**binary_label:** 1
***
**Unnamed: 0:** 1,670 · **id:** 4,308,536,426 · **type:** IssuesEvent · **created_at:** 2016-07-21 13:22:17
**repo:** paperjs/paper.js · **repo_url:** https://api.github.com/repos/paperjs/paper.js · **action:** closed
**title:** incoherent fill-rule used when building CompoundPath
**labels:** cat: path-processing pri: important type: bug
**body:**
The default fill-rule of SVG, 'nonzero', is respected when building a CompoundPath from a single SVG path:
```javascript
var p = new CompoundPath('m 51.428571,89.50504 94.285719,-20 20,91.42857 -102.857147,17.14286 z M 200,130.43865 l 22.85715,82.85714 -97.14286,28.57143 -17.14286,-114.28571 z');
p.fillColor = 'black';
p.windingRule = 'nonzero';
```
[See demo](http://sketch.paperjs.org/#S/VY89T8MwEIb/yslLiuRYtmM3dqtOnZEQjJTBTQyN4pyjkFIpiP+OA2bgtnveR/fxSdANnuzIU+/n5kIoaWK79if8cBOMcAD0NzjGYYxXbB/cfNkUA2jBlDS6FtRYprnmCqxiP8TSUnKQnNrsQCm4ZGukaipqJhLdwgL3SeJUVJypymw1BJC/mqYm+1Da7FNp2EqqNO0PlULknbAUd/sTQq6RvXYhHGOIU7q/OAfX9MW//NZh2+Hb4zX41cCIi59isU//nyfv+jF2OL+T3fPL1zc=).
But when trying to build it from the two sub-path, the result is totally different:
```javascript
var p = new CompoundPath({
fillColor: 'black',
children: [
new Path('m 20,35.219325 94.28572,-19.999999 20,91.428574 -102.857148,17.14285 z'),
new Path('M 168.57143,46.647895 85.714289,43.790753 54.285715,146.64789 137.14286,158.07646 z')
]
});
```
[See demo](http://sketch.paperjs.org/#S/bVBNa4NAEP0rw15sYDM4+72WnnIuFHpMcjBqUNRdsbaFhvz3upHe+k4zb94HzI2FcmxYwd77ZqlaxlkV67R/lTNM8AKh+YZDHKf4Geq3cmmfbqcAK67dMBziEOcCsstQVn3Gt0PVdkM9N6GA40YkpJSHOxtB5FxqFOSl0OAVCqet4Hvy6B9IAk+oEq9gT7nAdSLlOFmkRMNPtuP/hb8CGYdJLLkyaJR1XoPTaJPPcyXR+txqCXrrJc3pTwgkt3zDSTvMrVEmNW1F51O4757X/1zmpuyn2IXlgxXH8/0X) (the shape of the path isn't the problem here, only the way it's filled)
If the first CompoundPath is exported to JSON and then re-imported in a CompoundPath, the way it is filled will also be different. Before finding a fix for this problem, I'm pretty sure that there is a workaround to achieve the same fill result as CompoundPath built from a single svg path, without actually using an svg path.
|
1.0
|
incoherent fill-rule used when building CompoundPath - The default fill-rule of SVG, 'nonzero', is respected when building a CompoundPath from a single SVG path:
```javascript
var p = new CompoundPath('m 51.428571,89.50504 94.285719,-20 20,91.42857 -102.857147,17.14286 z M 200,130.43865 l 22.85715,82.85714 -97.14286,28.57143 -17.14286,-114.28571 z');
p.fillColor = 'black';
p.windingRule = 'nonzero';
```
[See demo](http://sketch.paperjs.org/#S/VY89T8MwEIb/yslLiuRYtmM3dqtOnZEQjJTBTQyN4pyjkFIpiP+OA2bgtnveR/fxSdANnuzIU+/n5kIoaWK79if8cBOMcAD0NzjGYYxXbB/cfNkUA2jBlDS6FtRYprnmCqxiP8TSUnKQnNrsQCm4ZGukaipqJhLdwgL3SeJUVJypymw1BJC/mqYm+1Da7FNp2EqqNO0PlULknbAUd/sTQq6RvXYhHGOIU7q/OAfX9MW//NZh2+Hb4zX41cCIi59isU//nyfv+jF2OL+T3fPL1zc=).
But when trying to build it from the two sub-path, the result is totally different:
```javascript
var p = new CompoundPath({
fillColor: 'black',
children: [
new Path('m 20,35.219325 94.28572,-19.999999 20,91.428574 -102.857148,17.14285 z'),
new Path('M 168.57143,46.647895 85.714289,43.790753 54.285715,146.64789 137.14286,158.07646 z')
]
});
```
[See demo](http://sketch.paperjs.org/#S/bVBNa4NAEP0rw15sYDM4+72WnnIuFHpMcjBqUNRdsbaFhvz3upHe+k4zb94HzI2FcmxYwd77ZqlaxlkV67R/lTNM8AKh+YZDHKf4Geq3cmmfbqcAK67dMBziEOcCsstQVn3Gt0PVdkM9N6GA40YkpJSHOxtB5FxqFOSl0OAVCqet4Hvy6B9IAk+oEq9gT7nAdSLlOFmkRMNPtuP/hb8CGYdJLLkyaJR1XoPTaJPPcyXR+txqCXrrJc3pTwgkt3zDSTvMrVEmNW1F51O4757X/1zmpuyn2IXlgxXH8/0X) (the shape of the path isn't the problem here, only the way it's filled)
If the first CompoundPath is exported to JSON and then re-imported in a CompoundPath, the way it is filled will also be different. Before finding a fix for this problem, I'm pretty sure that there is a workaround to achieve the same fill result as CompoundPath built from a single svg path, without actually using an svg path.
**label:** process
**text:**
incoherent fill rule used when building compoundpath the default fill rule of svg nonzero is respected when building a compoundpath from a single svg path javascript var p new compoundpath m z m l z p fillcolor black p windingrule nonzero but when trying to build it from the two sub path the result is totally different javascript var p new compoundpath fillcolor black children new path m z new path m z the shape of the path isn t the problem here only the way it s filled if the first compoundpath is exported to json and then re imported in a compoundpath the way it is filled will also be different before finding a fix for this problem i m pretty sure that there is a workaround to achieve the same fill result as compoundpath built from a single svg path without actually using an svg path
**binary_label:** 1
***
**Unnamed: 0:** 432,912 · **id:** 30,299,408,618 · **type:** IssuesEvent · **created_at:** 2023-07-10 03:54:20
**repo:** dotnet/runtime · **repo_url:** https://api.github.com/repos/dotnet/runtime · **action:** opened
**title:** Port System.Cloud documentation from .NET 8.0 APIs
**labels:** documentation blocking-release
**body:**
Below is the list of APIs that still show up as undocumented in dotnet-api-docs and were introduced in .NET 8.0.
Full porting instructions can be found in the [main issue](https://github.com/dotnet/runtime/issues/88561).
This task needs to be finished before the RC2 snap (September 18th).
| Summary | Parameters | TypeParameters | ReturnValue | API |
|----------|------------|----------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Missing | NA | NA | NA | [N:System.Cloud.DocumentDb](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/ns-System.Cloud.DocumentDb.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.DatabaseOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/DatabaseOptions.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.QueryRequestOptions1.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/QueryRequestOptions1.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.RegionalDatabaseOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/RegionalDatabaseOptions.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.RequestOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/RequestOptions.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.RequestOptions1.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/RequestOptions1.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.TableOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/TableOptions.xml) |
| Missing | NA | NA | NA | [N:System.Cloud.Messaging](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/ns-System.Cloud.Messaging.xml) |
**index:** 1.0
**text_combine:**
Port System.Cloud documentation from .NET 8.0 APIs - Below is the list of APIs that still show up as undocumented in dotnet-api-docs and were introduced in .NET 8.0.
Full porting instructions can be found in the [main issue](https://github.com/dotnet/runtime/issues/88561).
This task needs to be finished before the RC2 snap (September 18th).
| Summary | Parameters | TypeParameters | ReturnValue | API |
|----------|------------|----------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Missing | NA | NA | NA | [N:System.Cloud.DocumentDb](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/ns-System.Cloud.DocumentDb.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.DatabaseOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/DatabaseOptions.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.QueryRequestOptions1.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/QueryRequestOptions1.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.RegionalDatabaseOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/RegionalDatabaseOptions.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.RequestOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/RequestOptions.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.RequestOptions1.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/RequestOptions1.xml) |
| Missing | NA | NA | NA | [M:System.Cloud.DocumentDb.TableOptions.#ctor](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/System.Cloud.DocumentDb/TableOptions.xml) |
| Missing | NA | NA | NA | [N:System.Cloud.Messaging](https://github.com/dotnet/dotnet-api-docs/blob/main/xml/ns-System.Cloud.Messaging.xml) |
**label:** non_process
**text:**
port system cloud documentation from net apis below is the list of apis that still show up as undocumented in dotnet api docs and were introduced in net full porting instructions can be found in the this task needs to be finished before the snap september summary parameters typeparameters returnvalue api missing na na na missing na na na missing na na na missing na na na missing na na na missing na na na missing na na na missing na na na
**binary_label:** 0
***
**Unnamed: 0:** 16,922 · **id:** 22,267,822,230 · **type:** IssuesEvent · **created_at:** 2022-06-10 09:13:05
**repo:** camunda/zeebe · **repo_url:** https://api.github.com/repos/camunda/zeebe · **action:** closed
**title:** Collect metrics about started instances with start instructions through Zeebe Analytics
**labels:** team/process-automation
**body:**
Zeebe Analytics should be able to distinguish between process instances started at the root start event and process instances started at other elements. This will allow us to collect data on the usage of the Start Process Instance Anywhere feature.
Blocked by https://github.com/camunda/zeebe/issues/9390
Possibly blocked by https://github.com/camunda/zeebe/issues/9397 and https://github.com/camunda/zeebe/issues/9398 for testing
**index:** 1.0
**text_combine:**
Collect metrics about started instances with start instructions through Zeebe Analytics - Zeebe Analytics should be able to distinguish between process instances started at the root start event and process instances started at other elements. This will allow us to collect data on the usage of the Start Process Instance Anywhere feature.
Blocked by https://github.com/camunda/zeebe/issues/9390
Possibly blocked by https://github.com/camunda/zeebe/issues/9397 and https://github.com/camunda/zeebe/issues/9398 for testing
**label:** process
**text:**
collect metrics about started instances with start instructions through zeebe analytics zeebe analytics should be able to distinguish between process instances started at the root start event and process instances started at other elements this will allow us to collect data on the usage of the start process instance anywhere feature blocked by possibly blocked by and for testing
**binary_label:** 1
***
**Unnamed: 0:** 15,688 · **id:** 19,847,971,243 · **type:** IssuesEvent · **created_at:** 2022-01-21 09:04:39
**repo:** googleapis/java-shell · **repo_url:** https://api.github.com/repos/googleapis/java-shell · **action:** closed
**title:** Your .repo-metadata.json file has a problem 🤒
**labels:** type: process api: cloudshell repo-metadata: lint
**body:**
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'shell' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
**index:** 1.0
**text_combine:**
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'shell' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
**label:** process
**text:**
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname shell invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
**binary_label:** 1
***
**Unnamed: 0:** 408,844 · **id:** 11,953,174,184 · **type:** IssuesEvent · **created_at:** 2020-04-03 20:21:05
**repo:** forseti-security/forseti-security · **repo_url:** https://api.github.com/repos/forseti-security/forseti-security · **action:** closed
**title:** [PATCH] Forseti v2.25.1
**labels:** module: inventory priority: p0 triaged: yes
**body:**
Patch Forseti v2.25.0 to fix method calls for organization policies (Issue #3709).
Related to terraform-google-forseti issue [#561](https://github.com/forseti-security/terraform-google-forseti/issues/561)
**index:** 1.0
**text_combine:**
[PATCH] Forseti v2.25.1 - Patch Forseti v2.25.0 to fix method calls for organization policies (Issue #3709).
Related to terraform-google-forseti issue [#561](https://github.com/forseti-security/terraform-google-forseti/issues/561)
**label:** non_process
**text:**
forseti patch forseti to fix method calls for organization policies issue related to terraform google forseti issue
**binary_label:** 0
***
**Unnamed: 0:** 539,849 · **id:** 15,795,828,228 · **type:** IssuesEvent · **created_at:** 2021-04-02 13:51:53
**repo:** pioneers/runtime · **repo_url:** https://api.github.com/repos/pioneers/runtime · **action:** opened
**title:** [NET_HANDLER] Work with Shepherd to implement hearbeat
**labels:** enhancement high-priority
**body:**
Shepherd currently has some trouble detecting robot disconnects. They rely on Runtime cleanly exiting in the event of an emergency of some sort, and closing the server-side port. However, in some of these edge cases, Runtime isn't really able to close that socket well (ex. networking error / disconnect, robot shutdown because of low voltage, etc.). It would be easier to simply implement a heartbeat that is sent from Runtime to Shepherd periodically (maybe once per second or so) which would let Shepherd know that the robot is still alive and well.
**index:** 1.0
**text_combine:**
[NET_HANDLER] Work with Shepherd to implement hearbeat - Shepherd currently has some trouble detecting robot disconnects. They rely on Runtime cleanly exiting in the event of an emergency of some sort, and closing the server-side port. However, in some of these edge cases, Runtime isn't really able to close that socket well (ex. networking error / disconnect, robot shutdown because of low voltage, etc.). It would be easier to simply implement a heartbeat that is sent from Runtime to Shepherd periodically (maybe once per second or so) which would let Shepherd know that the robot is still alive and well.
**label:** non_process
**text:**
work with shepherd to implement hearbeat shepherd currently has some trouble detecting robot disconnects they rely on runtime cleanly exiting in the event of an emergency of some sort and closing the server side port however in some of these edge cases runtime isn t really able to close that socket well ex networking error disconnect robot shutdown because of low voltage etc it would be easier to simply implement a heartbeat that is sent from runtime to shepherd periodically maybe once per second or so which would let shepherd know that the robot is still alive and well
**binary_label:** 0
***
**Unnamed: 0:** 8,911 · **id:** 12,016,244,721 · **type:** IssuesEvent · **created_at:** 2020-04-10 15:41:46
**repo:** dotnet/runtime · **repo_url:** https://api.github.com/repos/dotnet/runtime · **action:** closed
**title:** ServiceController tests fail to start test service
**labels:** area-System.ServiceProcess untriaged
**body:**
These tests verify ServiceBase and ServiceController but are rarely run because you need to specify outerloop AND you have to be elevated. They currently are failing for me -- I tried two machines.
```
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests>msbuild /t:rebuildandtest /p:outerloop=true
...
...
Discovering: System.ServiceProcess.ServiceController.Tests (method display = ClassAndMethod, method display options
= None)
Discovered: System.ServiceProcess.ServiceController.Tests (found 36 of 37 test cases)
Starting: System.ServiceProcess.ServiceController.Tests (parallel test collections = on, max threads = 4)...
System.InvalidOperationException : Cannot start service '4cb85dca-edbf-430c-afe2-61f5d782f266.Dependent' on com
puter '.'.
---- System.ComponentModel.Win32Exception : Access is denied.
Stack Trace:
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\src\System\ServiceProcess\ServiceControl
ler.cs(844,0): at System.ServiceProcess.ServiceController.Start(String[] args)
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\src\System\ServiceProcess\ServiceControl
ler.cs(804,0): at System.ServiceProcess.ServiceController.Start()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceContr
oller.TestService\TestServiceInstaller.cs(106,0): at System.ServiceProcess.Tests.TestServiceInstaller.Install()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\TestServiceProvider.cs(123,0): at
System.ServiceProcess.Tests.TestServiceProvider.CreateTestServices()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\TestServiceProvider.cs(77,0): at S
ystem.ServiceProcess.Tests.TestServiceProvider..ctor(String serviceName, String userName)
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\TestServiceProvider.cs(62,0): at S
ystem.ServiceProcess.Tests.TestServiceProvider..ctor()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\ServiceControllerTests.cs(24,0): a
t System.ServiceProcess.Tests.ServiceControllerTests..ctor()
----- Inner Stack Trace -----
```
This is definitely an elevated prompt. I tried `Debugger.Launch()` in the TestService constructor, and it does not fire. I looked at the Security event log, and it does not have any "failed" messages. "net start <servicename>" gives access denied as well. So it's not the test harness.
@Anipik do you repro this?
**index:** 1.0
**text_combine:**
ServiceController tests fail to start test service - These tests verify ServiceBase and ServiceController but are rarely run because you need to specify outerloop AND you have to be elevated. They currently are failing for me -- I tried two machines.
```
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests>msbuild /t:rebuildandtest /p:outerloop=true
...
...
Discovering: System.ServiceProcess.ServiceController.Tests (method display = ClassAndMethod, method display options
= None)
Discovered: System.ServiceProcess.ServiceController.Tests (found 36 of 37 test cases)
Starting: System.ServiceProcess.ServiceController.Tests (parallel test collections = on, max threads = 4)...
System.InvalidOperationException : Cannot start service '4cb85dca-edbf-430c-afe2-61f5d782f266.Dependent' on com
puter '.'.
---- System.ComponentModel.Win32Exception : Access is denied.
Stack Trace:
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\src\System\ServiceProcess\ServiceControl
ler.cs(844,0): at System.ServiceProcess.ServiceController.Start(String[] args)
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\src\System\ServiceProcess\ServiceControl
ler.cs(804,0): at System.ServiceProcess.ServiceController.Start()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceContr
oller.TestService\TestServiceInstaller.cs(106,0): at System.ServiceProcess.Tests.TestServiceInstaller.Install()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\TestServiceProvider.cs(123,0): at
System.ServiceProcess.Tests.TestServiceProvider.CreateTestServices()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\TestServiceProvider.cs(77,0): at S
ystem.ServiceProcess.Tests.TestServiceProvider..ctor(String serviceName, String userName)
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\TestServiceProvider.cs(62,0): at S
ystem.ServiceProcess.Tests.TestServiceProvider..ctor()
C:\git\runtime\src\libraries\System.ServiceProcess.ServiceController\tests\ServiceControllerTests.cs(24,0): a
t System.ServiceProcess.Tests.ServiceControllerTests..ctor()
----- Inner Stack Trace -----
```
This is definitely an elevated prompt. I tried `Debugger.Launch()` in the TestService constructor, and it does not fire. I looked at the Security event log, and it does not have any "failed" messages. "net start <servicename>" gives access denied as well. So it's not the test harness.
@Anipik do you repro this?
**label:** process
**text:**
servicecontroller tests fail to start test service these tests verify servicebase and servicecontroller but are rarely run because you need to specify outerloop and you have to be elevated they currently are failing for me i tried two machines c git runtime src libraries system serviceprocess servicecontroller tests msbuild t rebuildandtest p outerloop true discovering system serviceprocess servicecontroller tests method display classandmethod method display options none discovered system serviceprocess servicecontroller tests found of test cases starting system serviceprocess servicecontroller tests parallel test collections on max threads system invalidoperationexception cannot start service edbf dependent on com puter system componentmodel access is denied stack trace c git runtime src libraries system serviceprocess servicecontroller src system serviceprocess servicecontrol ler cs at system serviceprocess servicecontroller start string args c git runtime src libraries system serviceprocess servicecontroller src system serviceprocess servicecontrol ler cs at system serviceprocess servicecontroller start c git runtime src libraries system serviceprocess servicecontroller tests system serviceprocess servicecontr oller testservice testserviceinstaller cs at system serviceprocess tests testserviceinstaller install c git runtime src libraries system serviceprocess servicecontroller tests testserviceprovider cs at system serviceprocess tests testserviceprovider createtestservices c git runtime src libraries system serviceprocess servicecontroller tests testserviceprovider cs at s ystem serviceprocess tests testserviceprovider ctor string servicename string username c git runtime src libraries system serviceprocess servicecontroller tests testserviceprovider cs at s ystem serviceprocess tests testserviceprovider ctor c git runtime src libraries system serviceprocess servicecontroller tests servicecontrollertests cs a t system serviceprocess tests servicecontrollertests ctor inner stack trace this is definitely an elevated prompt i tried debugger launch in the testservice constructor and it does not fire i looked at the security event log and it does not have any failed messages net start gives access denied as well so it s not the test harness anipik do you repro this
**binary_label:** 1
***
**Unnamed: 0:** 40,580 · **id:** 12,799,575,817 · **type:** IssuesEvent · **created_at:** 2020-07-02 15:36:31
**repo:** TreyM-WSS/WhiteSource-Demo · **repo_url:** https://api.github.com/repos/TreyM-WSS/WhiteSource-Demo · **action:** opened
**title:** CVE-2020-8840 (High) detected in jackson-databind-2.8.1.jar
**labels:** security vulnerability
**body:**
## CVE-2020-8840 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/WhiteSource-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/TreyM-WSS/WhiteSource-Demo/commits/75659f691fb82d67ecd666ba6076394defeb92d0">75659f691fb82d67ecd666ba6076394defeb92d0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.0.0 through 2.9.10.2 lacks certain xbean-reflect/JNDI blocking, as demonstrated by org.apache.xbean.propertyeditor.JndiConverter.
<p>Publish Date: 2020-02-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8840>CVE-2020-8840</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2620">https://github.com/FasterXML/jackson-databind/issues/2620</a></p>
<p>Release Date: 2020-02-10</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.3"}],"vulnerabilityIdentifier":"CVE-2020-8840","vulnerabilityDetails":"FasterXML jackson-databind 2.0.0 through 2.9.10.2 lacks certain xbean-reflect/JNDI blocking, as demonstrated by org.apache.xbean.propertyeditor.JndiConverter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8840","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
**index:** True
**text_combine:**
CVE-2020-8840 (High) detected in jackson-databind-2.8.1.jar - ## CVE-2020-8840 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/WhiteSource-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/TreyM-WSS/WhiteSource-Demo/commits/75659f691fb82d67ecd666ba6076394defeb92d0">75659f691fb82d67ecd666ba6076394defeb92d0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.0.0 through 2.9.10.2 lacks certain xbean-reflect/JNDI blocking, as demonstrated by org.apache.xbean.propertyeditor.JndiConverter.
<p>Publish Date: 2020-02-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8840>CVE-2020-8840</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2620">https://github.com/FasterXML/jackson-databind/issues/2620</a></p>
<p>Release Date: 2020-02-10</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.3"}],"vulnerabilityIdentifier":"CVE-2020-8840","vulnerabilityDetails":"FasterXML jackson-databind 2.0.0 through 2.9.10.2 lacks certain xbean-reflect/JNDI blocking, as demonstrated by org.apache.xbean.propertyeditor.JndiConverter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8840","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm whitesource demo pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind through lacks certain xbean reflect jndi blocking as demonstrated by org apache xbean propertyeditor jndiconverter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind through lacks certain xbean reflect jndi blocking as demonstrated by org apache xbean propertyeditor jndiconverter vulnerabilityurl
| 0
|
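An aside on the record above: gadget CVEs of this class in jackson-databind only bite when polymorphic ("default") typing lets the payload pick the class to instantiate. The sketch below is not the suggested fix (that is the 2.9.10.3 upgrade noted in the record); it is a hedged illustration, with an invented `Point` type, of binding to a concrete class so attacker-supplied class names such as org.apache.xbean.propertyeditor.JndiConverter are never resolved.
```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class SafeBindingSketch {

    // Hypothetical target type; binding to a concrete class means the
    // deserializer never resolves class names embedded in the payload.
    public static class Point {
        public int x;
        public int y;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper(); // default typing stays disabled

        // Gadget payloads rely on polymorphic typing to instantiate classes
        // like org.apache.xbean.propertyeditor.JndiConverter; with a fixed
        // target type there is no type resolution for them to abuse.
        Point p = mapper.readValue("{\"x\":1,\"y\":2}", Point.class);
        System.out.println(p.x + "," + p.y);
    }
}
```
Upgrading as suggested additionally blocklists this particular gadget even in code paths that do enable default typing.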
14,006
| 16,813,627,275
|
IssuesEvent
|
2021-06-17 03:18:13
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Data defined parameter not taken into account when editing features "in-place"
|
Bug Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
Take for example "densify geometries given an interval".
If I run this algorithm on a layer with a size attribute, setting the interval as data-defined by that attribute and writing the output to a new layer, the data-defined parameter is taken into account.
Conversely, if I modify the features "in-place", the data-defined interval is not taken into account.
Input layer

Output to a new layer

Edit "in-place"

**How to Reproduce**
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome -->
1. Download [lignes_contraintes.zip](https://github.com/qgis/QGIS/files/6664102/lignes_contraintes.zip) and load it in QGIS
2. Enable Edit "in-place"

3. Run alg "densify geometries given an interval"
4. Set the attribute "taille" as the data-defined value for Interval

5. The distance between vertices is 1 meter (the algorithm's default value) instead of 2 or 3 meters.
You can run the same algorithm with output to a new layer and see that the data-defined value for the interval is taken into account.
**QGIS and OS versions**
3.18.3 OsGeo4W testing
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
I observe a similar issue for at least 2 or 3 other algorithms, so it seems not specific to "densify geometries given an interval" but rather a general issue when editing features "in-place".
<!-- Add any other context about the problem here. -->
|
1.0
|
Data defined parameter not taken into account when editing features "in-place" - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
Take for example "densify geometries given an interval".
If I run this algorithm on a layer with a size attribute, setting the interval as data-defined by that attribute and writing the output to a new layer, the data-defined parameter is taken into account.
Conversely, if I modify the features "in-place", the data-defined interval is not taken into account.
Input layer

Output to a new layer

Edit "in-place"

**How to Reproduce**
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome -->
1. Download [lignes_contraintes.zip](https://github.com/qgis/QGIS/files/6664102/lignes_contraintes.zip) and load it in QGIS
2. Enable Edit "in-place"

3. Run alg "densify geometries given an interval"
4. Set the attribute "taille" as the data-defined value for Interval

5. The distance between vertices is 1 meter (the algorithm's default value) instead of 2 or 3 meters.
You can run the same algorithm with output to a new layer and see that the data-defined value for the interval is taken into account.
**QGIS and OS versions**
3.18.3 OsGeo4W testing
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
I observe a similar issue for at least 2 or 3 other algorithms, so it seems not specific to "densify geometries given an interval" but rather a general issue when editing features "in-place".
<!-- Add any other context about the problem here. -->
|
process
|
data defined parameter not taken into account when editing features in place bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug take for example densify geometries given an interval if i run this alg on a layer with a size attribut and set interval as data defined by this attribut and output to a new layer data defined parameter is taken into account on the opposite if i modify the entities in place data defined interval is not taken into account input layer output to a new layer edit in place how to reproduce download and load it in qgis enable edit in place run alg densify geometries given an interval set attribut taille as data defined value for interval distance between vertices is meter default value of the alg instead of or meters you can run the same alg but output to a new layer and see that data defined value for interval is taken into account qgis and os versions testing about click in the table ctrl a and then ctrl c finally paste here additional context i observe a similar issue for at least or other algs so it seems not specific to densify geometries given an interval but a general issue when editing features in place
| 1
|
103,434
| 8,908,940,283
|
IssuesEvent
|
2019-01-18 03:21:24
|
xcat2/xcat2-task-management
|
https://api.github.com/repos/xcat2/xcat2-task-management
|
closed
|
Refine cases to reduce false errors
|
sprint2 test
|
In CI regression, we found that some cases failed because the test environment was not clean. Refine the following cases to reduce these false errors
```
rmdef_dynamic_group
lsdef_nics
```
|
1.0
|
Refine cases to reduce false errors - In CI regression, we found that some cases failed because the test environment was not clean. Refine the following cases to reduce these false errors
```
rmdef_dynamic_group
lsdef_nics
```
|
non_process
|
refine cases to reduce false errors in ci regression we found some cases failed because the environment is not clear refine following cases to reduce the false error rmdef dynamic group lsdef nics
| 0
|
10,814
| 13,609,290,269
|
IssuesEvent
|
2020-09-23 04:50:33
|
googleapis/java-mediatranslation
|
https://api.github.com/repos/googleapis/java-mediatranslation
|
closed
|
Dependency Dashboard
|
api: mediatranslation type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-mediatranslation-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-mediatranslation to v0.2.2
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-mediatranslation-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-mediatranslation to v0.2.2
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any chore deps update dependency com google cloud google cloud mediatranslation to check this box to trigger a request for renovate to run again on this repository
| 1
|
11,795
| 14,622,679,050
|
IssuesEvent
|
2020-12-23 01:01:42
|
ocaml-batteries-team/batteries-included
|
https://api.github.com/repos/ocaml-batteries-team/batteries-included
|
closed
|
Consider moving to ocaml-community?
|
development process
|
The [ocaml-community](https://github.com/ocaml-community/meta) has recently sprung up as a community group committed to adopting and maintaining OCaml software projects that are not being actively developed by their authors anymore.
I think that it is a natural question to wonder whether batteries-included should migrate to ocaml-community. From the start, batteries-included was designed as a non-hierarchical project built for and by "the ocaml community" at large. I think the people creating the ocaml-batteries-team organization had in mind the same sort of open-ended, non-hierarchical code ownership and evolution that ocaml-community now represents. In practice this vision did not manifest itself through adopting other repositories, but I would say that in a sense Batteries was a pre-github incarnation of the same ideas.
One difference with possibly other projects ending up in ocaml-community today is that Batteries still receives somewhat frequent active contributions; it is not in strict maintenance mode, updating only to fix severe bugs or support newer OCaml releases. However, it is also less active than it was at the beginning, and I think that there is a sense that, in the era of (1) opam making distributing smaller packages easier and (2) the compiler distribution's standard library breaking its decades-old stillness to move fast, the development of big monolithic standard-library overlays may be slowing down calmly and peacefully. (@c-cube's containers may be a counter-example, or maybe has just not reached its slow mode yet.)
If we decided to move to ocaml-community, and they accepted to take us in, the idea wouldn't be to stop accepting new features or code changes -- this could go on indefinitely -- or to free the current maintainers (@thizanne, @UnixJunkie and myself) from their maintenance responsibilities. I don't actually think that a lot would change in practice. So why am I suggesting to discuss this?
1. I like the name, ocaml-community. It actually reflects what we do better than ocaml-batteries-team.
2. If the ocaml-community people come up with better ways to do CI and builds, or renewed enthusiasm about replacing ocamlbuild+prefilter with dune+cppo, we can benefit from that -- or, more likely, face pressure to do the work to follow this better approach on our end.
3. It can help more people realize that batteries is a community-owned project, and possibly volunteer to help along with the maintenance.
(I've been feeling guilty about taking more than my usual two weeks to respond to a new OCaml release for 4.07.0, maybe having some more people willing to do this kind of work could help us be more reactive.)
|
1.0
|
Consider moving to ocaml-community? - The [ocaml-community](https://github.com/ocaml-community/meta) has recently sprung up as a community group committed to adopting and maintaining OCaml software projects that are not being actively developed by their authors anymore.
I think that it is a natural question to wonder whether batteries-included should migrate to ocaml-community. From the start, batteries-included was designed as a non-hierarchical project built for and by "the ocaml community" at large. I think the people creating the ocaml-batteries-team organization had in mind the same sort of open-ended, non-hierarchical code ownership and evolution that ocaml-community now represents. In practice this vision did not manifest itself through adopting other repositories, but I would say that in a sense Batteries was a pre-github incarnation of the same ideas.
One difference with possibly other projects ending up in ocaml-community today is that Batteries still receives somewhat frequent active contributions; it is not in strict maintenance mode, updating only to fix severe bugs or support newer OCaml releases. However, it is also less active than it was at the beginning, and I think that there is a sense that, in the era of (1) opam making distributing smaller packages easier and (2) the compiler distribution's standard library breaking its decades-old stillness to move fast, the development of big monolithic standard-library overlays may be slowing down calmly and peacefully. (@c-cube's containers may be a counter-example, or maybe has just not reached its slow mode yet.)
If we decided to move to ocaml-community, and they accepted to take us in, the idea wouldn't be to stop accepting new features or code changes -- this could go on indefinitely -- or to free the current maintainers (@thizanne, @UnixJunkie and myself) from their maintenance responsibilities. I don't actually think that a lot would change in practice. So why am I suggesting to discuss this?
1. I like the name, ocaml-community. It actually reflects what we do better than ocaml-batteries-team.
2. If the ocaml-community people come up with better ways to do CI and builds, or renewed enthusiasm about replacing ocamlbuild+prefilter with dune+cppo, we can benefit from that -- or, more likely, face pressure to do the work to follow this better approach on our end.
3. It can help more people realize that batteries is a community-owned project, and possibly volunteer to help along with the maintenance.
(I've been feeling guilty about taking more than my usual two weeks to respond to a new OCaml release for 4.07.0, maybe having some more people willing to do this kind of work could help us be more reactive.)
|
process
|
consider moving to ocaml community the has recently sprung up as a community group committed to adopting and maintaining ocaml software projects that are not being actively developed by their authors anymore i think that it is a natural question to wonder whether batteries included should migrate to ocaml community from the start batteries included was designed as a non hierarchical project built for and by the ocaml community at learge i think the people creating the ocaml batteries team organization had in mind the same sort of open ended non hierarchical code ownership and evolution that ocaml community now represents in practice this vision did not manifest itself through adopting other repositories but i would say that in a sense batteries was a pre github incarnation of the same ideas one difference with possibly other projects ending up in ocaml community today is that batteries still receives somewhat frequent active contributions it is not in strict maintenance mode updating only to fix severe bugs or support newer ocaml releases however it is also less active than it was at the beginning and i think that there is a sense that in the era of opam making distributing smaller packages easier and the compiler distribution s standard library breaking its decades old stillness to move fast the development of big monolithic standard library overlays may be slowing down calmly and peacefully c cube s containers may be a counter example or maybe has just not reached its slow mode yet if we decided to move to ocaml community and they accepted to take us in the idea wouldn t be to stop accepting new features or code changes this could go on indefinitely or to free the current maintainers thizanne unixjunkie and myself from their maintenance responsibilities i don t actually think that a lot would change in practice so why am i suggesting to discuss this i like the name ocaml community it actually reflects what we do better than ocaml batteries team if the ocaml community people come up with better ways to do ci build or renewed enthusiasm about replacing ocamlbuild prefilter with dune cppo we can benefit from that or more likely be faced pressure to do the work to follow this better approach on our end it can help more people realize that batteries is a community owned project and possibly volunteer to help along with the maintenance i ve been feeling guilty about taking more than my usual two weeks to respond to a new ocaml release for maybe having some more people willing to do this kind of work could help us be more reactive
| 1
|
22,652
| 31,895,827,711
|
IssuesEvent
|
2023-09-18 01:31:58
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - latestEonOrHighestEonothem
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_latestEonOrHighestEonothem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): latestEonOrHighestEonothem
* Term label (English, not normative): Latest Eon Or Highest Eonothem
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the latest possible geochronologic eon or highest chrono-stratigraphic eonothem or the informal name ("Precambrian") attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Phanerozoic, Proterozoic
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - latestEonOrHighestEonothem - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_latestEonOrHighestEonothem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): latestEonOrHighestEonothem
* Term label (English, not normative): Latest Eon Or Highest Eonothem
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the latest possible geochronologic eon or highest chrono-stratigraphic eonothem or the informal name ("Precambrian") attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Phanerozoic, Proterozoic
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term latesteonorhighesteonothem term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes latesteonorhighesteonothem term label english not normative latest eon or highest eonothem organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the latest possible geochronologic eon or highest chrono stratigraphic eonothem or the informal name precambrian attributable to the stratigraphic horizon from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative phanerozoic proterozoic refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
20,894
| 11,007,457,106
|
IssuesEvent
|
2019-12-04 08:32:38
|
tari-project/tari
|
https://api.github.com/repos/tari-project/tari
|
closed
|
Evaluate bottlenecks in comms layer and improve performance
|
performance
|
Some suggested approaches here, but you'll think of more/better ones:
- [x] Add round-trip time measurements to pingpong service
- [ ] Complete the spambot example app (#444) and use results to track real-world performance.
- [x] Make sure log message coverage is sufficient.
- [ ] Compare performance in debug and release mode
- [ ] Execute tests / example apps with a profiling tool.
- [ ] Look for areas we might be using anti-patterns / non-idiomatic Rust ([this post has some great insights](https://medium.com/@polyglot_factotum/rust-concurrency-patterns-communicate-by-sharing-your-sender-re-visited-9d42e6dfecfa))
Rinse. Repeat.
|
True
|
Evaluate bottlenecks in comms layer and improve performance - Some suggested approaches here, but you'll think of more/better ones:
- [x] Add round-trip time measurements to pingpong service
- [ ] Complete the spambot example app (#444) and use results to track real-world performance.
- [x] Make sure log message coverage is sufficient.
- [ ] Compare performance in debug and release mode
- [ ] Execute tests / example apps with a profiling tool.
- [ ] Look for areas we might be using anti-patterns / non-idiomatic Rust ([this post has some great insights](https://medium.com/@polyglot_factotum/rust-concurrency-patterns-communicate-by-sharing-your-sender-re-visited-9d42e6dfecfa))
Rinse. Repeat.
|
non_process
|
evaluate bottlenecks in comms layer and improve performance some suggested approaches here but you ll think of more better ones add round trip time measurements to pingpong service complete the spambot example app and use results to track real world performance make sure log message coverage is sufficient compare performance in debug and release mode execute tests example apps with a profiling tool look for areas we might be using anti patterns non idiomatic rust rinse repeat
| 0
|
11,237
| 14,014,236,444
|
IssuesEvent
|
2020-10-29 11:36:26
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
support kv processor
|
:Processors Stalled enhancement libbeat needs_team
|
**Describe the enhancement:**
Both logstash and elasticsearch have kv filter while beats don't have.
example input
```
level=info ts=2019-10-23T03:00:07.774549823Z caller=shipper.go:349 msg=\"upload new block\" id=01DQV8X2Y6H7JGZEE8CDQXT3YS
```
example output
```json
{
"level": "info",
"ts": "2019-10-23T03:00:07.774549823Z",
"caller": "shipper.go:349",
"msg": "upload new block",
"id": "01DQV8X2Y6H7JGZEE8CDQXT3YS"
}
```
**Describe a specific use case for the enhancement or feature:**
many programs such as prometheus use `key1=value1 key2=value2` as log format.
|
1.0
|
support kv processor - **Describe the enhancement:**
Both Logstash and Elasticsearch have a kv filter, while Beats doesn't.
example input
```
level=info ts=2019-10-23T03:00:07.774549823Z caller=shipper.go:349 msg=\"upload new block\" id=01DQV8X2Y6H7JGZEE8CDQXT3YS
```
example output
```json
{
"level": "info",
"ts": "2019-10-23T03:00:07.774549823Z",
"caller": "shipper.go:349",
"msg": "upload new block",
"id": "01DQV8X2Y6H7JGZEE8CDQXT3YS"
}
```
**Describe a specific use case for the enhancement or feature:**
many programs such as prometheus use `key1=value1 key2=value2` as log format.
|
process
|
support kv processor describe the enhancement both logstash and elasticsearch have kv filter while beats don t have example input level info ts caller shipper go msg upload new block id example output json level info ts caller shipper go msg upload new block id describe a specific use case for the enhancement or feature many programs such as prometheus use as log format
| 1
|
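To make the requested kv behaviour concrete: a minimal sketch of a `key=value` tokenizer matching the example input/output in the record above. Beats itself is written in Go; Java is used here purely for illustration, the quote handling is an assumed semantic, and every whitespace-separated token is assumed to be a `key=value` pair.
```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KvParserSketch {

    // Splits `key=value` pairs on whitespace, honouring double-quoted values
    // so that msg="upload new block" stays a single value.
    public static Map<String, String> parse(String line) {
        Map<String, String> out = new LinkedHashMap<>();
        int i = 0, n = line.length();
        while (i < n) {
            while (i < n && line.charAt(i) == ' ') i++;          // skip spaces
            int eq = line.indexOf('=', i);
            if (eq < 0) break;
            String key = line.substring(i, eq);
            i = eq + 1;
            String value;
            if (i < n && line.charAt(i) == '"') {                 // quoted value
                int close = line.indexOf('"', i + 1);
                value = line.substring(i + 1, close < 0 ? n : close);
                i = (close < 0 ? n : close + 1);
            } else {                                              // bare value
                int sp = line.indexOf(' ', i);
                value = line.substring(i, sp < 0 ? n : sp);
                i = (sp < 0 ? n : sp);
            }
            out.put(key, value);
        }
        return out;
    }

    public static void main(String[] args) {
        String line = "level=info ts=2019-10-23T03:00:07.774549823Z "
                + "caller=shipper.go:349 msg=\"upload new block\" "
                + "id=01DQV8X2Y6H7JGZEE8CDQXT3YS";
        System.out.println(parse(line));
    }
}
```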
553
| 2,532,190,081
|
IssuesEvent
|
2015-01-23 14:27:38
|
systemjs/systemjs
|
https://api.github.com/repos/systemjs/systemjs
|
closed
|
Update to use classes
|
documentation
|
Have each extension as a class extending the previous extension.
Then use an ES6 module build to connect them together into a single file.
Then work out how to remap the base class in another build for system-CSP so we can inject the script loader extension first.
|
1.0
|
Update to use classes - Have each extension as a class extending the previous extension.
Then use an ES6 module build to connect them together into a single file.
Then work out how to remap the base class in another build for system-CSP so we can inject the script loader extension first.
|
non_process
|
update to use classes have each extension as a class extending the previous extension then use an module build to connect them together into a single file then work out how to remap the base class in another build for system csp so we can inject the script loader extension first
| 0
|
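For the extension-layering idea in the record above, a rough shape-only sketch (SystemJS is JavaScript; the Java class names here are invented): each extension subclasses the previous one, and an alternate build can remap which class sits at the base so a script-loader extension is injected first.
```java
public class ExtensionChainSketch {

    // Hypothetical base loader and extensions; each extension is a class
    // extending the previous one, mirroring the layering described above.
    static class CoreLoader {
        String describe() { return "core"; }
    }

    static class RegisterExtension extends CoreLoader {
        @Override String describe() { return super.describe() + " + register"; }
    }

    static class PluginExtension extends RegisterExtension {
        @Override String describe() { return super.describe() + " + plugins"; }
    }

    // A CSP-oriented build could remap the base class so a script-loader
    // extension is injected first, before the other layers stack on top.
    static class ScriptLoaderBase extends CoreLoader {
        @Override String describe() { return super.describe() + " + script-loader"; }
    }

    public static void main(String[] args) {
        System.out.println(new PluginExtension().describe());
        System.out.println(new ScriptLoaderBase().describe());
    }
}
```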
757,264
| 26,503,492,304
|
IssuesEvent
|
2023-01-18 12:08:45
|
carbon-design-system/carbon-addons-iot-react
|
https://api.github.com/repos/carbon-design-system/carbon-addons-iot-react
|
opened
|
[TableColumnCustomizationModal] Option to sort the Available Columns by `name`
|
type: enhancement :bulb: status: needs triage :mag: status: needs priority :inbox_tray:
|
### What package is this for?
- [x] React
### Summary
Since we have the option to show the column `name` rather than the column `id` in the `TableColumnCustomizationModal`, we also need the option to sort by column name so that users are not confused. A user might be looking for a column name and not see it if it is not displayed alphabetically. Also, sorting by name is important when text is translated.
|
1.0
|
[TableColumnCustomizationModal] Option to sort the Available Columns by `name` - ### What package is this for?
- [x] React
### Summary
Since we have the option to show the column `name` rather than the column `id` in the `TableColumnCustomizationModal`, we also need the option to sort by column name so that users are not confused. A user might be looking for a column name and not see it if it is not displayed alphabetically. Also, sorting by name is important when text is translated.
|
non_process
|
option to sort the available columns by name what package is this for react summary since we have the option to show column name rather than column id in the tablecolumncustomizationmodal then we also need the option to sort by column name so that users are not confused a user might be looking for a column name and not see it if it is not displayed alphabetically also sorting by name is important when text is translated
| 0
|
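One detail worth pinning down from the record above: because column names may be translated, alphabetical sorting should be locale-aware rather than plain codepoint order. A hedged sketch (the component is React; Java's `Collator` is used here only to illustrate the point, with invented column names):
```java
import java.text.Collator;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class ColumnSortSketch {
    public static void main(String[] args) {
        // Hypothetical column names, including accented ones that naive
        // String.compareTo would misplace relative to their neighbours.
        List<String> names = Arrays.asList("Température", "Status", "Date", "État");

        Collator collator = Collator.getInstance(Locale.FRENCH);
        names.sort(collator); // Collator implements Comparator<Object>

        System.out.println(names); // expected: [Date, État, Status, Température]
    }
}
```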
583,891
| 17,400,696,721
|
IssuesEvent
|
2021-08-02 19:12:09
|
thespacedoctor/sherlock
|
https://api.github.com/repos/thespacedoctor/sherlock
|
closed
|
Ship a default algorithm alongside the code
|
priority: 2 type: feature
|
See #111 for the seed of this idea.
The plan is to ship a default algorithm, probably as another yaml settings file, with the code. This will remove the headache of having to update the algorithm in multiple locations and the default algorithm will be version controlled along with the code.
* Sherlock run without a 'search algorithm' section in the sherlock.yaml settings file would fall back to the default algorithm
* For transparency, another command like `sherlock algorithm` will print the algorithm employed to stdout
|
1.0
|
Ship a default algorithm alongside the code - See #111 for the seed of this idea.
The plan is to ship a default algorithm, probably as another yaml settings file, with the code. This will remove the headache of having to update the algorithm in multiple locations and the default algorithm will be version controlled along with the code.
* Sherlock run without a 'search algorithm' section in the sherlock.yaml settings file would fall back to the default algorithm
* For transparency, another command like `sherlock algorithm` will print the algorithm employed to stdout
|
non_process
|
ship a default algorithm alongside the code see for the seed of this idea the plan is to ship a default algorithm probably as another yaml settings file with the code this will remove the headache of having to update the algorithm in multiple locations and the default algorithm will be version controlled along with the code sherlock run without a search algorithm section in the sherlock yaml settings file would fall back to the default algorithm for transparency another command like sherlock algorithm will print the algorithm employed to stdout
| 0
|
96,055
| 12,092,471,394
|
IssuesEvent
|
2020-04-19 15:46:11
|
odin-lang/Odin
|
https://api.github.com/repos/odin-lang/Odin
|
opened
|
Redesign of the Allocator interface
|
compiler-development core-library design implementation
|
Current Interface:
```odin
// Allocation Stuff
Allocator_Mode :: enum byte {
Alloc,
Free,
Free_All,
Resize,
}
Allocator_Proc :: #type proc(allocator_data: rawptr, mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int, flags: u64 = 0, location: Source_Code_Location = #caller_location) -> rawptr;
Allocator :: struct {
procedure: Allocator_Proc,
data: rawptr,
}
```
Proposed Interface (General Idea):
```odin
Allocator_Flag :: enum {
Zero_Memory,
Free_All_Memory,
}
Allocator_Flags :: bit_set[Allocator_Flag; u64];
Allocator_Error :: enum {
None,
Out_Of_Memory,
Invalid_Memory,
}
Allocator_Proc :: #type proc(allocator_data: rawptr,
size, alignment: int, old_memory: []byte,
flags := Allocator_Flags{.Zero_Memory}, location := #caller_location) -> (memory: []byte, err: Allocator_Error);
Allocator :: struct {
procedure: Allocator_Proc,
data: rawptr,
}
```
Current Interface:
* Separates allocation modes and makes the allocator handle them separately, if at all.
* Takes in and returns a `rawptr` for memory
Proposed Interface:
* The procedure interface acts like a (__sane__) version of C's `realloc` meaning that `free` and `alloc` map to `realloc`
* e.g. `malloc(size) == realloc(NULL, size)` and `free(ptr) == realloc(ptr, 0)`
* Takes in and returns a `[]byte` for memory
* Returns an error
* Takes explicit flags for zeroing memory and freeing all memory
|
1.0
|
Redesign of the Allocator interface - Current Interface:
```odin
// Allocation Stuff
Allocator_Mode :: enum byte {
Alloc,
Free,
Free_All,
Resize,
}
Allocator_Proc :: #type proc(allocator_data: rawptr, mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int, flags: u64 = 0, location: Source_Code_Location = #caller_location) -> rawptr;
Allocator :: struct {
procedure: Allocator_Proc,
data: rawptr,
}
```
Proposed Interface (General Idea):
```odin
Allocator_Flag :: enum {
Zero_Memory,
Free_All_Memory,
}
Allocator_Flags :: bit_set[Allocator_Flag; u64];
Allocator_Error :: enum {
None,
Out_Of_Memory,
Invalid_Memory,
}
Allocator_Proc :: #type proc(allocator_data: rawptr,
size, alignment: int, old_memory: []byte,
flags := Allocator_Flags{.Zero_Memory}, location := #caller_location) -> (memory: []byte, err: Allocator_Error);
Allocator :: struct {
procedure: Allocator_Proc,
data: rawptr,
}
```
Current Interface:
* Separates allocation modes and makes the allocator handle them separately, if at all.
* Takes in and returns a `rawptr` for memory
Proposed Interface:
* The procedure interface acts like a (__sane__) version of C's `realloc` meaning that `free` and `alloc` map to `realloc`
* e.g. `malloc(size) == realloc(NULL, size)` and `free(ptr) == realloc(ptr, 0)`
* Takes in and returns a `[]byte` for memory
* Returns an error
* Takes explicit flags for zeroing memory and freeing all memory
|
non_process
|
redesign of the allocator interface current interface odin allocation stuff allocator mode enum byte alloc free free all resize allocator proc type proc allocator data rawptr mode allocator mode size alignment int old memory rawptr old size int flags location source code location caller location rawptr allocator struct procedure allocator proc data rawptr proposed interface general idea odin allocator flag enum zero memory free all memory allocator flags bit set allocator error enum none out of memory invalid memory allocator proc type proc allocator data rawptr size alignment int old memory byte flags allocator flags zero memory location caller location memory byte err allocator error allocator struct procedure allocator proc data rawptr current interface separates allocation modes and makes the allocator handle them separately if at all takes in and returns a rawptr for memory proposed interface the procedure interface acts like a sane version of c s realloc meaning that free and alloc map to realloc e g malloc size realloc null size and free ptr realloc ptr takes in and returns a byte for memory returns an error takes explicit flags for zeroing memory and freeing all memory
| 0
|
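To restate the proposal in the record above outside Odin: there is a single realloc-style entry point, where alloc is resize(nothing -> size) and free is resize(old -> 0), and errors come back explicitly rather than as null pointers. A hedged Java transliteration with invented names:
```java
import java.util.Arrays;

public class ReallocStyleSketch {

    enum AllocatorError { NONE, OUT_OF_MEMORY, INVALID_MEMORY }

    static final class Result {
        final byte[] memory;
        final AllocatorError err;
        Result(byte[] memory, AllocatorError err) { this.memory = memory; this.err = err; }
    }

    // One entry point, like the proposed Allocator_Proc: alloc is
    // resize(empty -> size), free is resize(old -> 0).
    static Result resize(byte[] oldMemory, int size, boolean zeroMemory) {
        if (size < 0) return new Result(null, AllocatorError.INVALID_MEMORY);
        if (size == 0) return new Result(new byte[0], AllocatorError.NONE); // "free"
        byte[] grown = Arrays.copyOf(oldMemory == null ? new byte[0] : oldMemory, size);
        // Arrays.copyOf already zero-fills the tail; the flag is kept only to
        // mirror the proposed Allocator_Flags{.Zero_Memory}.
        return new Result(grown, AllocatorError.NONE);
    }

    public static void main(String[] args) {
        Result a = resize(null, 16, true);            // malloc(16)
        Result b = resize(a.memory, 32, true);        // realloc(ptr, 32)
        Result c = resize(b.memory, 0, false);        // free(ptr)
        System.out.println(a.memory.length + " " + b.memory.length + " " + c.memory.length);
    }
}
```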
92,056
| 8,337,938,607
|
IssuesEvent
|
2018-09-28 12:53:47
|
hazelcast/hazelcast-jet
|
https://api.github.com/repos/hazelcast/hazelcast-jet
|
opened
|
com.hazelcast.jet.core.JobRestartWithSnapshotTest.when_nodeDown_then_jobRestartsFromSnapshot_twoStage
|
test-failure
|
https://hazelcast-l337.ci.cloudbees.com/job/jet-pr-builder/com.hazelcast.jet$hazelcast-jet-core/3208/testReport/junit/com.hazelcast.jet.core/JobRestartWithSnapshotTest/when_nodeDown_then_jobRestartsFromSnapshot_twoStage/
```
Error Message
test timed out after 300000 milliseconds
Stacktrace
org.junit.runners.model.TestTimedOutException: test timed out after 300000 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at com.hazelcast.jet.impl.execution.TaskletExecutionService.awaitWorkerTermination(TaskletExecutionService.java:200)
at com.hazelcast.jet.impl.JetService.shutdown(JetService.java:193)
at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:312)
at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:303)
at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:482)
at com.hazelcast.instance.Node.shutdownServices(Node.java:496)
at com.hazelcast.instance.Node.shutdown(Node.java:453)
at com.hazelcast.instance.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:98)
at com.hazelcast.instance.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:86)
at com.hazelcast.jet.core.JobRestartWithSnapshotTest.when_nodeDown_then_jobRestartsFromSnapshot(JobRestartWithSnapshotTest.java:219)
at com.hazelcast.jet.core.JobRestartWithSnapshotTest.when_nodeDown_then_jobRestartsFromSnapshot_twoStage(JobRestartWithSnapshotTest.java:117)
```
|
1.0
|
com.hazelcast.jet.core.JobRestartWithSnapshotTest.when_nodeDown_then_jobRestartsFromSnapshot_twoStage - https://hazelcast-l337.ci.cloudbees.com/job/jet-pr-builder/com.hazelcast.jet$hazelcast-jet-core/3208/testReport/junit/com.hazelcast.jet.core/JobRestartWithSnapshotTest/when_nodeDown_then_jobRestartsFromSnapshot_twoStage/
```
Error Message
test timed out after 300000 milliseconds
Stacktrace
org.junit.runners.model.TestTimedOutException: test timed out after 300000 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at com.hazelcast.jet.impl.execution.TaskletExecutionService.awaitWorkerTermination(TaskletExecutionService.java:200)
at com.hazelcast.jet.impl.JetService.shutdown(JetService.java:193)
at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:312)
at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:303)
at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:482)
at com.hazelcast.instance.Node.shutdownServices(Node.java:496)
at com.hazelcast.instance.Node.shutdown(Node.java:453)
at com.hazelcast.instance.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:98)
at com.hazelcast.instance.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:86)
at com.hazelcast.jet.core.JobRestartWithSnapshotTest.when_nodeDown_then_jobRestartsFromSnapshot(JobRestartWithSnapshotTest.java:219)
at com.hazelcast.jet.core.JobRestartWithSnapshotTest.when_nodeDown_then_jobRestartsFromSnapshot_twoStage(JobRestartWithSnapshotTest.java:117)
```
|
non_process
|
com hazelcast jet core jobrestartwithsnapshottest when nodedown then jobrestartsfromsnapshot twostage error message test timed out after milliseconds stacktrace org junit runners model testtimedoutexception test timed out after milliseconds at java lang object wait native method at java lang thread join thread java at java lang thread join thread java at com hazelcast jet impl execution taskletexecutionservice awaitworkertermination taskletexecutionservice java at com hazelcast jet impl jetservice shutdown jetservice java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance node shutdownservices node java at com hazelcast instance node shutdown node java at com hazelcast instance lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast jet core jobrestartwithsnapshottest when nodedown then jobrestartsfromsnapshot jobrestartwithsnapshottest java at com hazelcast jet core jobrestartwithsnapshottest when nodedown then jobrestartsfromsnapshot twostage jobrestartwithsnapshottest java
| 0
|
248,674
| 21,050,081,560
|
IssuesEvent
|
2022-03-31 19:45:08
|
sevidmusic/roady
|
https://api.github.com/repos/sevidmusic/roady
|
closed
|
ComponentCrudTestTrait: Refactor `testReadReturnsSpecifiedComponent()` to test whole Components for equality, not just uniqueIds
|
Refactor Tests
|
ComponentCrudTestTrait: Refactor `testReadReturnsSpecifiedComponent()` in [Tests/Unit/interfaces/component/Crud/TestTraits/ComponentCrudTestTrait.php](https://github.com/sevidmusic/roady/blob/roady/Tests/Unit/interfaces/component/Crud/TestTraits/ComponentCrudTestTrait.php) to test whole `Components` for equality, not just `uniqueIds`.
The following test:
```
/**
* Test that read() returns the specified Component.
*
* @return void
*/
public function testReadReturnsSpecifiedComponent(): void
{
$this->componentCrudToTest()->create(
$this->componentCrudToTest()
);
$this->assertEquals(
$this->componentCrudToTest()->getUniqueId(),
$this->componentCrudToTest()->read(
$this->componentCrudToTest()
)->getUniqueId()
);
}
```
Should be refactored to:
```
/**
* Test that read() returns the specified Component.
*
* @return void
*/
public function testReadReturnsSpecifiedComponent(): void
{
$this->componentCrudToTest()->create(
$this->componentCrudToTest()
);
$this->assertEquals(
$this->componentCrudToTest(),
$this->componentCrudToTest()->read(
$this->componentCrudToTest()
),
$this->componentCrudToTest()::class .
'->read() must return the stored Component whose ' .
'assigned ' . Storable::class . ' implementation ' .
'instance matches the specified ' . Storable::class .
' implementation instance.'
);
}
```
|
1.0
|
ComponentCrudTestTrait: Refactor `testReadReturnsSpecifiedComponent()` to test whole Components for equality, not just uniqueIds - ComponentCrudTestTrait: Refactor `testReadReturnsSpecifiedComponent()` in [Tests/Unit/interfaces/component/Crud/TestTraits/ComponentCrudTestTrait.php](https://github.com/sevidmusic/roady/blob/roady/Tests/Unit/interfaces/component/Crud/TestTraits/ComponentCrudTestTrait.php) to test whole `Components` for equality, not just `uniqueIds`.
The following test:
```
/**
* Test that read() returns the specified Component.
*
* @return void
*/
public function testReadReturnsSpecifiedComponent(): void
{
$this->componentCrudToTest()->create(
$this->componentCrudToTest()
);
$this->assertEquals(
$this->componentCrudToTest()->getUniqueId(),
$this->componentCrudToTest()->read(
$this->componentCrudToTest()
)->getUniqueId()
);
}
```
Should be refactored to:
```
/**
* Test that read() returns the specified Component.
*
* @return void
*/
public function testReadReturnsSpecifiedComponent(): void
{
$this->componentCrudToTest()->create(
$this->componentCrudToTest()
);
$this->assertEquals(
$this->componentCrudToTest(),
$this->componentCrudToTest()->read(
$this->componentCrudToTest()
),
$this->componentCrudToTest()::class .
'->read() must return the stored Component whose ' .
'assigned ' . Storable::class . ' implementation ' .
'instance matches the specified ' . Storable::class .
' implementation instance.'
);
}
```
|
non_process
|
componentcrudtesttrait refactor testreadreturnsspecifiedcomponent to test whole components for equality not just uniqueids componentcrudtesttrait refactor testreadreturnsspecifiedcomponent in to test whole components for equality not just uniqueids the following test test that read returns the specified component return void public function testreadreturnsspecifiedcomponent void this componentcrudtotest create this componentcrudtotest this assertequals this componentcrudtotest getuniqueid this componentcrudtotest read this componentcrudtotest getuniqueid should be refactored to test that read returns the specified component return void public function testreadreturnsspecifiedcomponent void this componentcrudtotest create this componentcrudtotest this assertequals this componentcrudtotest this componentcrudtotest read this componentcrudtotest this componentcrudtotest class read must return the stored component whose assigned storable class implementation instance matches the specified storable class implementation instance
| 0
|
43,179
| 23,138,558,611
|
IssuesEvent
|
2022-07-28 16:12:15
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
opened
|
Performance: Investigate and improve `HoverOverlay` performance (Spike)
|
spike team/frontend-platform UI performance 4.0
|
### Description
**Goal**: Render code intelligence faster
**Reason:**
- This is particularly slow here: by default we only fetch code intelligence information on hover.
- Ideally we could do this server side, although that might be a big task
- Could we fetch all known symbols on load, and pre-populate those tokens ready to be used for hover overlays?
- This would help accessibility, as we could make them buttons that are tabbable too.
- This is essentially what the symbol sidebar does
|
True
|
Performance: Investigate and improve `HoverOverlay` performance (Spike) - ### Description
**Goal**: Render code intelligence faster
**Reason:**
- This is particularly slow here: by default we only fetch code intelligence information on hover.
- Ideally we could do this server side, although that might be a big task
- Could we fetch all known symbols on load, and pre-populate those tokens ready to be used for hover overlays?
- This would help accessibility, as we could make them buttons that are tabbable too.
- This is essentially what the symbol sidebar does
|
non_process
|
performance investigate and improve hoveroverlay performance spike description goal render code intelligence faster reason particularly slow here by default we only fetch code intelligence information on hover ideally we could do this server side although that might be a big task could we fetch all known symbols on load and pre populate those tokens ready to be used for hover overlays this would help accessibility as we could make them buttons that are tabbable too this is essentially what the symbol sidebar does
| 0
|
16,899
| 9,921,966,385
|
IssuesEvent
|
2019-06-30 23:09:56
|
BendroCorp/bendrocorp-api
|
https://api.github.com/repos/BendroCorp/bendrocorp-api
|
opened
|
Migrate token use to proper jwt tokens
|
effort: medium enhancement security
|
Migrate from the current token structure to properly formatted JWT tokens.
|
True
|
Migrate token use to proper jwt tokens - Migrate from the current token structure to properly formatted JWT tokens.
|
non_process
|
migrate token use to proper jwt tokens migrate from the current token structure to properly formatted jwt tokens
| 0
|
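For the record above, a "properly formatted" JWT means the standard three-part header.payload.signature form with a verifiable signature. The repository is a Rails API, so the following is shape-only: a hedged sketch using the auth0 java-jwt API, with an invented secret and claims.
```java
import java.util.Date;

import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.interfaces.DecodedJWT;

public class JwtSketch {
    public static void main(String[] args) {
        // Invented secret and claims, purely for illustration.
        Algorithm hmac = Algorithm.HMAC256("change-me");

        // Issue a standard three-part token: header.payload.signature.
        String token = JWT.create()
                .withSubject("user-42")
                .withExpiresAt(new Date(System.currentTimeMillis() + 3600_000L))
                .sign(hmac);

        // Verifying checks the signature and registered claims such as exp.
        DecodedJWT decoded = JWT.require(hmac).build().verify(token);
        System.out.println(decoded.getSubject()); // user-42
    }
}
```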
41,063
| 10,606,606,006
|
IssuesEvent
|
2019-10-11 00:04:35
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
X Days Ago Time Filter
|
.Duplicate .Proposal Query Builder
|
I may have missed a trick here, but I need to generate reports for `x days ago`: not after or before x days, but exactly 4 days from today.
Without writing this as an SQL query, is this achievable?
|
1.0
|
X Days Ago Time Filter - I may have missed a trick here, but I need to generate reports for `x days ago`, not after or before x days, but I need to look at exactly 4 days from today.
Without writing this as an SQL query, is this achievable?
|
non_process
|
x days ago time filter i may have missed a trick here but i need to generate reports for x days ago not after or before x days but i need to look at exactly days from today without writing this as an sql query is this achievable
| 0
|
2,986
| 3,052,049,107
|
IssuesEvent
|
2015-08-12 12:45:32
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
Watch broken in test.unit.dart
|
comp: build/dev-productivity comp: build/pipeline P1: urgent
|
Steps to reproduce:
* run `test.unit.dart`
* wait for tests to finish
* touch one of the files (ex. `compiler/integration_spec.ts`)
Problem:
> [10:46:33] Starting '!build/tree.dart'...
[10:46:35] '!build/tree.dart' errored after 1.83 s
[10:46:35] TypeError: [TSToDartTranspiler]: Cannot read property '0' of undefined
at FacadeConverter.getFileAndName (/source/facade_converter.ts:147:33)
at FacadeConverter.visitTypeName (/source/facade_converter.ts:122:30)
at ModuleTranspiler.visitNode (/source/module.ts:71:17)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.TranspilerBase.visitList (/source/base.ts:35:12)
at ModuleTranspiler.visitNode (/source/module.ts:57:14)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.visitNode (/source/module.ts:43:16)
This error makes running tests in Dart a pretty miserable experience, so I'm just short of flagging it as P0.
//cc: @mprobst @IgorMinar
|
2.0
|
Watch broken in test.unit.dart - Steps to reproduce:
* run `test.unit.dart`
* wait for tests to finish
* touch one of the files (ex. `compiler/integration_spec.ts`)
Problem:
> [10:46:33] Starting '!build/tree.dart'...
[10:46:35] '!build/tree.dart' errored after 1.83 s
[10:46:35] TypeError: [TSToDartTranspiler]: Cannot read property '0' of undefined
at FacadeConverter.getFileAndName (/source/facade_converter.ts:147:33)
at FacadeConverter.visitTypeName (/source/facade_converter.ts:122:30)
at ModuleTranspiler.visitNode (/source/module.ts:71:17)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.TranspilerBase.visitList (/source/base.ts:35:12)
at ModuleTranspiler.visitNode (/source/module.ts:57:14)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.visitNode (/source/module.ts:43:16)
This error makes running tests in Dart a pretty miserable experience, so I'm just short of flagging it as P0.
//cc: @mprobst @IgorMinar
|
non_process
|
watch broken in test unit dart steps to reproduce run test unit dart wait for tests to finish touch one of the files ex compiler integration spec ts problem starting build tree dart build tree dart errored after s typeerror cannot read property of undefined at facadeconverter getfileandname source facade converter ts at facadeconverter visittypename source facade converter ts at moduletranspiler visitnode source module ts at transpiler visit source main ts at moduletranspiler transpilerbase visit source base ts at moduletranspiler transpilerbase visitlist source base ts at moduletranspiler visitnode source module ts at transpiler visit source main ts at moduletranspiler transpilerbase visit source base ts at moduletranspiler visitnode source module ts this error makes running tests in dart pretty miserable experience so i m short of flagging is as cc mprobst igorminar
| 0
|
6,683
| 9,805,257,128
|
IssuesEvent
|
2019-06-12 08:34:26
|
EthVM/EthVM
|
https://api.github.com/repos/EthVM/EthVM
|
closed
|
U.block must not be null
|
bug priority:high project:processing
|
* **I'm submitting a ...**
- [ ] feature request
- [x] bug report
* **Bug Report**
I obtained the following exception on the parity-source connector:
Connector settings:
```
connector.class=com.ethvm.kafka.connect.sources.web3.ParitySourceConnector
schema.registry.url=http://kafka-schema-registry:8081
max.request.size=52428800
tasks.max=3
ws.url=ws://ip:port
producer.max.request.size=52428800
name=parity-source
```
Exception:
```
java.lang.IllegalStateException: u.block must not be null
at com.ethvm.kafka.connect.sources.web3.sources.ParityBlocksSource.fetchRange(ParityBlocksSource.kt:164)
at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.poll(AbstractParityEntitySource.kt:55)
at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.poll(ParitySourceTask.kt:65)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
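For context, one plausible cause (an assumption, not confirmed against the EthVM sources): the connector fetches a fixed range of blocks, entries near the chain head can come back null, and Kotlin's non-null check then fails. A minimal Python sketch of that failure mode:
```python
# Hypothetical sketch of the failure mode: `get_block` stands in for a
# JSON-RPC client call that returns None for blocks the node has not
# imported yet (e.g. when the requested range reaches past the chain head).
def fetch_range(get_block, start: int, end: int) -> list:
    blocks = []
    for number in range(start, end + 1):
        block = get_block(number)
        if block is None:
            # the Kotlin connector fails here with "u.block must not be null"
            raise ValueError(f"block {number} must not be null")
        blocks.append(block)
    return blocks
```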
|
1.0
|
U.block must not be null - * **I'm submitting a ...**
- [ ] feature request
- [x] bug report
* **Bug Report**
I obtained the following exception on the parity-source connector:
Connector settings:
```
connector.class=com.ethvm.kafka.connect.sources.web3.ParitySourceConnector
schema.registry.url=http://kafka-schema-registry:8081
max.request.size=52428800
tasks.max=3
ws.url=ws://ip:port
producer.max.request.size=52428800
name=parity-source
```
Exception:
```
java.lang.IllegalStateException: u.block must not be null
at com.ethvm.kafka.connect.sources.web3.sources.ParityBlocksSource.fetchRange(ParityBlocksSource.kt:164)
at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.poll(AbstractParityEntitySource.kt:55)
at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.poll(ParitySourceTask.kt:65)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
|
process
|
u block must not be null i m submitting a feature request bug report bug report i obtained the following exception on the parity source connector connector settings connector class com ethvm kafka connect sources paritysourceconnector schema registry url max request size tasks max ws url ws ip port producer max request size name parity source exception java lang illegalstateexception u block must not be null at com ethvm kafka connect sources sources parityblockssource fetchrange parityblockssource kt at com ethvm kafka connect sources sources abstractparityentitysource poll abstractparityentitysource kt at com ethvm kafka connect sources paritysourcetask poll paritysourcetask kt at org apache kafka connect runtime workersourcetask poll workersourcetask java at org apache kafka connect runtime workersourcetask execute workersourcetask java at org apache kafka connect runtime workertask dorun workertask java at org apache kafka connect runtime workertask run workertask java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
| 1
|
11,569
| 14,441,010,636
|
IssuesEvent
|
2020-12-07 16:15:31
|
googleapis/google-api-python-client
|
https://api.github.com/repos/googleapis/google-api-python-client
|
closed
|
Automate publishing of `docs/dyn`
|
type: process
|
These docs are manually updated by refreshing the docs/dyn directory in `master` and `gh-pages`, but this should really be automated to run nightly.
|
1.0
|
Automate publishing of `docs/dyn` - These docs are manually updated by refreshing the docs/dyn directory in `master` and `gh-pages`, but this should really be automated to run nightly.
|
process
|
automate publishing of docs dyn these docs are manually updated by refreshing the docs dyn directory in master and gh pages but this should really be automated to run nightly
| 1
|
12,614
| 7,979,058,489
|
IssuesEvent
|
2018-07-17 20:24:23
|
zcash/zcash
|
https://api.github.com/repos/zcash/zcash
|
closed
|
Include authenticatable change-indicator in change note memo fields.
|
RPC interface change handling usability
|
**Synopsis:**
When viewing `z_listreceivedbyaddress` output or potentially other future output-note RPC calls (ex: #2910), users or UX designers probably want to distinguish between funds coming from "outside" versus "change notes", which are necessarily generated by the wallet.
I added the usability flag because it seems like almost every good wallet UX would need this distinction.
**Implementation Requirements:**
A wallet could include a change-indicator memo field. To be secure, it would need to be authenticatable by the holder of the viewkey, such that only the spending authority secret key could create valid change-indicators. (Also, they need to be secure against replay attacks assuming a malicious viewkey holder.)
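A minimal sketch of one scheme with these properties, assuming a dedicated indicator keypair whose signing half is held by the spend authority and whose public half is distributed alongside the viewing key (the key names, memo layout, and use of Ed25519 here are all hypothetical, not Zcash's design). Binding the signature to the note commitment is what blocks replay onto other notes:
```python
# Hypothetical change-indicator scheme: only the spend-authority side can
# sign; any viewkey holder with the public key can verify; the note
# commitment inside the signed message prevents replay onto other notes.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

indicator_key = Ed25519PrivateKey.generate()   # held by the spend authority
indicator_pub = indicator_key.public_key()     # shared with viewkey holders

def make_change_indicator(note_commitment: bytes) -> bytes:
    """Memo-field payload marking a note as wallet-generated change."""
    return indicator_key.sign(b"change-note:" + note_commitment)

def verify_change_indicator(note_commitment: bytes, memo: bytes) -> bool:
    try:
        indicator_pub.verify(memo, b"change-note:" + note_commitment)
        return True
    except InvalidSignature:
        return False
```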
**Related:**
- We'd want to provide a standardized encoding of this change-indicator, thus #1849.
- If the RPC calls (such as `z_listreceivedbyaddress`) detect these and treat them differently (such as by adding a `"change": True` field), this would imply #2933 or at least a special case of it.
- #2911 *might be* a case where a user is confused by the presence of change inputs.
|
True
|
Include authenticatable change-indicator in change note memo fields. - **Synopsis:**
When viewing `z_listreceivedbyaddress` output or potentially other future output-note RPC calls (ex: #2910), users or UX designers probably want to distinguish between funds coming from "outside" versus "change notes", which are necessarily generated by the wallet.
I added the usability flag because it seems like almost every good wallet UX would need this distinction.
**Implementation Requirements:**
A wallet could include a change-indicator memo field. To be secure, it would need to be authenticatable by the holder of the viewkey, such that only the spending authority secret key could create valid change-indicators. (Also, they need to be secure against replay attacks assuming a malicious viewkey holder.)
**Related:**
- We'd want to provide a standardized encoding of this change-indicator, thus #1849.
- If the RPC calls (such as `z_listreceivedbyaddress`) detect these and treat them differently (such as by adding a `"change": True` field), this would imply #2933 or at least a special case of it.
- #2911 *might be* a case where a user is confused by the presence of change inputs.
|
non_process
|
include authenticatable change indicator in change note memo fields synopsis when viewing z listreceivedbyaddress output or potentially other future output note rpc calls ex users or ux designs probably want to distinguish between funds coming from outside versus change notes which are necessarily generated by the wallet i added the usability flag because it seems like almost every good wallet ux would need this distinction implementation requirements a wallet could include a change indicator memo field to be secure it would need to be authenticatable by the holder of the viewkey such that only the spending authority secret key could create valid change indicators also they need to be secure against replay attacks assuming a malicious viewkey holder related we d want to provide a standardized encoding of this change indicator thus if the rpc calls such as z listreceivedbyaddress detect these and treat them differently such as by adding a change true field this would imply or at least a special case of it might be a case where a user is confused by the presence of change inputs
| 0
|
645,137
| 20,995,960,499
|
IssuesEvent
|
2022-03-29 13:31:44
|
nanopb/nanopb
|
https://api.github.com/repos/nanopb/nanopb
|
closed
|
Pass arguments to protoc
|
Priority-Low Type-Enhancement FixedInGit
|
Hi,
I use nanopb within my cmake project and need to pass
`--experimental_allow_proto3_optional` to protoc
I use nanopb 0.4.5 with the bundled protoc 3.13.0.
The test case proto3_optional adds the flag as an environment variable in the SConscript file, but this does not work for me.
What is the correct way to pass the flag to protoc?
I added a ${nanopb_protoc_flags} variable to the custom command in FindNanopb.cmake:276.
I can create a pull request later if you like this approach.
This is maybe related to #628
Thanks.
|
1.0
|
Pass arguments to protoc - Hi,
I use nanopb within my cmake project and need to pass
`--experimental_allow_proto3_optional` to protoc
I use nanopb 0.4.5 with the bundled protoc 3.13.0.
The test case proto3_optional adds the flag as an environment variable in the SConscript file, but this does not work for me.
What is the correct way to pass the flag to protoc?
I added a ${nanopb_protoc_flags} variable to the custom command in FindNanopb.cmake:276.
I can create a pull request later if you like this approach.
This is maybe related to #628
Thanks.
|
non_process
|
pass arguments to protoc hi i use nanopb within my cmake project and need to pass experimental allow optional to protoc i use nanopb with the bundled protoc the test case optional adds the flag as a env variable in the sconscript file but this does not work for me what is the correct way to pass the flag to protoc i added a nanopb protoc flags variable to the custom command in findnanopb cmake i can create a pull request later if you like this approach this is maybe related to thanks
| 0
|
2,055
| 4,863,656,491
|
IssuesEvent
|
2016-11-14 15:59:32
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
opened
|
shape analysis
|
enhancement sct_process_segmentation sct_register_multimodal
|
idea: add numerical outputs from segmentation that are sensitive to abnormal cord shape (compression); see the sketch after this list for the first metric.
- [ ] RL/AP eigenvalue ratio + max derivative along Z (Julien)
- [ ] rotation angle + max derivative along Z (Julien)
- [ ] columnwise RMS displacement + max derivative along Z (Julien)
- [ ] CSA max derivative along Z (Julien)
- [ ] contour analysis (Ben)
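A minimal numpy sketch of the first metric, assuming a binary 3-D segmentation array ordered (RL, AP, Z); illustrative only, not SCT code:
```python
# Illustrative only (not SCT code): per-slice RL/AP eigenvalue ratio of a
# binary cord segmentation, plus its derivative along Z.
import numpy as np

def rl_ap_eigen_ratio(seg):
    """seg: binary array of shape (RL, AP, Z); returns ratios and their dZ."""
    ratios = []
    for z in range(seg.shape[2]):
        pts = np.argwhere(seg[:, :, z])        # (n, 2) voxel coordinates
        if len(pts) < 3:                       # not enough voxels for a shape
            ratios.append(np.nan)
            continue
        cov = np.cov(pts.T)                    # 2x2 covariance of the slice
        evals = np.sort(np.linalg.eigvalsh(cov))
        ratios.append(evals[1] / max(evals[0], 1e-9))
    ratios = np.asarray(ratios)
    return ratios, np.gradient(ratios)         # metric and its change along Z
```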
|
1.0
|
shape analysis - idea: add numerical outputs from segmentation that are sensitive to abnormal cord shape (compression).
- [ ] RL/AP eigenvalue ratio + max derivative along Z (Julien)
- [ ] rotation angle + max derivative along Z (Julien)
- [ ] columnwise RMS displacement + max derivative along Z (Julien)
- [ ] CSA max derivative along Z (Julien)
- [ ] contour analysis (Ben)
|
process
|
shape analysis idea add numerical outputs from segmentation that are sensitive to abnormal cord shape compression rl ap eigenvalue ratio max derivative along z julien rotation angle max derivative along z julien columnwise rms displacement max derivative along z julien csa max derivative along z julien contour analysis ben
| 1
|
935
| 3,398,474,829
|
IssuesEvent
|
2015-12-02 04:06:36
|
DarkEnergyScienceCollaboration/SRM_Task_List
|
https://api.github.com/repos/DarkEnergyScienceCollaboration/SRM_Task_List
|
opened
|
T:pd3.4:Outputs
|
ci DC3 DC3 SW: Implement the DESC L3 pipeline. Reprocess DC3 Data and Make Accessible for Analysis SW
|
DC3 RQ: Identify the L3 pipeline outputs needed to be captured in the DESC catalog.
|
1.0
|
T:pd3.4:Outputs - DC3 RQ: Identify the L3 pipeline outputs needed to be captured in the DESC catalog.
|
process
|
t outputs rq identify the pipeline outputs needed to be captured in the desc catalog
| 1
|
14,385
| 17,403,526,082
|
IssuesEvent
|
2021-08-03 00:18:20
|
ltechkorea/inference_results_v1.0
|
https://api.github.com/repos/ltechkorea/inference_results_v1.0
|
opened
|
[ BUG ] bert generate_engine error
|
bug natural language processing
|
<!--
Please add the corresponding category label.
-->
## **Describe the bug**
> A clear and concise description of what the bug is.
- bert generate_engine build error
- Bug description
### **Screenshots or Logs**
If applicable, add screenshots to help explain your problem.
```
[2021-08-02 17:29:40,907 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[2021-08-02 17:29:48,523 main.py:701 INFO] Detected System ID: V100S-PCIE-32GBx8
==============main.py================
system.get_id V100S-PCIE-32GBx8
main_args= {'action': 'generate_engines', 'audit_test': None, 'benchmarks': 'bert', 'config_ver': 'default', 'configs': '', 'gpu_only': False, 'no_child_process': False, 'no_gpu': False, 'power': False, 'profile': None, 'scenarios': 'Offline,Server', 'system_name': None} system= V100S-PCIE-32GBx8
=====================================
[2021-08-02 17:29:48,535 main.py:529 INFO] Using config files: configs/bert/Offline/config.json,configs/bert/Server/config.json
[2021-08-02 17:29:48,535 __init__.py:341 INFO] Parsing config file configs/bert/Offline/config.json ...
[2021-08-02 17:29:48,537 __init__.py:341 INFO] Parsing config file configs/bert/Server/config.json ...
[2021-08-02 17:29:48,538 main.py:542 INFO] Processing config "V100S-PCIE-32GBx8_bert_Offline"
[2021-08-02 17:29:48,613 main.py:82 INFO] Building engines for bert benchmark in Offline scenario...
[2021-08-02 17:29:48,614 main.py:102 INFO] Building GPU engine for V100S-PCIE-32GBx8_bert_Offline
[2021-08-02 17:29:55,260 bert_var_seqlen.py:63 INFO] Using workspace size: 7,516,192,768
[2021-08-02 17:29:55,699 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
Replacing l0_fc_qkv with small-tile GEMM plugin, with fairshare cache size 120.
#assertionsrc/smallTileGEMMPlugin.cu,588
Traceback (most recent call last):
File "code/main.py", line 708, in <module>
main(main_args, system)
File "code/main.py", line 634, in main
launch_handle_generate_engine(*_gen_args, **_gen_kwargs)
File "code/main.py", line 62, in launch_handle_generate_engine
raise RuntimeError("Building engines failed!")
RuntimeError: Building engines failed!
Makefile:632: recipe for target 'generate_engines' failed
make: *** [generate_engines] Error 1
(mlperf) dong@mlperf-inference-dong:/work$ exit
exit
make: *** [Makefile:360: launch_docker] Error 2
```
## **Expected behavior**
> A clear and concise description of what you expected to happen.
- Description of the expected behavior
- Description of the expected behavior
## **Possible Solution**
1. 1st solution
2. 2nd solution
## **Additional context**
> Add any other context about the problem here.
- Additional information
- Additional information
|
1.0
|
[ BUG ] bert generate_engine error - <!--
Please add the corresponding category label.
-->
## **Describe the bug**
> A clear and concise description of what the bug is.
- bert generate_engine build error
- Bug description
### **Screenshots or Logs**
If applicable, add screenshots to help explain your problem.
```
[2021-08-02 17:29:40,907 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[2021-08-02 17:29:48,523 main.py:701 INFO] Detected System ID: V100S-PCIE-32GBx8
==============main.py================
system.get_id V100S-PCIE-32GBx8
main_args= {'action': 'generate_engines', 'audit_test': None, 'benchmarks': 'bert', 'config_ver': 'default', 'configs': '', 'gpu_only': False, 'no_child_process': False, 'no_gpu': False, 'power': False, 'profile': None, 'scenarios': 'Offline,Server', 'system_name': None} system= V100S-PCIE-32GBx8
=====================================
[2021-08-02 17:29:48,535 main.py:529 INFO] Using config files: configs/bert/Offline/config.json,configs/bert/Server/config.json
[2021-08-02 17:29:48,535 __init__.py:341 INFO] Parsing config file configs/bert/Offline/config.json ...
[2021-08-02 17:29:48,537 __init__.py:341 INFO] Parsing config file configs/bert/Server/config.json ...
[2021-08-02 17:29:48,538 main.py:542 INFO] Processing config "V100S-PCIE-32GBx8_bert_Offline"
[2021-08-02 17:29:48,613 main.py:82 INFO] Building engines for bert benchmark in Offline scenario...
[2021-08-02 17:29:48,614 main.py:102 INFO] Building GPU engine for V100S-PCIE-32GBx8_bert_Offline
[2021-08-02 17:29:55,260 bert_var_seqlen.py:63 INFO] Using workspace size: 7,516,192,768
[2021-08-02 17:29:55,699 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
Replacing l0_fc_qkv with small-tile GEMM plugin, with fairshare cache size 120.
#assertionsrc/smallTileGEMMPlugin.cu,588
Traceback (most recent call last):
File "code/main.py", line 708, in <module>
main(main_args, system)
File "code/main.py", line 634, in main
launch_handle_generate_engine(*_gen_args, **_gen_kwargs)
File "code/main.py", line 62, in launch_handle_generate_engine
raise RuntimeError("Building engines failed!")
RuntimeError: Building engines failed!
Makefile:632: recipe for target 'generate_engines' failed
make: *** [generate_engines] Error 1
(mlperf) dong@mlperf-inference-dong:/work$ exit
exit
make: *** [Makefile:360: launch_docker] Error 2
```
## **Expected behavior**
> A clear and concise description of what you expected to happen.
- Description of the expected behavior
- Description of the expected behavior
## **Possible Solution**
1. 1st solution
2. 2nd solution
## **Additional context**
> Add any other context about the problem here.
- Additional information
- Additional information
|
process
|
bert generate engine error label에 해당 카테고리 추가해 주세요 describe the bug a clear and concise description of what the bug is bert generate engine 생성 에러 버그 설명 screenshots or logs if applicable add screenshots to help explain your problem running command cuda visibile order pci bus id nvidia smi query gpu gpu name pci device id uuid format csv detected system id pcie main py system get id pcie main args action generate engines audit test none benchmarks bert config ver default configs gpu only false no child process false no gpu false power false profile none scenarios offline server system name none system pcie using config files configs bert offline config json configs bert server config json parsing config file configs bert offline config json parsing config file configs bert server config json processing config pcie bert offline building engines for bert benchmark in offline scenario building gpu engine for pcie bert offline using workspace size running command cuda visibile order pci bus id nvidia smi query gpu gpu name pci device id uuid format csv warning tensor datatype is determined at build time for tensors not marked as input or output replacing fc qkv with small tile gemm plugin with fairshare cache size assertionsrc smalltilegemmplugin cu traceback most recent call last file code main py line in main main args system file code main py line in main launch handle generate engine gen args gen kwargs file code main py line in launch handle generate engine raise runtimeerror building engines failed runtimeerror building engines failed makefile recipe for target generate engines failed make error mlperf dong mlperf inference dong work exit exit make error expected behavior a clear and concise description of what you expected to happen 정상 동작 설명 정상 동작 설명 possible solution solution solution additional context add any other context about the problem here 추가 정보 추가 정보
| 1
|
13,487
| 16,018,537,997
|
IssuesEvent
|
2021-04-20 19:15:31
|
anlsys/aml
|
https://api.github.com/repos/anlsys/aml
|
closed
|
Define pinned nix shell environments for the CI to properly handle our collection of tools
|
focus:dev process:proposal status:feedback
|
In GitLab by @perarnau on Apr 15, 2020, 16:47
The `gitlab-ci.yml` file contains several nix-run commands that are just ugly, and we are starting to have some issues with the definition/versioning of tooling like clang-format, whose configuration format is heavily dependent on its version.
The proper solution should be to define nix environments or collections of packages that we can use to launch commands easily.
|
1.0
|
Define pinned nix shell environments for the CI to properly handle our collection of tools - In GitLab by @perarnau on Apr 15, 2020, 16:47
The `gitlab-ci.yml` file contains several nix-run commands that are just ugly, and we are starting to have some issues with the definition/versioning of tooling like clang-format, whose configuration format is heavily dependent on its version.
The proper solution should be to define nix environments or collections of packages that we can use to launch commands easily.
|
process
|
define pinned nix shell environments for the ci to properly handle our collection of tools in gitlab by perarnau on apr the gitlab ci yml file contains several nix run commands that are just ugly and we are starting to have some issues with the definition versioning of tooling like clang format whose configuration format is heavily dependent on its version the proper solution should be to define nix environments or collection of packages that we can use to launch commands easily
| 1
|
2,207
| 5,048,518,610
|
IssuesEvent
|
2016-12-20 13:13:24
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
closed
|
Process not displayed in process list when using activiti enterprise api
|
browser: all bug comp: activiti-processList
|
Because there is no appId within the Activiti enterprise API, only the Activiti private API can be used to create a process.
https://github.com/Alfresco/alfresco-js-api/blob/master/src/alfresco-activiti-rest-api/docs/ProcessApi.md#startNewProcessInstance
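For reference, a minimal sketch of starting a process through the enterprise REST endpoint (the URL path, credentials, and payload fields here are assumptions based on the Activiti enterprise REST docs, not verified); note that the payload carries no appId, which is the root of the behaviour described below:
```python
# Hypothetical sketch: start a process instance via the enterprise REST API.
# The payload has no appId field, so the created instance is not tied to an
# app and only shows up in lists that query with appId = null.
import requests

resp = requests.post(
    "https://activiti.example.com/activiti-app/api/enterprise/process-instances",
    auth=("admin", "password"),                    # placeholder credentials
    json={"processDefinitionId": "myProcess:1:4",  # hypothetical definition id
          "name": "demo instance"},
)
resp.raise_for_status()
print(resp.json())
```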
Using activiti enterprise api
1. Select an app
2. Create a new process
**Expected results**
Process is displayed in the process list
**Actual results**
Process not displayed in process list
The process does, however, display in the process list of the task app, because the appId is null and so all processes are displayed
|
1.0
|
Process not displayed in process list when using activiti enterprise api - Beacuse there is no appid within the activiti enterprise api only the activiti private api can be used to create a process.
https://github.com/Alfresco/alfresco-js-api/blob/master/src/alfresco-activiti-rest-api/docs/ProcessApi.md#startNewProcessInstance
Using activiti enterprise api
1. Select an app
2. Create a new process
**Expected results**
Process is displayed in the process list
**Actual results**
Process not displayed in process list
The process does, however, display in the process list of the task app, because the appId is null and so all processes are displayed
|
process
|
process not displayed in process list when using activiti enterprise api because there is no appid within the activiti enterprise api only the activiti private api can be used to create a process using activiti enterprise api select an app create a new process expected results process is displayed in process list actual results process not displayed in process list process does however display in the process list of the task app because the appid is null and so all are displayed
| 1
|
19,737
| 26,085,554,092
|
IssuesEvent
|
2022-12-26 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 26 Dec 22
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Fast Event-based Optical Flow Estimation by Triplet Matching
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2212.12218
- **Pdf link:** https://arxiv.org/pdf/2212.12218
- **Abstract**
Event cameras are novel bio-inspired sensors that offer advantages over traditional cameras (low latency, high dynamic range, low power, etc.). Optical flow estimation methods that work on packets of events trade off speed for accuracy, while event-by-event (incremental) methods have strong assumptions and have not been tested on common benchmarks that quantify progress in the field. Towards applications on resource-constrained devices, it is important to develop optical flow algorithms that are fast, light-weight and accurate. This work leverages insights from neuroscience, and proposes a novel optical flow estimation scheme based on triplet matching. The experiments on publicly available benchmarks demonstrate its capability to handle complex scenes with comparable results as prior packet-based algorithms. In addition, the proposed method achieves the fastest execution time (> 10 kHz) on standard CPUs as it requires only three events in estimation. We hope that our research opens the door to real-time, incremental motion estimation methods and applications in real-world scenarios.
## Keyword: event camera
### Fast Event-based Optical Flow Estimation by Triplet Matching
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2212.12218
- **Pdf link:** https://arxiv.org/pdf/2212.12218
- **Abstract**
Event cameras are novel bio-inspired sensors that offer advantages over traditional cameras (low latency, high dynamic range, low power, etc.). Optical flow estimation methods that work on packets of events trade off speed for accuracy, while event-by-event (incremental) methods have strong assumptions and have not been tested on common benchmarks that quantify progress in the field. Towards applications on resource-constrained devices, it is important to develop optical flow algorithms that are fast, light-weight and accurate. This work leverages insights from neuroscience, and proposes a novel optical flow estimation scheme based on triplet matching. The experiments on publicly available benchmarks demonstrate its capability to handle complex scenes with comparable results as prior packet-based algorithms. In addition, the proposed method achieves the fastest execution time (> 10 kHz) on standard CPUs as it requires only three events in estimation. We hope that our research opens the door to real-time, incremental motion estimation methods and applications in real-world scenarios.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### On Calibrating Semantic Segmentation Models: Analysis and An Algorithm
- **Authors:** Dongdong Wang, Boqing Gong, Liqiang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12053
- **Pdf link:** https://arxiv.org/pdf/2212.12053
- **Abstract**
We study the problem of semantic segmentation calibration. For image classification, lots of existing solutions are proposed to alleviate model miscalibration of confidence. However, to date, confidence calibration research on semantic segmentation is still limited. We provide a systematic study on the calibration of semantic segmentation models and propose a simple yet effective approach. First, we find that model capacity, crop size, multi-scale testing, and prediction correctness have impact on calibration. Among them, prediction correctness, especially misprediction, is more important to miscalibration due to over-confidence. Next, we propose a simple, unifying, and effective approach, namely selective scaling, by separating correct/incorrect prediction for scaling and more focusing on misprediction logit smoothing. Then, we study popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration. We conduct extensive experiments with a variety of benchmarks on both in-domain and domain-shift calibration, and show that selective scaling consistently outperforms other methods.
### Image Classification with Small Datasets: Overview and Benchmark
- **Authors:** L. Brigato, B. Barz, L. Iocchi, J. Denzler
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)
- **Arxiv link:** https://arxiv.org/abs/2212.12478
- **Pdf link:** https://arxiv.org/pdf/2212.12478
- **Abstract**
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos
- **Authors:** Joo Chan Lee, Daniel Rho, Jong Hwan Ko, Eunbyung Park
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12294
- **Pdf link:** https://arxiv.org/pdf/2212.12294
- **Abstract**
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to its entire frame, has recently emerged as an alternative method to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method for incorporating flow information into frame-wise representations to exploit the temporal redundancy across the frames in videos inspired by the standard video codecs. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, improving the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among the methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using the group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely-used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
## Keyword: RAW
### Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography
- **Authors:** Ilya Chugunov, Yuxuan Zhang, Felix Heide
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12324
- **Pdf link:** https://arxiv.org/pdf/2212.12324
- **Abstract**
Modern mobile burst photography pipelines capture and merge a short sequence of frames to recover an enhanced image, but often disregard the 3D nature of the scene they capture, treating pixel motion between images as a 2D aggregation problem. We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth. To this end, we devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion. Our plane plus depth model is trained end-to-end, and performs coarse-to-fine refinement by controlling which multi-resolution volume features the network has access to at what time during training. We validate the method experimentally, and demonstrate geometrically accurate depth reconstructions with no additional hardware or separate data pre-processing and pose-estimation steps.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Mon, 26 Dec 22 - ## Keyword: events
### Fast Event-based Optical Flow Estimation by Triplet Matching
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2212.12218
- **Pdf link:** https://arxiv.org/pdf/2212.12218
- **Abstract**
Event cameras are novel bio-inspired sensors that offer advantages over traditional cameras (low latency, high dynamic range, low power, etc.). Optical flow estimation methods that work on packets of events trade off speed for accuracy, while event-by-event (incremental) methods have strong assumptions and have not been tested on common benchmarks that quantify progress in the field. Towards applications on resource-constrained devices, it is important to develop optical flow algorithms that are fast, light-weight and accurate. This work leverages insights from neuroscience, and proposes a novel optical flow estimation scheme based on triplet matching. The experiments on publicly available benchmarks demonstrate its capability to handle complex scenes with comparable results as prior packet-based algorithms. In addition, the proposed method achieves the fastest execution time (> 10 kHz) on standard CPUs as it requires only three events in estimation. We hope that our research opens the door to real-time, incremental motion estimation methods and applications in real-world scenarios.
## Keyword: event camera
### Fast Event-based Optical Flow Estimation by Triplet Matching
- **Authors:** Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2212.12218
- **Pdf link:** https://arxiv.org/pdf/2212.12218
- **Abstract**
Event cameras are novel bio-inspired sensors that offer advantages over traditional cameras (low latency, high dynamic range, low power, etc.). Optical flow estimation methods that work on packets of events trade off speed for accuracy, while event-by-event (incremental) methods have strong assumptions and have not been tested on common benchmarks that quantify progress in the field. Towards applications on resource-constrained devices, it is important to develop optical flow algorithms that are fast, light-weight and accurate. This work leverages insights from neuroscience, and proposes a novel optical flow estimation scheme based on triplet matching. The experiments on publicly available benchmarks demonstrate its capability to handle complex scenes with comparable results as prior packet-based algorithms. In addition, the proposed method achieves the fastest execution time (> 10 kHz) on standard CPUs as it requires only three events in estimation. We hope that our research opens the door to real-time, incremental motion estimation methods and applications in real-world scenarios.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### On Calibrating Semantic Segmentation Models: Analysis and An Algorithm
- **Authors:** Dongdong Wang, Boqing Gong, Liqiang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12053
- **Pdf link:** https://arxiv.org/pdf/2212.12053
- **Abstract**
We study the problem of semantic segmentation calibration. For image classification, lots of existing solutions are proposed to alleviate model miscalibration of confidence. However, to date, confidence calibration research on semantic segmentation is still limited. We provide a systematic study on the calibration of semantic segmentation models and propose a simple yet effective approach. First, we find that model capacity, crop size, multi-scale testing, and prediction correctness have impact on calibration. Among them, prediction correctness, especially misprediction, is more important to miscalibration due to over-confidence. Next, we propose a simple, unifying, and effective approach, namely selective scaling, by separating correct/incorrect prediction for scaling and more focusing on misprediction logit smoothing. Then, we study popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration. We conduct extensive experiments with a variety of benchmarks on both in-domain and domain-shift calibration, and show that selective scaling consistently outperforms other methods.
### Image Classification with Small Datasets: Overview and Benchmark
- **Authors:** L. Brigato, B. Barz, L. Iocchi, J. Denzler
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)
- **Arxiv link:** https://arxiv.org/abs/2212.12478
- **Pdf link:** https://arxiv.org/pdf/2212.12478
- **Abstract**
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos
- **Authors:** Joo Chan Lee, Daniel Rho, Jong Hwan Ko, Eunbyung Park
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12294
- **Pdf link:** https://arxiv.org/pdf/2212.12294
- **Abstract**
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to its entire frame, has recently emerged as an alternative method to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method for incorporating flow information into frame-wise representations to exploit the temporal redundancy across the frames in videos inspired by the standard video codecs. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, improving the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among the methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using the group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely-used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
## Keyword: RAW
### Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography
- **Authors:** Ilya Chugunov, Yuxuan Zhang, Felix Heide
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12324
- **Pdf link:** https://arxiv.org/pdf/2212.12324
- **Abstract**
Modern mobile burst photography pipelines capture and merge a short sequence of frames to recover an enhanced image, but often disregard the 3D nature of the scene they capture, treating pixel motion between images as a 2D aggregation problem. We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth. To this end, we devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion. Our plane plus depth model is trained end-to-end, and performs coarse-to-fine refinement by controlling which multi-resolution volume features the network has access to at what time during training. We validate the method experimentally, and demonstrate geometrically accurate depth reconstructions with no additional hardware or separate data pre-processing and pose-estimation steps.
## Keyword: raw image
There is no result
|
process
|
new submissions for mon dec keyword events fast event based optical flow estimation by triplet matching authors shintaro shiba yoshimitsu aoki guillermo gallego subjects computer vision and pattern recognition cs cv robotics cs ro signal processing eess sp arxiv link pdf link abstract event cameras are novel bio inspired sensors that offer advantages over traditional cameras low latency high dynamic range low power etc optical flow estimation methods that work on packets of events trade off speed for accuracy while event by event incremental methods have strong assumptions and have not been tested on common benchmarks that quantify progress in the field towards applications on resource constrained devices it is important to develop optical flow algorithms that are fast light weight and accurate this work leverages insights from neuroscience and proposes a novel optical flow estimation scheme based on triplet matching the experiments on publicly available benchmarks demonstrate its capability to handle complex scenes with comparable results as prior packet based algorithms in addition the proposed method achieves the fastest execution time khz on standard cpus as it requires only three events in estimation we hope that our research opens the door to real time incremental motion estimation methods and applications in real world scenarios keyword event camera fast event based optical flow estimation by triplet matching authors shintaro shiba yoshimitsu aoki guillermo gallego subjects computer vision and pattern recognition cs cv robotics cs ro signal processing eess sp arxiv link pdf link abstract event cameras are novel bio inspired sensors that offer advantages over traditional cameras low latency high dynamic range low power etc optical flow estimation methods that work on packets of events trade off speed for accuracy while event by event incremental methods have strong assumptions and have not been tested on common benchmarks that quantify progress in the field towards applications on resource constrained devices it is important to develop optical flow algorithms that are fast light weight and accurate this work leverages insights from neuroscience and proposes a novel optical flow estimation scheme based on triplet matching the experiments on publicly available benchmarks demonstrate its capability to handle complex scenes with comparable results as prior packet based algorithms in addition the proposed method achieves the fastest execution time khz on standard cpus as it requires only three events in estimation we hope that our research opens the door to real time incremental motion estimation methods and applications in real world scenarios keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp on calibrating semantic segmentation models analysis and an algorithm authors dongdong wang boqing gong liqiang wang subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract we study the problem of semantic segmentation calibration for image classification lots of existing solutions are proposed to alleviate model miscalibration of confidence however to date confidence calibration research on semantic segmentation is still limited we provide a systematic study on the calibration of semantic segmentation models and propose a simple yet effective approach first we find that model capacity crop size multi 
scale testing and prediction correctness have impact on calibration among them prediction correctness especially misprediction is more important to miscalibration due to over confidence next we propose a simple unifying and effective approach namely selective scaling by separating correct incorrect prediction for scaling and more focusing on misprediction logit smoothing then we study popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration we conduct extensive experiments with a variety of benchmarks on both in domain and domain shift calibration and show that selective scaling consistently outperforms other methods image classification with small datasets overview and benchmark authors l brigato b barz l iocchi j denzler subjects computer vision and pattern recognition cs cv artificial intelligence cs ai neural and evolutionary computing cs ne arxiv link pdf link abstract image classification with small datasets has been an active research area in the recent past however as research in this scope is still in its infancy two key ingredients are missing for ensuring reliable and truthful progress a systematic and extensive overview of the state of the art and a common benchmark to allow for objective comparisons between published methods this article addresses both issues first we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered second we propose a common benchmark that allows for an objective comparison of approaches it consists of five datasets spanning various domains e g natural images medical imagery satellite data and data types rgb grayscale multispectral we use this benchmark to re evaluate the standard cross entropy baseline and ten existing methods published between and at renowned venues surprisingly we find that thorough hyper parameter tuning on held out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years indeed only a single specialized method dating back to clearly wins our benchmark and outperforms the baseline classifier keyword image signal processing there is no result keyword image signal process there is no result keyword compression ffnerv flow guided frame wise neural representations for videos authors joo chan lee daniel rho jong hwan ko eunbyung park subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract neural fields also known as coordinate based or implicit neural representations have shown a remarkable capability of representing generating and manipulating various forms of signals for video representations however mapping pixel wise coordinates to rgb colors has shown relatively low compression performance and slow convergence and inference speed frame wise video representation which maps a temporal coordinate to its entire frame has recently emerged as an alternative method to represent videos improving compression rates and encoding speed while promising it has still failed to reach the performance of state of the art video compression algorithms in this work we propose ffnerv a novel method for incorporating flow information into frame wise representations to exploit the temporal redundancy across the frames in videos inspired by the standard video codecs furthermore we introduce a fully convolutional architecture enabled by one dimensional temporal grids improving the continuity of spatial features experimental 
results show that ffnerv yields the best performance for video compression and frame interpolation among the methods using frame wise representations or neural fields to reduce the model size even further we devise a more compact convolutional architecture using the group and pointwise convolutions with model compression techniques including quantization aware training and entropy coding ffnerv outperforms widely used standard video codecs h and hevc and performs on par with state of the art video compression algorithms keyword raw shakes on a plane unsupervised depth estimation from unstabilized photography authors ilya chugunov yuxuan zhang felix heide subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract modern mobile burst photography pipelines capture and merge a short sequence of frames to recover an enhanced image but often disregard the nature of the scene they capture treating pixel motion between images as a aggregation problem we show that in a long burst forty two megapixel raw frames captured in a two second sequence there is enough parallax information from natural hand tremor alone to recover high quality scene depth to this end we devise a test time optimization approach that fits a neural rgb d representation to long burst data and simultaneously estimates scene depth and camera motion our plane plus depth model is trained end to end and performs coarse to fine refinement by controlling which multi resolution volume features the network has access to at what time during training we validate the method experimentally and demonstrate geometrically accurate depth reconstructions with no additional hardware or separate data pre processing and pose estimation steps keyword raw image there is no result
| 1
|
3,341
| 6,475,028,307
|
IssuesEvent
|
2017-08-17 19:26:11
|
thewca/wca-regulations
|
https://api.github.com/repos/thewca/wca-regulations
|
closed
|
Criteria for new events
|
process regulations transparency
|
This comes up once in a while, and has a few details that distinguish it from [Regulation changes in general](https://github.com/cubing/wca-documents/issues/106).
I'm starting this to collect points we _may_ want to use to evaluate new puzzles.
- Community Interest
- Practicality
- Consistent with the [spirit and mission of the WCA](https://www.worldcubeassociation.org/about), as well as [Regulation 9a](https://www.worldcubeassociation.org/regulations/#9a) (though any of these _could_ change).
- Interesting to solve and watch (from [Sarah](http://www.speedsolving.com/forum/showthread.php?47301-The-quot-Why-I-hate-this-event-quot-and-quot-Why-this-event-should-be-added-quot-thread&p=974947&viewfull=1#post974947))
- A unique challenge compared to other events
Also see the addition of Skewb (#159, #102).
|
1.0
|
Criteria for new events - This comes up once in a while, and has a few details that distinguish it from [Regulation changes in general](https://github.com/cubing/wca-documents/issues/106).
I'm starting this to collect points we _may_ want to use to evaluate new puzzles.
- Community Interest
- Practicality
- Consistent with the [spirit and mission of the WCA](https://www.worldcubeassociation.org/about), as well as [Regulation 9a](https://www.worldcubeassociation.org/regulations/#9a) (though any of these _could_ change).
- Interesting to solve and watch (from [Sarah](http://www.speedsolving.com/forum/showthread.php?47301-The-quot-Why-I-hate-this-event-quot-and-quot-Why-this-event-should-be-added-quot-thread&p=974947&viewfull=1#post974947))
- A unique challenge compared to other events
Also see the addition of Skewb (#159, #102).
|
process
|
criteria for new events this comes up once in a while and has a few details that distinguish it from i m starting this to collect points we may want to use to evaluate new puzzles community interest practicality consistent with the as well as though any of these could change interesting to solve and watch from a unique challenge compared to other events also see the addition of skewb
| 1
|
252,907
| 21,638,897,687
|
IssuesEvent
|
2022-05-05 16:38:54
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Windows HostProcessContainer jobs fail due to mismatching COMPUTERNAME
|
sig/windows sig/testing kind/failing-test needs-triage
|
### Which jobs are failing?
Windows GCE jobs:
https://testgrid.k8s.io/sig-windows-gce#gce-windows-2019-containerd-master
https://testgrid.k8s.io/sig-windows-gce#gce-windows-20h2-containerd-master
### Which tests are failing?
[sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers should run as a process on the host/node
[sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers should support various volume mount types
### Since when has it been failing?
Since the jobs were reintroduced.
### Testgrid link
https://testgrid.k8s.io/sig-windows-gce#gce-windows-2019-containerd-master
### Reason for failure (if possible)
Windows COMPUTERNAMEs are typically limited to 15 characters [1], while the kubelet nodes spawned in the GCE test jobs have much longer names (e.g. ``e2e-e84a5b9f56-7f363-windows-node-group-0n9g``). The names might be truncated, and the resulting mismatch would cause the test failure.
[1] https://docs.microsoft.com/en-us/troubleshoot/windows-server/identity/naming-conventions-for-computer-domain-site-ou#netbios-computer-names
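A tiny illustration of the suspected mechanism (the exact truncation behaviour is an assumption from [1], not verified on the failing nodes):
```python
# Hypothetical illustration: Windows caps the NetBIOS computer name at 15
# characters, so comparing it against the full node name fails.
node_name = "e2e-e84a5b9f56-7f363-windows-node-group-0n9g"
computer_name = node_name[:15].upper()        # "E2E-E84A5B9F56-"

print(computer_name)
print(computer_name == node_name.upper())     # False -> the observed mismatch
```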
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig windows
/sig testing
|
2.0
|
Windows HostProcessContainer jobs fail due to mismatching COMPUTERNAME - ### Which jobs are failing?
Windows GCE jobs:
https://testgrid.k8s.io/sig-windows-gce#gce-windows-2019-containerd-master
https://testgrid.k8s.io/sig-windows-gce#gce-windows-20h2-containerd-master
### Which tests are failing?
[sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers should run as a process on the host/node
[sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers should support various volume mount types
### Since when has it been failing?
Since the jobs were reintroduced.
### Testgrid link
https://testgrid.k8s.io/sig-windows-gce#gce-windows-2019-containerd-master
### Reason for failure (if possible)
Windows COMPUTERNAMEs are typically limited to 15 characters [1], while the kubelet nodes spawned in the GCE test jobs have much longer names (e.g. ``e2e-e84a5b9f56-7f363-windows-node-group-0n9g``). The names might be truncated, and the resulting mismatch would cause the test failure.
[1] https://docs.microsoft.com/en-us/troubleshoot/windows-server/identity/naming-conventions-for-computer-domain-site-ou#netbios-computer-names
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig windows
/sig testing
|
non_process
|
windows hostprocesscontainer jobs fail due to mismatching computername which jobs are failing windows gce jobs which tests are failing hostprocess containers should run as a process on the host node hostprocess containers should support various volume mount types since when has it been failing since the jobs were reintroduced testgrid link reason for failure if possible windows computernames are typically limited to characters while the kubelet nodes spawned in the gce test jobs are much longer e g windows node group the names might be truncated resulting in the test failure anything else we need to know no response relevant sig s sig windows sig testing
| 0
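A minimal Python sketch of the name collision this record describes, assuming only the documented 15-character NetBIOS cap; the helper name and uppercase behavior are illustrative, not taken from the test code:
```python
def netbios_name(hostname: str) -> str:
    # NetBIOS computer names are capped at 15 characters, so Windows
    # truncates anything longer when it derives COMPUTERNAME.
    return hostname[:15].upper()

# Kubelet node names in the GCE jobs are far longer than 15 characters,
# so COMPUTERNAME no longer matches the node name the test expects.
node = "e2e-e84a5b9f56-7f363-windows-node-group-0n9g"
print(netbios_name(node))  # 'E2E-E84A5B9F56-' != node
```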
|
8,895
| 11,991,319,178
|
IssuesEvent
|
2020-04-08 08:08:36
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
GO:0033662 modulation by symbiont of host defense-related protein level (OBs or merge)
|
multi-species process
|
GO:0033662 modulation by symbiont of host defense-related protein level
is a readout of immune system impression, not really a process
4 annotations
|
1.0
|
GO:0033662 modulation by symbiont of host defense-related protein level (OBs or merge) -
GO:0033662 modulation by symbiont of host defense-related protein level
is a readout of immune system impression, not really a process
4 annotations
|
process
|
go modulation by symbiont of host defense related protein level obs or merge go modulation by symbiont of host defense related protein level is a readout of immune system impression not really a process annotations
| 1
|
8,675
| 11,808,788,791
|
IssuesEvent
|
2020-03-19 13:57:55
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
closed
|
Minikf 1.0 RC
|
kind/bug kind/process platform/minikf priority/p0
|
/kind process
Creating this bug to track creating a minikf 1.0 RC. The minikf page looks like the last update is from 201909
https://www.arrikto.com/minikf/
@vkoukis @yanniszark @elikatsis @jbottum do we have an ETA for a minikf 1.0 RC?
|
1.0
|
Minikf 1.0 RC - /kind process
Creating this bug to track creating a minikf 1.0 RC. The minikf page looks like the last update is from 201909
https://www.arrikto.com/minikf/
@vkoukis @yanniszark @elikatsis @jbottum do we have an ETA for a minikf 1.0 RC?
|
process
|
minikf rc kind process creating this bug to track creating a minikf rc the minikf page looks like the last update is from vkoukis yanniszark elikatsis jbottum do we have an eta for a minikf rc
| 1
|
133,876
| 5,216,029,183
|
IssuesEvent
|
2017-01-26 08:48:33
|
salesagility/SuiteCRM
|
https://api.github.com/repos/salesagility/SuiteCRM
|
reopened
|
Check box field do not Export value in module Reports
|
bug Fix Proposed Low Priority Pending Input
|
When I create a report (for example, on Leads) and list the field "do not call", the value is displayed on the screen correctly, but when I export to CSV or download the PDF, the value of this field is not shown.

and in csv file

|
1.0
|
Check box field do not Export value in module Reports - When I create a report (for example, on Leads) and list the field "do not call", the value is displayed on the screen correctly, but when I export to CSV or download the PDF, the value of this field is not shown.

and in csv file

|
non_process
|
check box field do not export value in module reports when i create a report for example leads and i list the field do not call the value is displayed on the screen correctly but when i export to csv or download pdf the value of this field is not shown and in csv file
| 0
|
687,150
| 23,515,415,251
|
IssuesEvent
|
2022-08-18 20:47:50
|
phylum-dev/cli
|
https://api.github.com/repos/phylum-dev/cli
|
closed
|
`phylum extension run --help` shows `phylum` help, not `run` help
|
bug medium priority
|
As stated in the title, the `--help`, `-h` and `help` output for `phylum extension run` is incorrect.
|
1.0
|
`phylum extension run --help` shows `phylum` help, not `run` help - As stated in the title, the `--help`, `-h` and `help` output for `phylum extension run` is incorrect.
|
non_process
|
phylum extension run help shows phylum help not run help as stated in the title the help h and help output for phylum extension run is incorrect
| 0
|
191,998
| 6,845,455,417
|
IssuesEvent
|
2017-11-13 08:19:45
|
vmware/harbor
|
https://api.github.com/repos/vmware/harbor
|
closed
|
Notary repo moving
|
priority/high
|
Hi. Notary has been accepted in to CNCF and will be moving from its current location to `GitHub.com/theupdateframework/notary`. As a downstream consumer you may want to update any references to use the new location.
I'm aiming to complete the move in the next 48 hours including updating all the internal imports. A redirect will be in place and you appear to have pinned to a specific commit for your go vendoring. However I noticed in [this](https://github.com/vmware/harbor/blob/ddaad98526dbd708ee95d09a04ea0a4f278985a1/make/photon/notary/builder_public#L38) location you appear to be cloning the head of master before building the server and signer binaries, which is likely to break after the move due to the internal imports changing from `github.com/docker/notary/...` to `github.com/theupdateframework/notary/...`
|
1.0
|
Notary repo moving - Hi. Notary has been accepted in to CNCF and will be moving from its current location to `GitHub.com/theupdateframework/notary`. As a downstream consumer you may want to update any references to use the new location.
I'm aiming to complete the move in the next 48 hours including updating all the internal imports. A redirect will be in place and you appear to have pinned to a specific commit for your go vendoring. However I noticed in [this](https://github.com/vmware/harbor/blob/ddaad98526dbd708ee95d09a04ea0a4f278985a1/make/photon/notary/builder_public#L38) location you appear to be cloning the head of master before building the server and signer binaries, which is likely to break after the move due to the internal imports changing from `github.com/docker/notary/...` to `github.com/theupdateframework/notary/...`
|
non_process
|
notary repo moving hi notary has been accepted in to cncf and will be moving from its current location to github com theupdateframework notary as a downstream consumer you may want to update any references to use the new location i m aiming to complete the move in the next hours including updating all the internal imports a redirect will be in place and you appear to have pinned to a specific commit for your go vendoring however i noticed in location you appear to be cloning the head of master before building the server and signer binaries which is likely to break after the move due to the internal imports changing from github com docker notary to github com theupdateframework notary
| 0
|
406,703
| 11,901,707,004
|
IssuesEvent
|
2020-03-30 12:54:18
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
bestbuy.ca - design is broken
|
browser-mobile-safari form-v2-experiment ml-needsdiagnosis-false os-ios priority-normal
|
<!-- @browser: Mobile Safari 13.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 13_4 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Mobile/15E148 Safari/604.1 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/50831 -->
<!-- @extra_labels: form-v2-experiment -->
**URL**: https://bestbuy.ca
**Browser / Version**: Mobile Safari 13.1
**Operating System**: iOS 13.4
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items are misaligned
**Steps to Reproduce**:
Site frequently freezes and becomes unresponsive
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
bestbuy.ca - design is broken - <!-- @browser: Mobile Safari 13.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 13_4 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Mobile/15E148 Safari/604.1 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/50831 -->
<!-- @extra_labels: form-v2-experiment -->
**URL**: https://bestbuy.ca
**Browser / Version**: Mobile Safari 13.1
**Operating System**: iOS 13.4
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items are misaligned
**Steps to Reproduce**:
Site frequently freezes and becomes unresponsive
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
bestbuy ca design is broken url browser version mobile safari operating system ios tested another browser no problem type design is broken description items are misaligned steps to reproduce site frequently freezes and becomes unresponsive browser configuration none from with ❤️
| 0
|
6,173
| 9,082,516,548
|
IssuesEvent
|
2019-02-17 13:05:46
|
FACK1/ReservationSystem
|
https://api.github.com/repos/FACK1/ReservationSystem
|
opened
|
Details and Approve Form "frontend"
|
inProcess
|
- [ ] basic details/approve Form.
- [ ] connect the form with the server to get the event details.
- [ ] Form style/css
|
1.0
|
Details and Approve Form "frontend" - - [ ] basic details/approve Form.
- [ ] connect the form with the server to get the event details.
- [ ] Form style/css
|
process
|
details and approve form frontend basic details approve form connect the form with the server to get the event details form style css
| 1
|
291,038
| 8,916,455,891
|
IssuesEvent
|
2019-01-19 16:46:03
|
blogtutor/blog-tutor-support
|
https://api.github.com/repos/blogtutor/blog-tutor-support
|
closed
|
Add Free Disk Space & Server Load to Admin Bar
|
enhancement high priority
|
Should only be visible if is_nerdpress() is true.
Show in as compact a form as possible; maybe something like this:
> Load: 0.1 0.2 0.5 Free Disk: 5GB
Bonus points if it adds an extra warning (maybe red & bold?) for either of these:
1. Free Disk - Less disk space than the uploads folder size + 10%.
2. Load: Any load value is higher than 75% of the number of CPU cores. (so, if it's a 4-core CPU, a load of 3 or higher would qualify. A 1-core CPU would qualify at 0.75.)
|
1.0
|
Add Free Disk Space & Server Load to Admin Bar - Should only be visible if is_nerdpress() is true.
Show in as compact a form as possible; maybe something like this:
> Load: 0.1 0.2 0.5 Free Disk: 5GB
Bonus points if it adds an extra warning (maybe red & bold?) for either of these:
1. Free Disk - Less disk space than the uploads folder size + 10%.
2. Load: Any load value is higher than 75% of the number of CPU cores. (so, if it's a 4-core CPU, a load of 3 or higher would qualify. A 1-core CPU would qualify at 0.75.)
|
non_process
|
add free disk space server load to admin bar should only be visible if is nerdpress is true show in as compact a form as possible maybe something like this load free disk bonus points if it adds an extra warning maybe red bold for either of these free disk less disk space than the uploads folder size load any load value is higher than the of the number of cpu cores so if it s a core cpu a load of or higher would qualify a core cpu would qualify at
| 0
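A short sketch of the two warning thresholds this request spells out, written in Python for brevity (the plugin itself is WordPress/PHP, and the function names here are invented):
```python
import os

def disk_warning(free_bytes: int, uploads_bytes: int) -> bool:
    # Warn when free space drops below the uploads folder size plus 10%.
    return free_bytes < uploads_bytes * 1.10

def load_warning(load_1m: float) -> bool:
    # Warn when load exceeds 75% of the core count
    # (3.0 on a 4-core CPU, 0.75 on a single core).
    return load_1m > 0.75 * (os.cpu_count() or 1)
```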
|
982
| 3,439,047,663
|
IssuesEvent
|
2015-12-14 06:50:58
|
spootTheLousy/saguaro
|
https://api.github.com/repos/spootTheLousy/saguaro
|
closed
|
Unlocking threads needs to rebuild HTML
|
Bug: Minor Post/text processing REVISIT
|
Now that the postform is conditionally sent when building the page, when a thread is locked/unlocked, the page needs to be rebuilt.
|
1.0
|
Unlocking threads needs to rebuild HTML - Now that the postform is conditionally sent when building the page, when a thread is locked/unlocked, the page needs to be rebuilt.
|
process
|
unlocking threads needs to rebuild html now that the postform is conditionally sent when building the page when a thread is locked unlocked the page needs to be rebuilt
| 1
|
21,005
| 27,880,368,578
|
IssuesEvent
|
2023-03-21 18:54:34
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Looking for clarification on how $(Rev:r) pipeline variable works.
|
doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
|
The doc says:
"In Azure DevOps $(Rev:r) is a special variable format that only works in the build number field. When a build is completed, if nothing else in the build number has changed, the Rev integer value increases by one."
Two questions:
1. So this literally only works in the top-level "name" property of the Yaml pipeline? I can't use it anywhere else to construct a build number?
2. It says it increments only if nothing else in the version number changes. Ok, but does it reset when other parts of the build number DO change? I want to generate build numbers that look like this:
1.0.0315.0
1.0.0315.1
1.0.0315.2
1.0.0316.0
1.0.0317.0
1.0.0317.1
etc.
Is there a way to do this with the $(Rev:r) variable?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/run-number.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Looking for clarification on how $(Rev:r) pipeline variable works. -
The doc says:
"In Azure DevOps $(Rev:r) is a special variable format that only works in the build number field. When a build is completed, if nothing else in the build number has changed, the Rev integer value increases by one."
Two questions:
1. So this literally only works in the top-level "name" property of the Yaml pipeline? I can't use it anywhere else to construct a build number?
2. It says it increments only if nothing else in the version number changes. Ok, but does it reset when other parts of the build number DO change? I want to generate build numbers that look like this:
1.0.0315.0
1.0.0315.1
1.0.0315.2
1.0.0316.0
1.0.0317.0
1.0.0317.1
etc.
Is there a way to do this with the $(Rev:r) variable?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/run-number.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
looking for clarification on how rev r pipeline variable works the doc says in azure devops rev r is a special variable format that only works in the build number field when a build is completed if nothing else in the build number has changed the rev integer value increases by one two questions so this literally only works in the top level name property of the yaml pipeline i can t use it anywhere else to construct a build number it says it increments only if nothing else in the version number changes ok but does it reset when other parts of the build number do change i want to generate build numbers that look like this etc is there a way to do this with the rev r variable document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
| 1
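Per the documentation the record quotes, $(Rev:r) is honored only in the pipeline's top-level name field, and the counter does reset once any other part of the run number changes, restarting at 1 rather than the 0 the poster's example assumes. A toy Python simulation of those semantics (illustrative only, not Azure DevOps code):
```python
def next_run_number(template: str, history: list[str]) -> str:
    # 'template' is the run number with $(Rev:r) still unexpanded, e.g.
    # "1.0.0315.$(Rev:r)". Rev counts prior runs sharing the same prefix,
    # so it increments while the prefix is stable and resets when it changes.
    prefix = template.replace("$(Rev:r)", "")
    rev = sum(1 for h in history if h.startswith(prefix)) + 1
    return template.replace("$(Rev:r)", str(rev))

history = ["1.0.0315.1", "1.0.0315.2"]
print(next_run_number("1.0.0315.$(Rev:r)", history))  # 1.0.0315.3
print(next_run_number("1.0.0316.$(Rev:r)", history))  # 1.0.0316.1 (reset)
```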
|
10,647
| 13,446,621,493
|
IssuesEvent
|
2020-09-08 13:13:08
|
chavarera/python-mini-projects
|
https://api.github.com/repos/chavarera/python-mini-projects
|
closed
|
Generate wordclouds images for Wikipedia article
|
Assigned Automation Image-processing beginner boring-stuffs good first issue
|
**Problem Statement**
Create wordclouds images for Wikipedia article
**Steps**
- [ ] Accept User input From user
- [ ] Search related article in Wikipedia
- [ ] use response of Wikipedia to generate wordclouds images.
Sample Output

|
1.0
|
Generate wordclouds images for Wikipedia article - **Problem Statement**
Create wordclouds images for Wikipedia article
**Steps**
- [ ] Accept User input From user
- [ ] Search related article in Wikipedia
- [ ] use response of Wikipedia to generate wordclouds images.
Sample Output

|
process
|
generate wordclouds images for wikipedia article problem statement create wordclouds images for wikipedia article steps accept user input from user search related article in wikipedia use response of wikipedia to generate wordclouds images sample output
| 1
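A minimal sketch of the three listed steps using the common `wikipedia` and `wordcloud` PyPI packages; the record does not prescribe libraries, so that choice is an assumption:
```python
import wikipedia                  # pip install wikipedia
from wordcloud import WordCloud   # pip install wordcloud

def article_wordcloud(query: str, out_path: str = "wordcloud.png") -> None:
    title = wikipedia.search(query)[0]    # search for a related article
    text = wikipedia.page(title).content  # use the article text as input
    WordCloud(width=800, height=400).generate(text).to_file(out_path)

if __name__ == "__main__":
    article_wordcloud(input("Topic: "))   # accept user input
```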
|
236,247
| 18,076,604,323
|
IssuesEvent
|
2021-09-21 10:35:08
|
LiiLk/nvidia-symfony
|
https://api.github.com/repos/LiiLk/nvidia-symfony
|
closed
|
Création du Discord
|
documentation
|
**Création du Discord (creation of the Discord server)
In order to communicate remotely, alongside TeamViewer.
Helps the group work with agility.**
|
1.0
|
Création du Discord - **Création du Discord (creation of the Discord server)
In order to communicate remotely, alongside TeamViewer.
Helps the group work with agility.**
|
non_process
|
création du discord création du discord in order to communicate remotely alongside teamviewer helps the group work with agility
| 0
|
20,242
| 13,773,532,327
|
IssuesEvent
|
2020-10-08 03:55:14
|
microsoft/react-native-windows
|
https://api.github.com/repos/microsoft/react-native-windows
|
closed
|
Error C1090: PDB API call failed, error code '24'
|
Area: Test Infrastructure bug must-have
|
CI sometimes runs into this linker bug:
Error C1090: PDB API call failed, error code '24':
DevCom: https://developercommunity.visualstudio.com/content/problem/1091375/fatal-error-c1090-pdb-api-call-failed-error-code-2.html
See example run: https://dev.azure.com/ms/react-native-windows/_build/results?buildId=107861&view=logs&j=3addbcfd-96b7-519e-7adc-4026c32cd4b1&t=2257ee5c-8718-5eb4-b280-b8b5aa006800&l=7340
I have a thread with the linker dev but filing here for tracking.
|
1.0
|
Error C1090: PDB API call failed, error code '24' - CI sometimes runs into this linker bug:
Error C1090: PDB API call failed, error code '24':
DevCom: https://developercommunity.visualstudio.com/content/problem/1091375/fatal-error-c1090-pdb-api-call-failed-error-code-2.html
See example run: https://dev.azure.com/ms/react-native-windows/_build/results?buildId=107861&view=logs&j=3addbcfd-96b7-519e-7adc-4026c32cd4b1&t=2257ee5c-8718-5eb4-b280-b8b5aa006800&l=7340
I have a thread with the linker dev but filing here for tracking.
|
non_process
|
error pdb api call failed error code ci sometimes runs into this linker bug error pdb api call failed error code devcom see example run i have a thread with the linker dev but filing here for tracking
| 0
|
20,753
| 27,487,306,724
|
IssuesEvent
|
2023-03-04 07:31:37
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Error if text is selected when clicking Add/Edit Citation
|
Word Processor Integration
|
https://forums.zotero.org/discussion/comment/429709/#Comment_429709
Currently results in "Field reference lost" error (at least on macOS). If we can detect this, we should either unselect and just put the cursor at the end or show a clearer error if we can't unselect.
|
1.0
|
Error if text is selected when clicking Add/Edit Citation - https://forums.zotero.org/discussion/comment/429709/#Comment_429709
Currently results in "Field reference lost" error (at least on macOS). If we can detect this, we should either unselect and just put the cursor at the end or show a clearer error if we can't unselect.
|
process
|
error if text is selected when clicking add edit citation currently results in field reference lost error at least on macos if we can detect this we should either unselect and just put the cursor at the end or show a clearer error if we can t unselect
| 1
|
21
| 2,496,262,675
|
IssuesEvent
|
2015-01-06 18:14:34
|
vivo-isf/vivo-isf-ontology
|
https://api.github.com/repos/vivo-isf/vivo-isf-ontology
|
closed
|
Hematopoiesis
|
biological_process imported
|
_From [rgar...@eagle-i.org](https://code.google.com/u/111247205719752845822/) on March 25, 2013 08:49:17_
**** Use the form below to request a new term ****
**** Scroll down to see a term request example ****
Please indicate the label for the proposed term:
Hematopoiesis
Please provide a textual definition (with source):
the formation of blood cellular components (https://en.wikipedia.org/wiki/Haematopoiesis)
Please add an example of usage for proposed term:
Please provide any additional optional information below. (e.g. desired asserted SuperClass in ERO hierarchy or Reference Branch)
[ ] Instrument
[X] Biological process
[ ] Disease
[ ] Human studies
[ ] Instrument
[ ] Organism
[ ] Reagent
[ ] Software
[ ] Technique
[ ] Organization
Additional info:
*** Term request example ****
Please indicate the label for the proposed term: four-terminal resistance sensor
Please provide a textual definition (with source): "Four-terminal resistance sensors are electrical impedance measuring instruments that use separate pairs of current-carrying and voltage-sensing electrodes to make accurate measurements that can be used to compute a material's electrical resistance." http://en.wikipedia.org/wiki/Four-terminal_sensing
Please add an example of usage for proposed term: Measuring the inherent (per square) resistance of doped silicon.
Please provide any additional optional information below. (e.g. desired asserted SuperClass in ERO hierarchy or Reference Branch)
[X] Instrument
[ ] Biological process
[ ] Disease
[ ] Human studies
[ ] Instrument
[ ] Organism
[ ] Reagent
[ ] Software
[ ] Technique
[ ] Organization
Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe
_Original issue: http://code.google.com/p/eagle-i/issues/detail?id=202_
|
1.0
|
Hematopoiesis - _From [rgar...@eagle-i.org](https://code.google.com/u/111247205719752845822/) on March 25, 2013 08:49:17_
**** Use the form below to request a new term ****
**** Scroll down to see a term request example ****
Please indicate the label for the proposed term:
Hematopoiesis
Please provide a textual definition (with source):
the formation of blood cellular components (https://en.wikipedia.org/wiki/Haematopoiesis)
Please add an example of usage for proposed term:
Please provide any additional optional information below. (e.g. desired asserted SuperClass in ERO hierarchy or Reference Branch)
[ ] Instrument
[X] Biological process
[ ] Disease
[ ] Human studies
[ ] Instrument
[ ] Organism
[ ] Reagent
[ ] Software
[ ] Technique
[ ] Organization
Additional info:
*** Term request example ****
Please indicate the label for the proposed term: four-terminal resistance sensor
Please provide a textual definition (with source): "Four-terminal resistance sensors are electrical impedance measuring instruments that use separate pairs of current-carrying and voltage-sensing electrodes to make accurate measurements that can be used to compute a material's electrical resistance." http://en.wikipedia.org/wiki/Four-terminal_sensing
Please add an example of usage for proposed term: Measuring the inherent (per square) resistance of doped silicon.
Please provide any additional optional information below. (e.g. desired asserted SuperClass in ERO hierarchy or Reference Branch)
[X] Instrument
[ ] Biological process
[ ] Disease
[ ] Human studies
[ ] Instrument
[ ] Organism
[ ] Reagent
[ ] Software
[ ] Technique
[ ] Organization
Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe
_Original issue: http://code.google.com/p/eagle-i/issues/detail?id=202_
|
process
|
hematopoiesis from on march use the form below to request a new term scroll down to see a term request example please indicate the label for the proposed term hematopoiesis please provide a textual definition with source the formation of blood cellular components please add an example of usage for proposed term please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info term request example please indicate the label for the proposed term four terminal resistance sensor please provide a textual definition with source four terminal resistance sensors are electrical impedance measuring instruments that use separate pairs of current carrying and voltage sensing electrodes to make accurate measurements that can be used to compute a material s electrical resistance please add an example of usage for proposed term measuring the inherent per square resistance of doped silicon please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info aka sensors wire sensor or point probe original issue
| 1
|
2,901
| 5,887,958,925
|
IssuesEvent
|
2017-05-17 08:55:53
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
multi-organism process term tweaks
|
multiorganism processes Other term-related request PARL-UCL
|
I've been using the MOP terms a bit recently in annotation, and have a few suggested minor edits:
1. multi-organism cellular localization ; GO:1902581 is missing the parent: multi-organism cellular process ; GO:0044764
2. Definition of 'modulation by host of viral exo-alpha-sialidase activity ; GO:0044866' needs tweaking so it refers to 'a change in VIRAL exo-alpha-sialidase activity' not 'a change in SYMBIONT exo-alpha-sialidase activity'.
3. New synonyms for 'symbiont-containing vacuole' ; GO:0020003
pathogen-occupied vacuole [PMID:24034612]
narrow synonym: Salmonella-containing vacuole [PMID:24034612, PMID:10449405 ]
SCV [PMID:24034612, PMID:10449405]
bacterium-containing vacuole [PMID:22042847]
4. Not sure I understand these two terms:
movement in symbiont environment ; GO:0052193
movement on or near symbiont ; GO:0052194
I wonder if they were just created to have complete host-symbiont pairs, and should in fact be obsoleted. They have 0 annotations.
Thanks!
|
1.0
|
multi-organism process term tweaks - I've been using the MOP terms a bit recently in annotation, and have a few suggested minor edits:
1. multi-organism cellular localization ; GO:1902581 is missing the parent: multi-organism cellular process ; GO:0044764
2. Definition of 'modulation by host of viral exo-alpha-sialidase activity ; GO:0044866' needs tweaking so it refers to 'a change in VIRAL exo-alpha-sialidase activity' not 'a change in SYMBIONT exo-alpha-sialidase activity'.
3. New synonyms for 'symbiont-containing vacuole' ; GO:0020003
pathogen-occupied vacuole [PMID:24034612]
narrow synonym: Salmonella-containing vacuole [PMID:24034612, PMID:10449405 ]
SCV [PMID:24034612, PMID:10449405]
bacterium-containing vacuole [PMID:22042847]
4. Not sure I understand these two terms:
movement in symbiont environment ; GO:0052193
movement on or near symbiont ; GO:0052194
I wonder if they were just created to have complete host-symbiont pairs, and should in fact be obsoleted. They have 0 annotations.
Thanks!
|
process
|
multi organism process term tweaks i ve been using the mop terms a bit recently in annotation and have a few suggested minor edits multi organism cellular localization go is missing the parent multi organism cellular process go definition of modulation by host of viral exo alpha sialidase activity go needs tweaking so it refers to a change in viral exo alpha sialidase activity not a change in symbiont exo alpha sialidase activity new synonyms for symbiont containing vacuole go pathogen occupied vacuole narrow synonym salmonella containing vacuole scv bacterium containing vacuole not sure i understand these two terms movement in symbiont environment go movement on or near symbiont go i wonder if they were just created to have complete host symbiont pairs and should in fact be obsoleted they have annotations thanks
| 1
|
26,515
| 26,903,669,382
|
IssuesEvent
|
2023-02-06 17:20:38
|
DCS-LCSR/SignStream3
|
https://api.github.com/repos/DCS-LCSR/SignStream3
|
opened
|
Can not cancel "(Re)name Selected Participant"
|
enhancement severity LOW usability concern
|
With the database window open and a segment tier selected...In the main menu, under Macro Unit->Segment Tier->(Re)name Selected Participant...
A window opens, but it can not be closed by the "x" of the window, nor is there a cancel box.
|
True
|
Can not cancel "(Re)name Selected Participant" - With the database window open and a segment tier selected...In the main menu, under Macro Unit->Segment Tier->(Re)name Selected Participant...
A window opens, but it can not be closed by the "x" of the window, nor is there a cancel box.
|
non_process
|
can not cancel re name selected participant with the database window open and a segment tier selected in the main menu under macro unit segment tier re name selected participant a window opens but it can not be closed by the x of the window nor is there a cancel box
| 0
|
4,234
| 7,186,858,214
|
IssuesEvent
|
2018-02-02 01:29:48
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Should handle default function and constructor better
|
monitors-all status-inprocess type-bug
|
I don't really handle the constructor or the default function correctly in the monitor code. Look for this code:
if (!func.name.empty())
functions[functions.getCount()] = func;
If you remove this, it will show a new line in the grabABI tests, but they are not correct.
|
1.0
|
Should handle default function and constructor better - I don't really handle the constructor or the default function correctly in the monitor code. Look for this code:
if (!func.name.empty())
functions[functions.getCount()] = func;
If you remove this, it will show a new line in the grabABI tests, but they are not correct.
|
process
|
should handle default function and constructor better i don t really handle the constructor or the default function correctly in the monitor code look for this code if func name empty functions func if you remove this it will show a new line in the grababi tests but they are not correct
| 1
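The quoted C++ filter keeps an ABI entry only when its name is non-empty, yet the constructor and the default (fallback) function are exactly the entries with no name, so both are dropped. A hedged Python restatement of that flaw (field names assumed to mirror the usual ABI JSON, not the quickBlocks internals):
```python
def keep_entry(entry: dict) -> bool:
    # Constructors and the default (fallback) function carry no name in
    # the ABI, so a filter on a non-empty name silently discards both --
    # the behavior the issue describes.
    return bool(entry.get("name"))

abi = [{"type": "constructor", "name": ""},
       {"type": "function", "name": "transfer"}]
print([e["type"] for e in abi if keep_entry(e)])  # only 'function' survives
```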
|
9,243
| 12,270,337,744
|
IssuesEvent
|
2020-05-07 15:21:19
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Include 3rd party licenses and source in images
|
P1 enhancement process
|
**Problem**
Requirements for Google Cloud Marketplace inclusion mandate that Docker images include the licenses for all 3rd party dependencies (including transitive dependencies). Additionally, for GPL and LGPL we may have to include the source code of dependencies. We're confirming if we have to for LGPL since we don't statically link. For GPL, all our dependencies have the classpath exception so we're confirming if we still have to.
Above applies to Java. Since NPM is an approved package manager and already includes license text in node_modules downloaded by default, we don't have to do either.
**Solution**
We can either do it manually or automatically:
- Manual: Use maven-license-plugin to download licenses, check into a well known path and configure jib to include as a resource
- Automated: Hook maven-license-plugin to appropriate maven phase to download license before building image, configure jib to include build time path as resource
**Alternatives**
None
**Additional Context**
|
1.0
|
Include 3rd party licenses and source in images - **Problem**
Requirements for Google Cloud Marketplace inclusion mandate that Docker images include the licenses for all 3rd party dependencies (including transitive dependencies). Additionally, for GPL and LGPL we may have to include the source code of dependencies. We're confirming if we have to for LGPL since we don't statically link. For GPL, all our dependencies have the classpath exception so we're confirming if we still have to.
Above applies to Java. Since NPM is an approved package manager and already includes license text in node_modules downloaded by default, we don't have to do either.
**Solution**
We can either do it manually or automatically:
- Manual: Use maven-license-plugin to download licenses, check into a well known path and configure jib to include as a resource
- Automated: Hook maven-license-plugin to appropriate maven phase to download license before building image, configure jib to include build time path as resource
**Alternatives**
None
**Additional Context**
|
process
|
include party licenses and source in images problem requirements for google cloud marketplace inclusion mandate that docker images include the licenses for all party dependencies including transitive dependencies additionally for gpl and lgpl we may have to include the source code of dependencies we re confirming if we have to for lgpl since we don t statically link for gpl all our dependencies have the classpath exception so we re confirming if we still have to above applies to java since npm is an approved package manager and already includes license text in node modules downloaded by default we don t have to do either solution we can either do it manually or automatically manual use maven license plugin to download licenses check into a well known path and configure jib to include as a resource automated hook maven license plugin to appropriate maven phase to download license before building image configure jib to include build time path as resource alternatives none additional context
| 1
|
290,018
| 25,031,227,608
|
IssuesEvent
|
2022-11-04 12:33:58
|
TestIntegrations/TestForwarding
|
https://api.github.com/repos/TestIntegrations/TestForwarding
|
opened
|
java.lang.RuntimeException: Unable to start activity...
|
Yousef test new h n e rtrt
|
**Title:** java.lang.RuntimeException: Unable to start activity ComponentInfo{com.sample.crashy/com.sample.crashy.landing.LandingActivity}: java.lang.RuntimeException: From crashy with love (LandingViewModel.kt:8)
**Number:** 87
**Type:** Crash
**Status:** New
**Reported At:** 2022-03-08 15:17:20 UTC
**Email:**
**Private URL:** https://dashboard.instabug.com/applications/abanoub-android-beta/beta/crashes/87?utm_source=github&utm_medium=integrations
**Categories:**
**App Version:** 1.0 (1)
**Current View:**
**Device:** Google Android SDK built for x86
**Location:** Cairo, Egypt
**Duration:** 2
**Screen Size:** 1440x2560
**Density:** xxhdpi
**User Data:**
**User Steps:**
```
15:17:20 com.sample.crashy.splash.SplashActivity was paused.
15:17:20 com.sample.crashy.landing.LandingActivity was created.
```
**Instabug Log:**
```
```
**Console Log:**
```
15:17:20 V/IB-UserManager(24220): getIdentifiedUserEmail: empty-email
15:17:20 V/IB-UserManager(24220): getIdentifiedUsername: empty_username
15:17:20 D/IB-InstabugNetworkLogDbHelper(24220): retrieveNetworkLogs
15:17:20 V/IB-UserDbHelper(24220): retrieve
15:17:20 D/IB-InstabugUncaughtExceptionHandler(24220): InstabugUncaughtExceptionHandler getReport
15:17:20 D/IB-API Checker(24220): Instabug.getTags
15:17:20 W/API Checker(24220): Threading violation: {Instabug.getTags} should only be called from a background thread, but was called from main thread.
15:17:20 D/IB-API Checker(24220): Instabug.getTags
15:17:20 W/API Checker(24220): Threading violation: {Instabug.getTags} should only be called from a background thread, but was called from main thread.
15:17:20 V/IB-UserDbHelper(24220): retrieve
```
**Locale:** en
|
1.0
|
java.lang.RuntimeException: Unable to start activity... - **Title:** java.lang.RuntimeException: Unable to start activity ComponentInfo{com.sample.crashy/com.sample.crashy.landing.LandingActivity}: java.lang.RuntimeException: From crashy with love (LandingViewModel.kt:8)
**Number:** 87
**Type:** Crash
**Status:** New
**Reported At:** 2022-03-08 15:17:20 UTC
**Email:**
**Private URL:** https://dashboard.instabug.com/applications/abanoub-android-beta/beta/crashes/87?utm_source=github&utm_medium=integrations
**Categories:**
**App Version:** 1.0 (1)
**Current View:**
**Device:** Google Android SDK built for x86
**Location:** Cairo, Egypt
**Duration:** 2
**Screen Size:** 1440x2560
**Density:** xxhdpi
**User Data:**
**User Steps:**
```
15:17:20 com.sample.crashy.splash.SplashActivity was paused.
15:17:20 com.sample.crashy.landing.LandingActivity was created.
```
**Instabug Log:**
```
```
**Console Log:**
```
15:17:20 V/IB-UserManager(24220): getIdentifiedUserEmail: empty-email
15:17:20 V/IB-UserManager(24220): getIdentifiedUsername: empty_username
15:17:20 D/IB-InstabugNetworkLogDbHelper(24220): retrieveNetworkLogs
15:17:20 V/IB-UserDbHelper(24220): retrieve
15:17:20 D/IB-InstabugUncaughtExceptionHandler(24220): InstabugUncaughtExceptionHandler getReport
15:17:20 D/IB-API Checker(24220): Instabug.getTags
15:17:20 W/API Checker(24220): Threading violation: {Instabug.getTags} should only be called from a background thread, but was called from main thread.
15:17:20 D/IB-API Checker(24220): Instabug.getTags
15:17:20 W/API Checker(24220): Threading violation: {Instabug.getTags} should only be called from a background thread, but was called from main thread.
15:17:20 V/IB-UserDbHelper(24220): retrieve
```
**Locale:** en
|
non_process
|
java lang runtimeexception unable to start activity title java lang runtimeexception unable to start activity componentinfo com sample crashy com sample crashy landing landingactivity java lang runtimeexception from crashy with love landingviewmodel kt number type crash status new reported at utc email private url categories app version current view device google android sdk built for location cairo egypt duration screen size density xxhdpi user data user steps com sample crashy splash splashactivity was paused com sample crashy landing landingactivity was created instabug log console log v ib usermanager getidentifieduseremail empty email v ib usermanager getidentifiedusername empty username d ib instabugnetworklogdbhelper retrievenetworklogs v ib userdbhelper retrieve d ib instabuguncaughtexceptionhandler instabuguncaughtexceptionhandler getreport d ib api checker instabug gettags w api checker threading violation instabug gettags should only be called from a background thread but was called from main thread d ib api checker instabug gettags w api checker threading violation instabug gettags should only be called from a background thread but was called from main thread v ib userdbhelper retrieve locale en
| 0
|
6,780
| 9,915,342,641
|
IssuesEvent
|
2019-06-28 16:35:59
|
GroceriStar/fetch-constants
|
https://api.github.com/repos/GroceriStar/fetch-constants
|
closed
|
#### [Groceristar][Ingredients][methods]
|
in-process
|
By using names on methods from this [page](https://groceristar.github.io/documentation/docs/groceristar-website-methods-list/ingredients-router/ingredients-router.html)
In order to make it better, we'll create set of constants, each for a different method.
Example:
*Create or find at db new ingredient and attach it to grocery list*
will become `export const FIND_OR_CREATE_AND_ATTACH_TO_GROCERY = "FIND_OR_CREATE_AND_ATTACH_TO_GROCERY";`
|
1.0
|
#### [Groceristar][Ingredients][methods] - By using names on methods from this [page](https://groceristar.github.io/documentation/docs/groceristar-website-methods-list/ingredients-router/ingredients-router.html)
In order to make it better, we'll create set of constants, each for a different method.
Example:
*Create or find at db new ingredient and attach it to grocery list*
will become `export const FIND_OR_CREATE_AND_ATTACH_TO_GROCERY = "FIND_OR_CREATE_AND_ATTACH_TO_GROCERY";`
|
process
|
by using names on methods from this in order to make it better we ll create set of constants each for a different method example create or find at db new ingredient new ingredient and attach it to grocery list will became export const find or create and attach to grocery find or create and attach to grocery
| 1
|
540,654
| 15,815,486,118
|
IssuesEvent
|
2021-04-05 11:24:39
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
xnxx.com - see bug description
|
browser-fixme os-mac priority-important
|
<!-- @browser: firefox -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/69739 -->
**URL**: https://xnxx.com
**Browser / Version**: firefox
**Operating System**: Mac OS X 11.2.3
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: When you hover over the video, it doesn't play as it does on Chrome
**Steps to Reproduce**:
The video highlights don't play in Firefox, but they do in all other browsers (Chrome and Safari).
In Firefox it only shows images, which is not very helpful.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
xnxx.com - see bug description - <!-- @browser: firefox -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/69739 -->
**URL**: https://xnxx.com
**Browser / Version**: firefox
**Operating System**: Mac OS X 11.2.3
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: When you hover over the video, it doesn't play as it does on Chrome
**Steps to Reproduce**:
The video highlights don't play in Firefox, but they do in all other browsers (Chrome and Safari).
In Firefox it only shows images, which is not very helpful.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
xnxx com see bug description url browser version firefox operating system mac os x tested another browser yes chrome problem type something else description when you hover over the video it doesn t play as it does on chrome steps to reproduce the video highlights doesn t play in firefox but in all other browsers chrome and safari in firefox it only shows images which is not very helpful browser configuration none from with ❤️
| 0
|
210,611
| 23,761,279,746
|
IssuesEvent
|
2022-09-01 09:04:49
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
Reindex from Remote using API Key
|
>enhancement :Security/Authentication :Distributed/Reindex Team:Distributed Team:Security
|
While attempting to use `_reindex` from remote, I ran into the issue that I cannot use it with an Elasticsearch API Key.
This is a far less generic request compared to #58396.
|
True
|
Reindex from Remote using API Key - While attempting to use `_reindex` from remote, I ran into the issue that I cannot use it with an Elasticsearch API Key.
This is a far less generic request compared to #58396.
|
non_process
|
reindex from remote using api key while attempting to use reindex from remote i ran into the issue that i cannot use it with an elasticsearch api key this is a far less generic request compared to
| 0
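For context, a sketch of the request shape remote reindex accepted at the time of the report: only `username`/`password` under `source.remote`, with no API-key field, which is the gap the issue asks to close. Hosts, credentials, and index names below are invented, and Python is used only as a convenient HTTP client:
```python
import requests

body = {
    "source": {
        "remote": {
            "host": "https://remote-cluster:9200",  # invented example host
            "username": "elastic",                  # basic auth only --
            "password": "changeme",                 # no api_key equivalent
        },
        "index": "src-index",
    },
    "dest": {"index": "dst-index"},
}
requests.post("http://localhost:9200/_reindex", json=body, timeout=60)
```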
|
16,896
| 22,197,162,529
|
IssuesEvent
|
2022-06-07 08:01:35
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
multiprocessing: Queue does not work in virtualenv but works fine in main interpreter
|
type-bug stdlib 3.7 expert-multiprocessing
|
BPO | [36399](https://bugs.python.org/issue36399)
--- | :---
Nosy | @pitrou, @codeape2, @applio
Files | <li>[mp_queue_example.py](https://bugs.python.org/file48228/mp_queue_example.py "Uploaded as text/plain at 2019-03-22.08:51:15 by @codeape2")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2019-03-22.08:51:16.000>
labels = ['3.7', 'type-bug', 'library']
title = 'multiprocessing: Queue does not work in virtualenv but works fine in main interpreter'
updated_at = <Date 2019-03-22.09:10:54.298>
user = 'https://github.com/codeape2'
```
bugs.python.org fields:
```python
activity = <Date 2019-03-22.09:10:54.298>
actor = 'Bernt.R\xc3\xb8skar.Brenna'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2019-03-22.08:51:16.000>
creator = 'Bernt.R\xc3\xb8skar.Brenna'
dependencies = []
files = ['48228']
hgrepos = []
issue_num = 36399
keywords = []
message_count = 4.0
messages = ['338591', '338592', '338593', '338594']
nosy_count = 3.0
nosy_names = ['pitrou', 'Bernt.R\xc3\xb8skar.Brenna', 'davin']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue36399'
versions = ['Python 3.7']
```
</p></details>
|
1.0
|
multiprocessing: Queue does not work in virtualenv but works fine in main interpreter - BPO | [36399](https://bugs.python.org/issue36399)
--- | :---
Nosy | @pitrou, @codeape2, @applio
Files | <li>[mp_queue_example.py](https://bugs.python.org/file48228/mp_queue_example.py "Uploaded as text/plain at 2019-03-22.08:51:15 by @codeape2")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2019-03-22.08:51:16.000>
labels = ['3.7', 'type-bug', 'library']
title = 'multiprocessing: Queue does not work in virtualenv but works fine in main interpreter'
updated_at = <Date 2019-03-22.09:10:54.298>
user = 'https://github.com/codeape2'
```
bugs.python.org fields:
```python
activity = <Date 2019-03-22.09:10:54.298>
actor = 'Bernt.R\xc3\xb8skar.Brenna'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2019-03-22.08:51:16.000>
creator = 'Bernt.R\xc3\xb8skar.Brenna'
dependencies = []
files = ['48228']
hgrepos = []
issue_num = 36399
keywords = []
message_count = 4.0
messages = ['338591', '338592', '338593', '338594']
nosy_count = 3.0
nosy_names = ['pitrou', 'Bernt.R\xc3\xb8skar.Brenna', 'davin']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue36399'
versions = ['Python 3.7']
```
</p></details>
|
process
|
multiprocessing queue does not work in virtualenv but works fine in main interpreter bpo nosy pitrou applio files uploaded as text plain at by note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title multiprocessing queue does not work in virtualenv but works fine in main interpreter updated at user bugs python org fields python activity actor bernt r brenna assignee none closed false closed date none closer none components creation creator bernt r brenna dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage none status open superseder none type behavior url versions
| 1
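The attached mp_queue_example.py is not reproduced in the record, so as a stand-in, a minimal sketch of the Queue round-trip such reports typically exercise:
```python
import multiprocessing as mp

def worker(q):
    q.put("hello from child")

if __name__ == "__main__":  # required on spawn-based platforms
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    print(q.get())  # on the broken virtualenv setup this is where it stalls
    p.join()
```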
|
108,349
| 16,770,055,550
|
IssuesEvent
|
2021-06-14 13:50:42
|
keep-network/coverage-pools
|
https://api.github.com/repos/keep-network/coverage-pools
|
closed
|
Multiple auctions per deposit
|
:eyeglasses: security-audit ❔question 👊 attack
|
Another one! Let's reproduce this in a test before patching :pray:
> it appears to be possible to notify a deposit being liquidated multiple times which results in the creation of multiple auctions for a single deposit.
@samczsun
|
True
|
Multiple auctions per deposit - Another one! Let's reproduce this in a test before patching :pray:
> it appears to be possible to notify a deposit being liquidated multiple times which results in the creation of multiple auctions for a single deposit.
@samczsun
|
non_process
|
multiple auctions per deposit another one let s reproduce this in a test before patching pray it appears to be possible to notify a deposit being liquidated multiple times which results in the creation of multiple auctions for a single deposit samczsun
| 0
|
565,669
| 16,766,924,489
|
IssuesEvent
|
2021-06-14 09:59:40
|
opensrp/opensrp-client-path-zeir
|
https://api.github.com/repos/opensrp/opensrp-client-path-zeir
|
opened
|
App not loading locations
|
Show stopper Top priority
|
Tested with:
- [ ] App not loading locations. When a user logs in, the location for the user is not loaded. This affects data sync and also loading of unique OpenSRP IDs.
- [ ] Tapping on the top of the register page to display the location hierarchy crashes the app.
Note! This is a recurring bug that had been fixed and was working well.

|
1.0
|
App not loading locations - Tested with:
- [ ] App not loading locations. When a user logs in, the location for the user is not loaded. This affects data sync and also loading of unique OpenSRP IDs.
- [ ] Tapping on the top of the register page to display the location hierarchy crashes the app.
Note! This is a recurring bug that had been fixed and was working well.

|
non_process
|
app not loading locactions tested with app not loading locations when a user logs in the location for the user is not loaded this affects data sync and also loading of unique opensrp ids tapping on the top of the register page to display the location hierarchy crashes the app note this is a recurring bug that had been fixed and was working well
| 0
|
67,722
| 8,178,355,791
|
IssuesEvent
|
2018-08-28 13:37:14
|
OperationCode/operationcode_frontend
|
https://api.github.com/repos/OperationCode/operationcode_frontend
|
closed
|
Convert vettec.operationcode.org to a page on the site
|
Priority: Low Type: Feature/Redesign
|
The content at http://vettec.operationcode.org/ should be converted to the rest of our regular site's structure.
|
1.0
|
Convert vettec.operationcode.org to a page on the site - The content at http://vettec.operationcode.org/ should be converted to the rest of our regular site's structure.
|
non_process
|
convert vettec operationcode org to a page on the site the content at should be converted to the rest of our regular site s structure
| 0
|
8,827
| 11,939,558,817
|
IssuesEvent
|
2020-04-02 15:22:49
|
prisma/migrate
|
https://api.github.com/repos/prisma/migrate
|
opened
|
`--preview` is ignored when doing `prisma migrate down --preview --experimental`
|
kind/feature process/candidate
|
Thanks to https://github.com/prisma/migrate/issues/393#issuecomment-607116984 who found out :)
The flag is listed in the help output
<img width="604" alt="Screen Shot 2020-04-02 at 17 20 48" src="https://user-images.githubusercontent.com/1328733/78266690-67cd9000-7506-11ea-9503-b361f6b8cd39.png">
But it seems that this feature is not implemented and should be done in
https://github.com/prisma/migrate/blob/59f77f08db960889f75a8c7a09120c8b377ec622/src/Lift.ts#L54-L56
https://github.com/prisma/migrate/blob/b09bf83bd68bddc96407824c25853ec1c9492894/src/cli/commands/LiftDown.ts#L76
https://github.com/prisma/migrate/blob/59f77f08db960889f75a8c7a09120c8b377ec622/src/Lift.ts#L440
|
1.0
|
`--preview` is ignored when doing `prisma migrate down --preview --experimental` - Thanks to https://github.com/prisma/migrate/issues/393#issuecomment-607116984 who found out :)
The flag is listed in the help output
<img width="604" alt="Screen Shot 2020-04-02 at 17 20 48" src="https://user-images.githubusercontent.com/1328733/78266690-67cd9000-7506-11ea-9503-b361f6b8cd39.png">
But it seems that this feature is not implemented and should be done in
https://github.com/prisma/migrate/blob/59f77f08db960889f75a8c7a09120c8b377ec622/src/Lift.ts#L54-L56
https://github.com/prisma/migrate/blob/b09bf83bd68bddc96407824c25853ec1c9492894/src/cli/commands/LiftDown.ts#L76
https://github.com/prisma/migrate/blob/59f77f08db960889f75a8c7a09120c8b377ec622/src/Lift.ts#L440
|
process
|
preview is ignored when doing prisma migrate down preview experimental thanks to who found out the flag is listed in the help output img width alt screen shot at src but it seems that this feature is not implement and should be done in
| 1
|
14,864
| 10,220,737,360
|
IssuesEvent
|
2019-08-15 22:21:10
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Name for cluster role binding is case-sensitive
|
Pri1 container-service/svc cxp doc-enhancement triaged
|
As discovered in a recent SR, name is case-sensitive but this is not mentioned in this article:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
It's more to do with Kubernetes and not AKS specifically, but it's worth mentioning here, as in some situations parts of AAD UPNs may be uppercase and cause confusion.
Thanks,
Adam
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a47c6df7-5103-1336-eb0b-68f95155e6c8
* Version Independent ID: f4420b08-9b57-f581-fcb4-f069afdd5b0c
* Content: [Integrate Azure Active Directory with Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/azure-ad-integration)
* Content Source: [articles/aks/azure-ad-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/azure-ad-integration.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
1.0
|
Name for cluster role binding is case-sensitive - As discovered in a recent SR, name is case-sensitive but this is not mentioned in this article:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
It's more to do with Kubernetes and not AKS specifically, but it's worth mentioning here, as in some situations parts of AAD UPNs may be uppercase and cause confusion.
Thanks,
Adam
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a47c6df7-5103-1336-eb0b-68f95155e6c8
* Version Independent ID: f4420b08-9b57-f581-fcb4-f069afdd5b0c
* Content: [Integrate Azure Active Directory with Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/azure-ad-integration)
* Content Source: [articles/aks/azure-ad-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/azure-ad-integration.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
non_process
|
name for cluster role binding is case sensitive as discovered in a recent sr name is case sensitive but this is not mentioned in this article it s more to do with kubernetes and not aks specifically but it s worth mentioning here as in some situations parts of aad upns may be uppercase and cause confusion thanks adam document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
| 0
|
105,012
| 9,014,321,690
|
IssuesEvent
|
2019-02-05 22:02:34
|
owncloud/client
|
https://api.github.com/repos/owncloud/client
|
reopened
|
[Crash] [macOS 10.14.1] ownCloud crashes when trying to synchronise
|
Needs info OSX ReadyToTest
|
### Expected behaviour
ownCloud synchronises without crashing.
### Actual behaviour
ownCloud crashes when trying to synchronise.
### Steps to reproduce
1. Launch ownCloud.
2. Wait for the automatic synchronisation.
### Server configuration
Operating system: Linux 4.14.71-v7+ #1145 SMP Fri Sep 21 15:38:35 BST 2018 armv7l GNU/Linux
Web server: 10.0.8.5
Database: mysql Ver 15.1 Distrib 10.1.23-MariaDB, for debian-linux-gnueabihf (armv7l) using readline 5.2
PHP version: PHP 7.0.30-0+deb9u1 (cli) (built: Jun 14 2018 13:50:25) ( NTS )
ownCloud version:
Storage backend (external storage): yes
### Client configuration
Client version: Version 2.5.1 (build 10818)
Operating system: macOS 10.14.1
OS language: French
Qt version used by client package (Linux only, see also Settings dialog): NA
Client package (From ownCloud or distro) (Linux only): NA
Installation path of client: /Applications
### Logs
crash bp-6835b2d3-a6f3-454e-82ba-9a1c72181122
|
1.0
|
[Crash] [macOS 10.14.1] ownCloud crashes when trying to synchronise - ### Expected behaviour
ownCloud synchronises without crashing.
### Actual behaviour
ownCloud crashes when trying to synchronise.
### Steps to reproduce
1. Launch ownCloud.
2. Wait for the automatic synchronisation.
### Server configuration
Operating system: Linux 4.14.71-v7+ #1145 SMP Fri Sep 21 15:38:35 BST 2018 armv7l GNU/Linux
Web server: 10.0.8.5
Database: mysql Ver 15.1 Distrib 10.1.23-MariaDB, for debian-linux-gnueabihf (armv7l) using readline 5.2
PHP version: PHP 7.0.30-0+deb9u1 (cli) (built: Jun 14 2018 13:50:25) ( NTS )
ownCloud version:
Storage backend (external storage): yes
### Client configuration
Client version: Version 2.5.1 (build 10818)
Operating system: macOS 10.14.1
OS language: French
Qt version used by client package (Linux only, see also Settings dialog): NA
Client package (From ownCloud or distro) (Linux only): NA
Installation path of client: /Applications
### Logs
crash bp-6835b2d3-a6f3-454e-82ba-9a1c72181122
|
non_process
|
owncloud crashes when trying to synchronise expected behaviour owncloud synchronises without crashing actual behaviour owncloud crashes when trying to synchronise steps to reproduce launch owncloud wait for the automatic synchronisation server configuration operating system linux smp fri sep bst gnu linux web server database mysql ver distrib mariadb for debian linux gnueabihf using readline php version php cli built jun nts owncloud version storage backend external storage yes client configuration client version version build operating system macos os language french qt version used by client package linux only see also settings dialog na client package from owncloud or distro linux only na installation path of client applications logs crash bp
| 0
|
14,874
| 18,284,531,085
|
IssuesEvent
|
2021-10-05 08:49:04
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
some GP tests failing on py3.6
|
Bug module:gaussian_process
|
The following tests are failing, when trying to have a [py3.6 on the CI](https://dev.azure.com/scikit-learn/scikit-learn/_build/results?buildId=11762&view=logs&j=012a90ec-ecab-5a62-dc19-e6089f7dd6d3&t=9d207f12-5224-5d77-45f7-461cdc051c68&l=889):
``` python
_______________________ test_gpr_interpolation[kernel4] ________________________
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
@pytest.mark.parametrize('kernel', kernels)
def test_gpr_interpolation(kernel):
# Test the interpolating property for different kernels.
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
y_pred, y_cov = gpr.predict(X, return_cov=True)
> assert_almost_equal(y_pred, y)
E AssertionError:
E Arrays are not almost equal to 7 decimals
E
E (mismatch 100.0%)
E x: array([ 1.9363794, -2.7942843, -2.4075474, -0.2947012, 3.0977077,
E 7.7698931])
E y: array([ 0.841471 , 0.42336 , -4.7946214, -1.676493 , 4.5989062,
E 7.914866 ])
gpr = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,
optimizer='fmin_l_bfgs_b', random_state=None)
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
y_cov = array([[ 9.00541863e-11, 2.38742359e-11, -7.78754838e-12,
-1.06723519e-11, -4.91695573e-12, 9.47864010... [ 9.47864010e-12, -1.84314786e-11, -9.46442924e-12,
8.85336249e-12, 3.63939989e-11, 7.31432692e-11]])
y_pred = array([ 1.93637936, -2.79428431, -2.40754743, -0.29470122, 3.09770773,
7.76989309])
/io/sklearn/gaussian_process/tests/test_gpr.py:53: AssertionError
_________________________ test_lml_improving[kernel3] __________________________
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
@pytest.mark.parametrize('kernel', non_fixed_kernels)
def test_lml_improving(kernel):
# Test that hyperparameter-tuning improves log-marginal likelihood.
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
> assert (gpr.log_marginal_likelihood(gpr.kernel_.theta) >
gpr.log_marginal_likelihood(kernel.theta))
E AssertionError: assert -111269784349.14124 > -48.880110953374277
E + where -111269784349.14124 = <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)>(array([ 4.60517019, 6.90775528, -11.51292546]))
E + where <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)> = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None).log_marginal_likelihood
E + and array([ 4.60517019, 6.90775528, -11.51292546]) = 10**2 * RBF(length_scale=1e+03) + 0.00316**2.theta
E + where 10**2 * RBF(length_scale=1e+03) + 0.00316**2 = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None).kernel_
E + and -48.880110953374277 = <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)>(array([ 0. , 0. , -11.51292546]))
E + where <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)> = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None).log_marginal_likelihood
E + and array([ 0. , 0. , -11.51292546]) = 1**2 * RBF(length_scale=1) + 0.00316**2.theta
gpr = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,
optimizer='fmin_l_bfgs_b', random_state=None)
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
/io/sklearn/gaussian_process/tests/test_gpr.py:75: AssertionError
_______________________ test_predict_cov_vs_std[kernel4] _______________________
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
@pytest.mark.parametrize('kernel', kernels)
def test_predict_cov_vs_std(kernel):
# Test that predicted std.-dev. is consistent with cov's diagonal.
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
y_mean, y_cov = gpr.predict(X2, return_cov=True)
y_mean, y_std = gpr.predict(X2, return_std=True)
> assert_almost_equal(np.sqrt(np.diag(y_cov)), y_std)
E AssertionError:
E Arrays are not almost equal to 7 decimals
E
E (mismatch 100.0%)
E x: array([ 6.5705842e-06, 6.5445791e-06, 5.8582603e-06, 5.0646414e-06,
E 6.5141087e-06])
E y: array([ 0.078642 , 0.0816751, 0.0748455, 0.0798408, 0.0814949])
gpr = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,
optimizer='fmin_l_bfgs_b', random_state=None)
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
y_cov = array([[ 4.31725766e-11, 2.48689958e-11, 1.17097443e-11,
3.24007488e-12, -5.03064257e-12],
[ 2...89646e-11],
[ -5.03064257e-12, -3.79429821e-12, 9.15179044e-12,
2.35189646e-11, 4.24336122e-11]])
y_mean = array([-1.06857149, -3.2405798 , -1.51121271, 1.24148886, 5.27415799])
y_std = array([ 0.07864202, 0.08167515, 0.0748455 , 0.07984077, 0.08149491])
/io/sklearn/gaussian_process/tests/test_gpr.py:182: AssertionError
```
|
1.0
|
some GP tests failing on py3.6 - The following tests are failing, when trying to have a [py3.6 on the CI](https://dev.azure.com/scikit-learn/scikit-learn/_build/results?buildId=11762&view=logs&j=012a90ec-ecab-5a62-dc19-e6089f7dd6d3&t=9d207f12-5224-5d77-45f7-461cdc051c68&l=889):
``` python
_______________________ test_gpr_interpolation[kernel4] ________________________
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
@pytest.mark.parametrize('kernel', kernels)
def test_gpr_interpolation(kernel):
# Test the interpolating property for different kernels.
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
y_pred, y_cov = gpr.predict(X, return_cov=True)
> assert_almost_equal(y_pred, y)
E AssertionError:
E Arrays are not almost equal to 7 decimals
E
E (mismatch 100.0%)
E x: array([ 1.9363794, -2.7942843, -2.4075474, -0.2947012, 3.0977077,
E 7.7698931])
E y: array([ 0.841471 , 0.42336 , -4.7946214, -1.676493 , 4.5989062,
E 7.914866 ])
gpr = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,
optimizer='fmin_l_bfgs_b', random_state=None)
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
y_cov = array([[ 9.00541863e-11, 2.38742359e-11, -7.78754838e-12,
-1.06723519e-11, -4.91695573e-12, 9.47864010... [ 9.47864010e-12, -1.84314786e-11, -9.46442924e-12,
8.85336249e-12, 3.63939989e-11, 7.31432692e-11]])
y_pred = array([ 1.93637936, -2.79428431, -2.40754743, -0.29470122, 3.09770773,
7.76989309])
/io/sklearn/gaussian_process/tests/test_gpr.py:53: AssertionError
_________________________ test_lml_improving[kernel3] __________________________
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
@pytest.mark.parametrize('kernel', non_fixed_kernels)
def test_lml_improving(kernel):
# Test that hyperparameter-tuning improves log-marginal likelihood.
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
> assert (gpr.log_marginal_likelihood(gpr.kernel_.theta) >
gpr.log_marginal_likelihood(kernel.theta))
E AssertionError: assert -111269784349.14124 > -48.880110953374277
E + where -111269784349.14124 = <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)>(array([ 4.60517019, 6.90775528, -11.51292546]))
E + where <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)> = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None).log_marginal_likelihood
E + and array([ 4.60517019, 6.90775528, -11.51292546]) = 10**2 * RBF(length_scale=1e+03) + 0.00316**2.theta
E + where 10**2 * RBF(length_scale=1e+03) + 0.00316**2 = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None).kernel_
E + and -48.880110953374277 = <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)>(array([ 0. , 0. , -11.51292546]))
E + where <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None)> = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,\n optimizer='fmin_l_bfgs_b', random_state=None).log_marginal_likelihood
E + and array([ 0. , 0. , -11.51292546]) = 1**2 * RBF(length_scale=1) + 0.00316**2.theta
gpr = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,
optimizer='fmin_l_bfgs_b', random_state=None)
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
/io/sklearn/gaussian_process/tests/test_gpr.py:75: AssertionError
_______________________ test_predict_cov_vs_std[kernel4] _______________________
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
@pytest.mark.parametrize('kernel', kernels)
def test_predict_cov_vs_std(kernel):
# Test that predicted std.-dev. is consistent with cov's diagonal.
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
y_mean, y_cov = gpr.predict(X2, return_cov=True)
y_mean, y_std = gpr.predict(X2, return_std=True)
> assert_almost_equal(np.sqrt(np.diag(y_cov)), y_std)
E AssertionError:
E Arrays are not almost equal to 7 decimals
E
E (mismatch 100.0%)
E x: array([ 6.5705842e-06, 6.5445791e-06, 5.8582603e-06, 5.0646414e-06,
E 6.5141087e-06])
E y: array([ 0.078642 , 0.0816751, 0.0748455, 0.0798408, 0.0814949])
gpr = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
kernel=1**2 * RBF(length_scale=1) + ... n_restarts_optimizer=0, normalize_y=False,
optimizer='fmin_l_bfgs_b', random_state=None)
kernel = 1**2 * RBF(length_scale=1) + 0.00316**2
y_cov = array([[ 4.31725766e-11, 2.48689958e-11, 1.17097443e-11,
3.24007488e-12, -5.03064257e-12],
[ 2...89646e-11],
[ -5.03064257e-12, -3.79429821e-12, 9.15179044e-12,
2.35189646e-11, 4.24336122e-11]])
y_mean = array([-1.06857149, -3.2405798 , -1.51121271, 1.24148886, 5.27415799])
y_std = array([ 0.07864202, 0.08167515, 0.0748455 , 0.07984077, 0.08149491])
/io/sklearn/gaussian_process/tests/test_gpr.py:182: AssertionError
```
|
process
|
some gp tests failing on the following tests are failing when trying to have a python test gpr interpolation kernel rbf length scale pytest mark parametrize kernel kernels def test gpr interpolation kernel test the interpolating property for different kernels gpr gaussianprocessregressor kernel kernel fit x y y pred y cov gpr predict x return cov true assert almost equal y pred y e assertionerror e arrays are not almost equal to decimals e e mismatch e x array e e y array e gpr gaussianprocessregressor alpha copy x train true kernel rbf length scale n restarts optimizer normalize y false optimizer fmin l bfgs b random state none kernel rbf length scale y cov array y pred array io sklearn gaussian process tests test gpr py assertionerror test lml improving kernel rbf length scale pytest mark parametrize kernel non fixed kernels def test lml improving kernel test that hyperparameter tuning improves log marginal likelihood gpr gaussianprocessregressor kernel kernel fit x y assert gpr log marginal likelihood gpr kernel theta gpr log marginal likelihood kernel theta e assertionerror assert e where array e where gaussianprocessregressor alpha copy x train true n kernel rbf length scale n restarts optimizer normalize y false n optimizer fmin l bfgs b random state none log marginal likelihood e and array rbf length scale theta e where rbf length scale gaussianprocessregressor alpha copy x train true n kernel rbf length scale n restarts optimizer normalize y false n optimizer fmin l bfgs b random state none kernel e and array e where gaussianprocessregressor alpha copy x train true n kernel rbf length scale n restarts optimizer normalize y false n optimizer fmin l bfgs b random state none log marginal likelihood e and array rbf length scale theta gpr gaussianprocessregressor alpha copy x train true kernel rbf length scale n restarts optimizer normalize y false optimizer fmin l bfgs b random state none kernel rbf length scale io sklearn gaussian process tests test gpr py assertionerror test predict cov vs std kernel rbf length scale pytest mark parametrize kernel kernels def test predict cov vs std kernel test that predicted std dev is consistent with cov s diagonal gpr gaussianprocessregressor kernel kernel fit x y y mean y cov gpr predict return cov true y mean y std gpr predict return std true assert almost equal np sqrt np diag y cov y std e assertionerror e arrays are not almost equal to decimals e e mismatch e x array e e y array gpr gaussianprocessregressor alpha copy x train true kernel rbf length scale n restarts optimizer normalize y false optimizer fmin l bfgs b random state none kernel rbf length scale y cov array y mean array y std array io sklearn gaussian process tests test gpr py assertionerror
| 1
|
13,896
| 16,656,695,499
|
IssuesEvent
|
2021-06-05 16:59:16
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Inaccurate number of requested files
|
log-processing question
|
Hello, after many rounds of analysis and comparison, I found that the number of requested files reported by goaccess is incorrect; I don't know how to verify the counts!


|
1.0
|
Inaccurate number of requested files - Hello, after many rounds of analysis and comparison, I found that the number of requested files reported by goaccess is incorrect; I don't know how to verify the counts!


|
process
|
inaccurate number of requested files hello after many times of analysis and comparison i found that the number of request files analyzed by goaccess is incorrect i don t know how to proofread
| 1
|
241,815
| 20,163,189,984
|
IssuesEvent
|
2022-02-10 00:03:22
|
rancher/dashboard
|
https://api.github.com/repos/rancher/dashboard
|
closed
|
Private Registries Template override value not getting passed through
|
kind/bug [zube]: To Test team/area2
|
**Description of Issue**
Some user override values are not being passed through.
When creating a cluster with a cluster template that allows user override values for Private Registries, the value entered into the form is not passed through, but the default cluster template value is.
**Steps to Recreate**
Rancher setup: HA Rancher v2.6.0-rc3, rke v1.2.11 to create 3 node cluster (2 workers, 1 all), `helm install`
1. In your rancher instance, navigate to cluster management
2. Go to RKE1 Configuration >> RKE Templates and Add Template with the following values:
- Template Name: example-template

- Kubernetes Options
Kubernetes Version: Toggle Allow User Override

Cloud Provider: Azure(In-Tree)
Fill in required fields (*) and toggle allow
For the aadClientSecret, I used `examplepassword`

- Private Registry
Private Registry
Select enabled and fill out required fields, toggle Allow User Override for password
For the Password, I used `exampleregistrypassword`

3. Then click create to create the template and go to create a RKE1 cluster. I selected Digital Ocean because we do not need the cluster to provision to validate the PUT call.
4. Under cluster options, check the `Use an existing RKE Template and revision` and select `example-template` from the drop down.
5. Finish filling out the required fields.
6. Override the `aadClientSecret` field with `override1234`
7. Add a password for the Private Registry of `override5678`
8. Create the cluster
**Result**
If you use the browser's developer network tool and look at the Request Payload, you see the following answers and values
```
answers: {values: {rancherKubernetesEngineConfig.kubernetesVersion: "v1.21.4-rancher1-1",…}}
values: {rancherKubernetesEngineConfig.kubernetesVersion: "v1.21.4-rancher1-1",…}
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.aadClientId: "example123"
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.aadClientSecret: "override1234"
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.subscriptionId: "example456"
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.tenantId: "example789"
rancherKubernetesEngineConfig.kubernetesVersion: "v1.21.4-rancher1-1"
rancherKubernetesEngineConfig.privateRegistries[0].password: "exampleregistrypassword"
```
Which shows that the user override value worked for the `aadClientSecret` field, but was not passed through for the `rancherKubernetesEngineConfig.privateRegistries[0].password` field.
**Expected**
Any field that allows for a user override should pass the override value through when performing the PUT
**Additional Info**
I did not test all potential override values so this could be an issue with other ones - I have seen the kubernetes config and cloud provider values be overridden as expected.
|
1.0
|
Private Registries Template override value not getting passed through - **Description of Issue**
Some user override values are not being passed through.
When creating a cluster with a cluster template that allows user override values for Private Registries, the value entered into the form is not passed through, but the default cluster template value is.
**Steps to Recreate**
Rancher setup: HA Rancher v2.6.0-rc3, rke v1.2.11 to create 3 node cluster (2 workers, 1 all), `helm install`
1. In your rancher instance, navigate to cluster management
2. Go to RKE1 Configuration >> RKE Templates and Add Template with the following values:
- Template Name: example-template

- Kubernetes Options
Kubernetes Version: Toggle Allow User Override

Cloud Provider: Azure(In-Tree)
Fill in required fields (*) and toggle allow
For the aadClientSecret, I used `examplepassword`

- Private Registry
Private Registry
Select enabled and fill out required fields, toggle Allow User Override for password
For the Password, I used `exampleregistrypassword`

3. Then click create to create the template and go to create a RKE1 cluster. I selected Digital Ocean because we do not need the cluster to provision to validate the PUT call.
4. Under cluster options, check the `Use an existing RKE Template and revision` and select `example-template` from the drop down.
5. Finish filling out the required fields.
6. Override the `aadClientSecret` field with `override1234`
7. Add a password for the Private Registry of `override5678`
8. Create the cluster
**Result**
If you use the browser's developer network tool and look at the Request Payload, you see the following answers and values
```
answers: {values: {rancherKubernetesEngineConfig.kubernetesVersion: "v1.21.4-rancher1-1",…}}
values: {rancherKubernetesEngineConfig.kubernetesVersion: "v1.21.4-rancher1-1",…}
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.aadClientId: "example123"
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.aadClientSecret: "override1234"
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.subscriptionId: "example456"
rancherKubernetesEngineConfig.cloudProvider.azureCloudProvider.tenantId: "example789"
rancherKubernetesEngineConfig.kubernetesVersion: "v1.21.4-rancher1-1"
rancherKubernetesEngineConfig.privateRegistries[0].password: "exampleregistrypassword"
```
Which shows that the user override value worked for the `aadClientSecret` field, but was not passed through for the `rancherKubernetesEngineConfig.privateRegistries[0].password` field.
**Expected**
Any field that allows for a user override should pass the override value through when performing the PUT
**Additional Info**
I did not test all potential override values so this could be an issue with other ones - I have seen the kubernetes config and cloud provider values be overridden as expected.
|
non_process
|
private registries template override value not getting passed through description of issue some user override values are not being passed through when creating a cluster with a cluster template that allows user override values for private registries the value entered into the form is not passed through but the default cluster template value is steps to recreate rancher setup ha rancher rke to create node cluster workers all helm install in your rancher instance navigate to cluster management go to configuration rke templates and add template with the following values template name example template kuernetes options kubernetes version toggle allow user override cloud provider azure in tree fill in required fields and toggle allow for the aadclientsecret i used examplepassword private registry private registry select enabled and fill out required fields toggle allow user override for password for the password i used exampleregistrypassword then click create to create the template and go to create a cluster i selected digital ocean because we do not need the cluster to provision to validate the put call under cluster options check the use an existing rke template and revision and select example template from the drop down finish filing out the required fields override the aadclientsecret field with add a password for the private registry of create the cluster result if you are using the developer network tool and you look at the request payload answers and values you see answers values rancherkubernetesengineconfig kubernetesversion … values rancherkubernetesengineconfig kubernetesversion … rancherkubernetesengineconfig cloudprovider azurecloudprovider aadclientid rancherkubernetesengineconfig cloudprovider azurecloudprovider aadclientsecret rancherkubernetesengineconfig cloudprovider azurecloudprovider subscriptionid rancherkubernetesengineconfig cloudprovider azurecloudprovider tenantid rancherkubernetesengineconfig kubernetesversion rancherkubernetesengineconfig privateregistries password exampleregistrypassword which shows that the user override value worked for the aadclientsecret field but was not passed through for the rancherkubernetesengineconfig privateregistries password field expected any field that allows for a user override should pass the override value through when performing the put additional info i did not test all potential override values so this could be an issue with other ones i have seen the kubernetes config and cloud provider values be overridden as expected
| 0
|
17,191
| 22,770,493,878
|
IssuesEvent
|
2022-07-08 09:31:47
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Feature: Start workflow from an arbitrary point (enables migrate workflow instance)
|
kind/feature scope/broker team/process-automation
|
**Is your feature request related to a problem? Please describe.**
When you want to migrate to different cluster, it would be useful to be able to rehydrate running processes from a previous instance. This could also be used to migrate running processes to a new process version on the same cluster.
**Describe the solution you'd like**
An extension to the CreateWorkflowInstance command that allows you to specify the element to start at. This allows you to take the variables from one running instance and pass them to the command, along with the process version, and start the new process instance at an arbitrary point.
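As a rough TypeScript sketch of what the requested extension could look like (the `startAtElementId` field is hypothetical and does not exist in the current command; the shape only loosely mirrors the Zeebe client):
```typescript
// Hypothetical request shape: CreateWorkflowInstance extended with a
// starting element, so a migrated instance can resume mid-process.
interface CreateWorkflowInstanceRequest {
  bpmnProcessId: string;
  version: number;
  variables: Record<string, unknown>;
  startAtElementId?: string; // the extension proposed in this issue (hypothetical)
}

// Rehydrate a running instance from another cluster by replaying its
// variables and starting at the element where it previously stood.
const request: CreateWorkflowInstanceRequest = {
  bpmnProcessId: "order-process", // hypothetical process id
  version: 3,                     // target process version
  variables: { orderId: 42 },     // variables copied from the old instance
  startAtElementId: "ship-parcel" // hypothetical element id to resume at
};
console.log(JSON.stringify(request, null, 2));
```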
|
1.0
|
Feature: Start workflow from an arbitrary point (enables migrate workflow instance) - **Is your feature request related to a problem? Please describe.**
When you want to migrate to different cluster, it would be useful to be able to rehydrate running processes from a previous instance. This could also be used to migrate running processes to a new process version on the same cluster.
**Describe the solution you'd like**
An extension to the CreateWorkflowInstance command that allows you to specify the element to start at. This allows you to take the variables from one running instance and pass them to the command, along with the process version, and start the new process instance at an arbitrary point.
|
process
|
feature start workflow from an arbitrary point enables migrate workflow instance is your feature request related to a problem please describe when you want to migrate to different cluster it would be useful to be able to rehydrate running processes from a previous instance this could also be used to migrate running processes to a new process version on the same cluster describe the solution you d like an extension to the createworkflowinstance command that allows you to specify the element to start at this allows you to take the variables from one running instance and pass them to the command along with the process version and start the new process instance at an arbitrary point
| 1
|
2,523
| 5,288,085,193
|
IssuesEvent
|
2017-02-08 14:17:24
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Block vdisk creation on storagerouter with failed DB disk
|
process_wontfix type_enhancement
|
### Problem description
Adding a vdisk to a storagerouter with a failed DB disk is still possible. You can select this storagerouter in the wizard, but the creation will fail because the volumedriver returns an error.
Since we know why the storagerouter failed, it should no longer be visible in the wizard.
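A minimal TypeScript sketch of the suggested behavior (the names are hypothetical, not from the Open vStorage codebase): storagerouters whose DB disk is known to have failed are filtered out of the wizard's candidate list:
```typescript
interface StorageRouter {
  name: string;
  dbDiskFailed: boolean; // hypothetical flag derived from the disk roles
}

// Only offer storagerouters with a healthy DB disk in the vDisk wizard.
function selectableStorageRouters(routers: StorageRouter[]): StorageRouter[] {
  return routers.filter((r) => !r.dbDiskFailed);
}

const routers = [
  { name: "node-1", dbDiskFailed: true }, // the node from the repro below
  { name: "node-2", dbDiskFailed: false },
];
console.log(selectableStorageRouters(routers)); // only node-2 remains
```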
My steps:
- Setup cluster
- Added roles to disks
- Added backend
- Added vpools
- Extended vpools to all nodes
- Remove the roles disk on node 1
- Add a vdisk on storagerouter 1
#### Disks on node 1

#### Gui error

#### Log
```
Nov 22 15:42:14 ovs-node-2 celery[3176]: 2016-11-22 15:42:14 54400 +0100 - ovs-node-2 - 6946/139684504037120 - lib/vdisk - 1863 - ERROR - Creating new vDisk myvdisk01 fai
led: failed to send XMLRPC request volumeCreate
Nov 22 15:42:14 ovs-node-2 celery[3176]: 2016-11-22 15:42:14 55200 +0100 - ovs-node-2 - 3176/139684504037120 - celery/celery.worker.job - 1868 - ERROR - Task ovs.vdisk.cr
eate_new[2d6ee048-9d21-434b-a3a5-568b6dac7ef3] raised unexpected: RuntimeError('failed to send XMLRPC request volumeCreate',)
Nov 22 15:42:14 ovs-node-2 celery[3176]: Traceback (most recent call last):
Nov 22 15:42:14 ovs-node-2 celery[3176]: File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
Nov 22 15:42:14 ovs-node-2 celery[3176]: R = retval = fun(*args, **kwargs)
Nov 22 15:42:14 ovs-node-2 celery[3176]: File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
Nov 22 15:42:14 ovs-node-2 celery[3176]: return self.run(*args, **kwargs)
Nov 22 15:42:14 ovs-node-2 celery[3176]: File "/opt/OpenvStorage/ovs/lib/vdisk.py", line 615, in create_new
Nov 22 15:42:14 ovs-node-2 celery[3176]: node_id=str(storagedriver.storagedriver_id))
Nov 22 15:42:14 ovs-node-2 celery[3176]: RuntimeError: failed to send XMLRPC request volumeCreate
```
Also noticed that afterward I couldn't fetch any more info about my vpool for some time (~2 minutes)
|
1.0
|
Block vdisk creation on storagerouter with failed DB disk - ### Problem description
Adding a vdisk to a storagerouter with a failed DB disk is still possible. You can select this storagerouter in the wizard, but the creation will fail because the volumedriver returns an error.
Since we know why the storagerouter failed, it should no longer be visible in the wizard.
My steps:
- Setup cluster
- Added roles to disks
- Added backend
- Added vpools
- Extended vpools to all nodes
- Remove the roles disk on node 1
- Add a vdisk on storagerouter 1
#### Disks on node 1

#### Gui error

#### Log
```
Nov 22 15:42:14 ovs-node-2 celery[3176]: 2016-11-22 15:42:14 54400 +0100 - ovs-node-2 - 6946/139684504037120 - lib/vdisk - 1863 - ERROR - Creating new vDisk myvdisk01 fai
led: failed to send XMLRPC request volumeCreate
Nov 22 15:42:14 ovs-node-2 celery[3176]: 2016-11-22 15:42:14 55200 +0100 - ovs-node-2 - 3176/139684504037120 - celery/celery.worker.job - 1868 - ERROR - Task ovs.vdisk.cr
eate_new[2d6ee048-9d21-434b-a3a5-568b6dac7ef3] raised unexpected: RuntimeError('failed to send XMLRPC request volumeCreate',)
Nov 22 15:42:14 ovs-node-2 celery[3176]: Traceback (most recent call last):
Nov 22 15:42:14 ovs-node-2 celery[3176]: File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
Nov 22 15:42:14 ovs-node-2 celery[3176]: R = retval = fun(*args, **kwargs)
Nov 22 15:42:14 ovs-node-2 celery[3176]: File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
Nov 22 15:42:14 ovs-node-2 celery[3176]: return self.run(*args, **kwargs)
Nov 22 15:42:14 ovs-node-2 celery[3176]: File "/opt/OpenvStorage/ovs/lib/vdisk.py", line 615, in create_new
Nov 22 15:42:14 ovs-node-2 celery[3176]: node_id=str(storagedriver.storagedriver_id))
Nov 22 15:42:14 ovs-node-2 celery[3176]: RuntimeError: failed to send XMLRPC request volumeCreate
```
Also noticed that afterward I couldn't fetch any more info about my vpool for some time (~2 minutes)
|
process
|
block vdisk creation on storagerouter with failed db disk problem description adding a vdisk to a storagerouter with a failed db disk is still possible you can select this storagerouter in the wizard but the creation will fail because the volumedriver will error since we know why the storagerouter failed it should no longer be visible in the wizard my steps setup cluster added roles to disks added backend added vpools extended vpools to all nodes remove the roles disk on node add a vdisk on storagerouter disks on node gui error log nov ovs node celery ovs node lib vdisk error creating new vdisk fai led failed to send xmlrpc request volumecreate nov ovs node celery ovs node celery celery worker job error task ovs vdisk cr eate new raised unexpected runtimeerror failed to send xmlrpc request volumecreate nov ovs node celery traceback most recent call last nov ovs node celery file usr lib dist packages celery app trace py line in trace task nov ovs node celery r retval fun args kwargs nov ovs node celery file usr lib dist packages celery app trace py line in protected call nov ovs node celery return self run args kwargs nov ovs node celery file opt openvstorage ovs lib vdisk py line in create new nov ovs node celery node id str storagedriver storagedriver id nov ovs node celery runtimeerror failed to send xmlrpc request volumecreate also noticed that afterward i couldn t fetch any more info about my vpool for some time mins
| 1
|
15,947
| 20,167,452,537
|
IssuesEvent
|
2022-02-10 06:52:17
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report] <style src="" /> in app.mpx fails to add the referenced file to app.wxss
|
processing
|
**Problem description**
Please describe the bug you encountered concisely, covering at least the following; if you provide screenshots, please make them as complete as possible:
1. Conditions that trigger the problem:
Using `<style src="" />` in app.mpx to import a style file, e.g.
```
<style src="./windi.css"></style>
```
2. Expected behavior:
The imported style file is added to the corresponding app.wxss file as an `@import "";` statement.
3. Actual behavior:
app.wxss is not generated, and even when it is generated it does not contain the corresponding `@import "";`. Meanwhile, a styles folder is generated in the dist directory containing wxss versions of the referenced style files. The dist directory also contains a `missing-filename.wxss` holding import declarations for all the style files in the styles folder. Since app.wxss never references the target file, the target styles do not take effect in the mini program.
However, the same setup works correctly in page.mpx and matches expectations, so the problem only occurs in app.mpx.
**Environment information**
Including at least the following:
1. OS type (Mac or Windows)
Mac
2. Mpx dependency versions (the exact versions of @mpxjs/core, @mpxjs/webpack-plugin and @mpxjs/api-proxy, which can be checked via package-lock.json or directly in node_modules)
"@mpxjs/api-proxy": "^2.7.1",
"@mpxjs/core": "^2.7.2",
"@mpxjs/webpack-plugin": "^2.7.2"
Reproduction demo:
https://github.com/ItsRyanWu/mpx-debug
|
1.0
|
[Bug report] <style src="" /> in app.mpx fails to add the referenced file to app.wxss - **Problem description**
Please describe the bug you encountered concisely, covering at least the following; if you provide screenshots, please make them as complete as possible:
1. Conditions that trigger the problem:
Using `<style src="" />` in app.mpx to import a style file, e.g.
```
<style src="./windi.css"></style>
```
2. Expected behavior:
The imported style file is added to the corresponding app.wxss file as an `@import "";` statement.
3. Actual behavior:
app.wxss is not generated, and even when it is generated it does not contain the corresponding `@import "";`. Meanwhile, a styles folder is generated in the dist directory containing wxss versions of the referenced style files. The dist directory also contains a `missing-filename.wxss` holding import declarations for all the style files in the styles folder. Since app.wxss never references the target file, the target styles do not take effect in the mini program.
However, the same setup works correctly in page.mpx and matches expectations, so the problem only occurs in app.mpx.
**Environment information**
Including at least the following:
1. OS type (Mac or Windows)
Mac
2. Mpx dependency versions (the exact versions of @mpxjs/core, @mpxjs/webpack-plugin and @mpxjs/api-proxy, which can be checked via package-lock.json or directly in node_modules)
"@mpxjs/api-proxy": "^2.7.1",
"@mpxjs/core": "^2.7.2",
"@mpxjs/webpack-plugin": "^2.7.2"
Reproduction demo:
https://github.com/ItsRyanWu/mpx-debug
|
process
|
style src in app mpx fails to add the referenced file to app wxss problem description please describe the bug you encountered concisely covering at least the following if you provide screenshots please make them as complete as possible conditions that trigger the problem using style src in app mpx to import a style file e g expected behavior the imported style file is added to the corresponding app wxss file as an import statement actual behavior app wxss is not generated and even when it is generated it does not contain the corresponding import meanwhile a styles folder is generated in the dist directory containing wxss versions of the referenced style files the dist directory also contains a missing filename wxss holding import declarations for all the style files in the styles folder since app wxss never references the target file the target styles do not take effect in the mini program however the same setup works correctly in page mpx and matches expectations so the problem only occurs in app mpx environment information including at least the following os type mac or windows mac mpx dependency versions mpxjs core mpxjs webpack plugin and mpxjs api proxy exact versions can be checked via package lock json or directly in node modules mpxjs api proxy mpxjs core mpxjs webpack plugin reproduction demo
| 1
|
8,515
| 11,696,612,866
|
IssuesEvent
|
2020-03-06 10:07:16
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
caret symbol is ignored in child_process.exec/execSync commands
|
child_process windows
|
* **Version**: 12.14.1
* **Platform**: Windows 10 x64
* **Subsystem**: child_process
### What steps will reproduce the bug?
In any git repo execute `console.log(require('child_process').execSync('git rev-parse HEAD^', {encoding: 'utf-8' }))`.
### How often does it reproduce? Is there a required condition?
Constantly
### What is the expected behavior?
Previous commit hash should be returned
### What do you see instead?
HEAD commit hash returned
### Additional information
Doubling the caret fixes the issue. Also, if you replace `HEAD^` with `HEAD^1`, git produces the error `fatal: ambiguous argument 'HEAD1': unknown revision or path not in the working tree.`
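For context (this is background, not part of the original report): on Windows, `child_process.exec`/`execSync` run the command line through `cmd.exe`, where `^` is the escape character, so a lone caret is consumed before git ever sees it. A TypeScript sketch of the two usual workarounds:
```typescript
import { execFileSync, execSync } from "child_process";

// On Windows, execSync routes through cmd.exe, where "^" escapes the
// next character. Doubling it survives the shell: git receives "HEAD^".
console.log(execSync("git rev-parse HEAD^^", { encoding: "utf-8" }));

// execFileSync passes argv straight to the git binary with no shell in
// between, so a single caret needs no escaping at all.
console.log(execFileSync("git", ["rev-parse", "HEAD^"], { encoding: "utf-8" }));
```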
|
1.0
|
caret symbol is ignored in child_process.exec/execSync commands -
* **Version**: 12.14.1
* **Platform**: Windows 10 x64
* **Subsystem**: child_process
### What steps will reproduce the bug?
In any git repo execute `console.log(require('child_process').execSync('git rev-parse HEAD^', {encoding: 'utf-8' }))`.
### How often does it reproduce? Is there a required condition?
Constantly
### What is the expected behavior?
Previous commit hash should be returned
### What do you see instead?
HEAD commit hash returned
### Additional information
Doubling the caret fixes the issue. Also, if you replace `HEAD^` with `HEAD^1`, git produces the error `fatal: ambiguous argument 'HEAD1': unknown revision or path not in the working tree.`
|
process
|
caret symbol is ignored in child process exec execsync commands thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name version platform windows subsystem child process what steps will reproduce the bug in any git repo execute console log require child process execsync git rev parse head encoding utf how often does it reproduce is there a required condition constantly what is the expected behavior previous commit hash should be returned what do you see instead head commit hash returned additional information double caret fix the issue also if replace head with head than git produce error fatal ambiguous argument unknown revision or path not in the working tree
| 1
|
4,284
| 7,190,610,372
|
IssuesEvent
|
2018-02-02 17:52:24
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Etherscan API key is not checked for validity
|
apps-ethslurp status-inprocess type-bug
|
If one removes the line in ~/quickBlocks/quickBlocks.toml that stores the Etherscan API key
api_key=68E1BQYW85...... (not real)
`ethslurp` requests a new key. One may enter any value as the key, and quickBlocks will not check that a valid key has been entered. There is no way to ensure that every key entered will be valid, but we could do some checks. For example, valid keys appear to be X characters long and all upper case.
Perhaps, if we get a weird-looking key, we could warn the user but proceed anyway, as opposed to disallowing the user from entering whatever they want. Actually, there is code in place to allow the end user to use any API that will provide the needed data, but we don't expose that.
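A hedged TypeScript sketch of such a check (the exact key length is not confirmed above, so it is a parameter here, and the uppercase-alphanumeric alphabet is an assumption):
```typescript
// Loose sanity check for an Etherscan-style API key: warn on anything
// that does not look like an uppercase alphanumeric string of the
// expected length, but let the user proceed anyway.
function warnIfKeyLooksInvalid(key: string, expectedLength: number): void {
  const looksValid = key.length === expectedLength && /^[A-Z0-9]+$/.test(key);
  if (!looksValid) {
    console.warn(
      `Warning: key does not look valid (expected ${expectedLength} ` +
      "uppercase alphanumeric characters); proceeding anyway."
    );
  }
}

warnIfKeyLooksInvalid("abc", 34); // 34 is a placeholder, not a confirmed length
```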
|
1.0
|
Etherscan API key is not checked for validity - If one removes the line in ~/quickBlocks/quickBlocks.toml that stores the Etherscan API key
api_key=68E1BQYW85...... (not real)
`ethslurp` requests a new key. One may enter any value as the key, and quickBlocks will not check that a valid key has been entered. There is no way to ensure that every key entered will be valid, but we could do some checks. For example, valid keys appear to be X characters long and all upper case.
Perhaps, if we get a weird-looking key, we could warn the user but proceed anyway, as opposed to disallowing the user from entering whatever they want. Actually, there is code in place to allow the end user to use any API that will provide the needed data, but we don't expose that.
|
process
|
etherscan api key is not checked for validity if one removes the line in quickblocks quickblocks toml that stores the etherscan api key api key not real ethslurp requests a new key one may enter any value as the key and quickblocks will not check that a valid key has been entered there is no way to ensure that every key entered will be valid but we could do some checks for example valid keys appear to be x characters long and all upper case perhaps if we get a weird looking key we warn the user but proceed anyway as opposed to disallowing user from entering whatever they want actually there is code in place to allow the end user to use any api that will provide the needed data but we don t expose that
| 1
|
20,649
| 27,324,672,815
|
IssuesEvent
|
2023-02-25 00:11:18
|
microsoft/vscode-java-debug
|
https://api.github.com/repos/microsoft/vscode-java-debug
|
closed
|
Quarkus / MapStruct - Annotation Processing (build project or launchBeforeBuild)
|
needs more info compile annotation-processing
|
While just developing (_not_ running `quarkus:dev`), if I go to "build the project" or run any tests with `launchBeforeBuild=true`, I get errors that the MapStruct mappers are not correct, as if the annotation processor did not run.
Running Maven in the CLI works fine and if I have `launchBeforeBuild=false` or proceed after the errors I can run and debug the tests without a problem (this is not acceptable though because it basically says everything is broken until I clear the workspace).
If I just build the project using the Java Project extension, I will get the errors but then running `Maven -> Reload Project` afterwards brings it back to the correct state (and no errors are displayed). Unfortunately, I cannot do this with tests if `launchBeforeBuild=true`.
Some things I have tried:
- Opening the single project only gives the same result
- Disabled m2e_apt "seemed" to get me closer but had other issues.
- Tried probably every post about annotation processing etc. regarding MapStruct and Lombok.
##### Environment
- Operating System: WSL2
- JDK version: JDK 17
- Visual Studio Code version: latest
- Java extension version: latest
- Java Debugger extension version: latest
##### Steps To Reproduce
Using the Java Project extension run **Build Project** or run a test case with `launchBeforeBuild=true`:

> If I have `launchBeforeBuild=false`, build outside VSCode using Maven, the tests run fine and I can debug them.
Using the Java Project extension run **Maven Reload Project** right after brings it back to the correct state:

##### Expected Result
Since I am running the same as I would be with Maven in the CLI and not using `quarkus:dev` it should generate and recognize the mappings being generated through the annotation processor. The build and the generated mappings work fine in the CLI using Maven.
|
1.0
|
Quarkus / MapStruct - Annotation Processing (build project or launchBeforeBuild) - While just developing (_not_ running `quarkus:dev`), if I go to "build the project" or run any tests with `launchBeforeBuild=true`, I get errors that the MapStruct mappers are not correct, as if the annotation processor did not run.
Running Maven in the CLI works fine and if I have `launchBeforeBuild=false` or proceed after the errors I can run and debug the tests without a problem (this is not acceptable though because it basically says everything is broken until I clear the workspace).
If I just build the project using the Java Project extension, I will get the errors but then running `Maven -> Reload Project` afterwards brings it back to the correct state (and no errors are displayed). Unfortunately, I cannot do this with tests if `launchBeforeBuild=true`.
Some things I have tried:
- Opening the single project only gives the same result
- Disabled m2e_apt "seemed" to get me closer but had other issues.
- Tried probably every post about annotation processing etc. regarding MapStruct and Lombok.
##### Environment
- Operating System: WSL2
- JDK version: JDK 17
- Visual Studio Code version: latest
- Java extension version: latest
- Java Debugger extension version: latest
##### Steps To Reproduce
Using the Java Project extension run **Build Project** or run a test case with `launchBeforeBuild=true`:

> If I have `launchBeforeBuild=false`, build outside VSCode using Maven, the tests run fine and I can debug them.
Using the Java Project extension run **Maven Reload Project** right after brings it back to the correct state:

##### Expected Result
Since I am running the same as I would be with Maven in the CLI and not using `quarkus:dev` it should generate and recognize the mappings being generated through the annotation processor. The build and the generated mappings work fine in the CLI using Maven.
|
process
|
quarkus mapstruct annotation processing build project or launchbeforebuild just developing not running quarkus dev if i go to build the project or run any tests with launchbeforebuild true i get errors that the mapstruct mappers are not correct as if the annotation processor did not run running maven in the cli works fine and if i have launchbeforebuild false or proceed after the errors i can run and debug the tests without a problem this is not acceptable though because it basically says everything is broken until i clear the workspace if i just build the project using the java project extension i will get the errors but then running maven reload project afterwards brings it back to the correct state and no errors are displayed unfortunately i cannot do this with tests if launchbeforebuild true some things i have tried opening the single project only gives the same result disabled apt seemed to get me closer but had other issues tried probably every post about annotation processing etc regarding mapstruct and lombok environment operating system jdk version jdk visual studio code version latest java extension version latest java debugger extension version latest steps to reproduce using the java project extension run build project or run a test case with launchbeforebuild true if i have launchbeforebuild false build outside vscode using maven the tests run fine and i can debug them using the java project extension run maven reload project right after brings it back to the correct state expected result since i am running the same as i would be with maven in the cli and not using quarkus dev it should generate and recognize the mappings being generated through the annotation processor the build and the generated mappings work fine in the cli using maven
| 1
|
4,368
| 7,260,515,516
|
IssuesEvent
|
2018-02-18 10:54:19
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE] Native 'Promote to Multipart' algorithm
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/3484eb019c2b8eb682e5a41ff9e6e4b14b9e97d4 by nyalldawson
This algorithm is basically the equivalent of the ST_Multi(...)
command - it forces a feature's geometry to become multipart,
regardless of the input geometry type.
If input geometries are singlepart, they will output as
multipart with just 1 part. If they are already multipart,
they will be output unchanged.
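To illustrate the same semantics outside QGIS (a sketch over GeoJSON geometry types, not the QGIS implementation):
```typescript
interface Geometry {
  type: string;
  coordinates: unknown;
}

// Promote a geometry to its multipart counterpart, mirroring ST_Multi:
// singlepart inputs are wrapped as a one-part multi, already-multipart
// inputs pass through unchanged. (GeometryCollection is ignored here.)
function promoteToMulti(geom: Geometry): Geometry {
  if (geom.type.startsWith("Multi")) return geom; // already multipart
  return { type: `Multi${geom.type}`, coordinates: [geom.coordinates] };
}

console.log(promoteToMulti({ type: "Point", coordinates: [1, 2] }));
// -> { type: "MultiPoint", coordinates: [[1, 2]] }
console.log(promoteToMulti({ type: "MultiPoint", coordinates: [[1, 2]] }));
// -> unchanged
```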
|
1.0
|
[FEATURE] Native 'Promote to Multipart' algorithm - Original commit: https://github.com/qgis/QGIS/commit/3484eb019c2b8eb682e5a41ff9e6e4b14b9e97d4 by nyalldawson
This algorithm is basically the equivalent of the ST_Multi(...)
command - it forces a feature's geometry to become multipart,
regardless of the input geometry type.
If input geometries are singlepart, they will output as
multipart with just 1 part. If they are already multipart,
they will be output unchanged.
|
process
|
native promote to multipart algorithm original commit by nyalldawson this algorithm is basically the equivalent of the st multi command it forces a feature s geometry to become multipart regardless of the input geometry type if input geometries are singlepart they will output as multipart with just part if they are already multipart they will be output unchanged
| 1
|
12,826
| 15,211,211,283
|
IssuesEvent
|
2021-02-17 08:45:46
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Work flows are failed for participant manager and Participant manager datastore
|
Bug Process: Fixed
|
Please go through this links for the errors
https://github.com/boston-tech/develop-fda-mystudies/actions/runs/558335875
https://github.com/boston-tech/develop-fda-mystudies/actions/runs/558335871
|
1.0
|
Work flows are failed for participant manager and Participant manager datastore - Please go through this links for the errors
https://github.com/boston-tech/develop-fda-mystudies/actions/runs/558335875
https://github.com/boston-tech/develop-fda-mystudies/actions/runs/558335871
|
process
|
work flows are failed for participant manager and participant manager datastore please go through this links for the errors
| 1
|
449
| 2,888,972,353
|
IssuesEvent
|
2015-06-13 01:13:56
|
benjamingr/RexExp.escape
|
https://api.github.com/repos/benjamingr/RexExp.escape
|
opened
|
Advance to stage 2
|
process
|
From [the tc39 process](https://docs.google.com/document/d/1QbEE0BsO4lvl7NFTn5WXWeiEIBfaVUF7Dk0hpPpPDzU/edit):
- [x] Step 1 Criteria
- [x] Initial spec text
Available at http://benjamingr.github.io/RexExp.escape/ and at the readme file (a minimal userland sketch follows below).
- [ ] tc39 approval of the advancement of the spec to the next level.
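For readers unfamiliar with the proposal, the sketch referenced above is a minimal userland approximation of the intended behavior (an illustration, not the spec text):
```typescript
// Backslash-escape every character that is syntactically meaningful
// inside a regular expression pattern.
function regExpEscape(s: string): string {
  return s.replace(/[\\^$.*+?()[\]{}|]/g, "\\$&");
}

// A user-supplied string can now be embedded in a pattern literally.
const userInput = "price (USD): $3.50";
const re = new RegExp(regExpEscape(userInput));
console.log(re.test("price (USD): $3.50")); // true
```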
|
1.0
|
Advance to stage 2 - From [the tc39 process](https://docs.google.com/document/d/1QbEE0BsO4lvl7NFTn5WXWeiEIBfaVUF7Dk0hpPpPDzU/edit):
- [x] Step 1 Criteria
- [x] Initial spec text
Available at http://benjamingr.github.io/RexExp.escape/ and at the readme file.
- [ ] tc39 approval of the advancement of the spec to the next level.
|
process
|
advance to stage from step criteria initial spec text available at and at the readme file approval of the advancement of the spec to the next level
| 1
|
354,573
| 25,171,585,142
|
IssuesEvent
|
2022-11-11 04:10:56
|
line/armeria
|
https://api.github.com/repos/line/armeria
|
opened
|
Documentation for shutdown process of `Server`
|
documentation
|
@chris-ryan-square suggested:
> I think a small section in the docs about the shutdown process, grace period, listener state hooks (whenStopping, etc) and implications or limitations of it all would be a very useful addition.
|
1.0
|
Documentation for shutdown process of `Server` - @chris-ryan-square suggested:
> I think a small section in the docs about the shutdown process, grace period, listener state hooks (whenStopping, etc) and implications or limitations of it all would be a very useful addition.
|
non_process
|
documentation for shutdown process of server chris ryan square suggested i think a small section in the docs about the shutdown process grace period listener state hooks whenstopping etc and implications or limitations of it all would be a very useful addition
| 0
|
47,006
| 19,553,554,370
|
IssuesEvent
|
2022-01-03 04:23:57
|
PreMiD/Presences
|
https://api.github.com/repos/PreMiD/Presences
|
opened
|
Ryuten.io | ryuten.io
|
Service Request
|
### Discussed in https://github.com/PreMiD/Presences/discussions/4541
<div type='discussions-op-text'>
<sup>Originally posted by **GawdlyOTF** January 7, 2021</sup>
**Prerequisites and essential questions** <!--- Required, please answer the following questions as honestly as possible by changing the "[ ]" to "[x]" or by marking it after creating the issue (easier), not marking a question counts as "No". -->
- [x] Is it a popular site?
- [x] Is the website older than 2 months? <!--- It is necessary for the website to be older than 2 months. -->
- [ ] Is the site locked to a specific country/region?
- [ ] Is the site a paid service? (e.g. Netflix, Hulu)
- [ ] Does the website feature NSFW content? (e.g. porn, etc...)
- [ ] Are you a donator/patron?
- [x] Do you acknowledge that coding presences is completely voluntary and may take time for your service to be added regardless of priority?
**What's your Discord username?** Gawdly#0066<!--- Optional, unless you are a donator/patron. Ex. Clyde#0000 -->
**What's the name of the service?** Ryuten.io<!--- Required, Ex. www.youtube.com | YouTube -->
**What should the Presence display?** Currently there are 3 servers: America, Europe, and Asia. The presence should show what server the player is playing in. Currently, there are 4 game modes: Classic, Ultra Fission, Multibox 1v1, and Instant Merge. It should show the game mode the player is playing. There's an option called "Spectate"; the presence should show when the player is spectating. There's also an option for "Settings"; the presence should show when the player is browsing their settings.
Another important feature this presence should have is the player's in-game name, mass (score), and the tag the player is using. The in-game name for the tag is called "Team".
An example of how the presence should look can be shown below:
Ryuten.io
NA: Classic or North America: Classic
Gawdly - 42.9K mass (if the player is in a team) [Hello]Gawdly - 42.9k mass
For the other features I mentioned, it can look something like this:
Ryuten.io
Spectating (If possible, the presence can show what server and game mode the player is currently spectating in)
Ryuten.io
Browsing Settings
<!--- Required, make sure to be as clear as possible on what should be added. -->
**If possible, please provide a logo for the service (512x512 minimum)** <!--- Optional, it is recommended to upload the image here instead of using a 3rd-party host. -->

</div>
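A rough TypeScript skeleton of what such a presence could look like (the DOM selectors and client id are placeholders, and the `Presence` declaration is included only to keep the sketch self-contained; the shape loosely follows the PreMiD presence API):
```typescript
// PreMiD injects a global Presence class into presence scripts; it is
// declared here only so this sketch stands alone.
declare class Presence {
  constructor(opts: { clientId: string });
  on(event: "UpdateData", cb: () => Promise<void> | void): void;
  setActivity(data: { details?: string; state?: string }): void;
}

const presence = new Presence({ clientId: "000000000000000000" }); // placeholder

presence.on("UpdateData", async () => {
  // Hypothetical selectors; the real page structure would need inspection.
  const server = document.querySelector("#server")?.textContent ?? "Unknown";
  const mode = document.querySelector("#gamemode")?.textContent ?? "Unknown";
  const name = document.querySelector("#player-name")?.textContent ?? "";
  const mass = document.querySelector("#mass")?.textContent ?? "";

  presence.setActivity({
    details: `${server}: ${mode}`,   // e.g. "NA: Classic"
    state: `${name} - ${mass} mass`, // e.g. "Gawdly - 42.9K mass"
  });
});
```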
|
1.0
|
Ryuten.io | ryuten.io - ### Discussed in https://github.com/PreMiD/Presences/discussions/4541
<div type='discussions-op-text'>
<sup>Originally posted by **GawdlyOTF** January 7, 2021</sup>
**Prerequisites and essential questions** <!--- Required, please answer the following questions as honestly as possible by changing the "[ ]" to "[x]" or by marking it after creating the issue (easier), not marking a question counts as "No". -->
- [x] Is it a popular site?
- [x] Is the website older than 2 months? <!--- It is necessary for the website to be older than 2 months. -->
- [ ] Is the site locked to a specific country/region?
- [ ] Is the site a paid service? (e.g. Netflix, Hulu)
- [ ] Does the website feature NSFW content? (e.g. porn, etc...)
- [ ] Are you a donator/patron?
- [x] Do you acknowledge that coding presences is completely voluntary and may take time for your service to be added regardless of priority?
**What's your Discord username?** Gawdly#0066<!--- Optional, unless you are a donator/patron. Ex. Clyde#0000 -->
**What's the name of the service?** Ryuten.io<!--- Required, Ex. www.youtube.com | YouTube -->
**What should the Presence display?** Currently there are 3 servers: America, Europe, and Asia. The presence should show what server the player is playing in. Currently, there are 4 game modes: Classic, Ultra Fission, Multibox 1v1, and Instant Merge. It should show the game mode the player is playing. There's an option called "Spectate"; the presence should show when the player is spectating. There's also an option for "Settings"; the presence should show when the player is browsing their settings.
Another important feature this presence should have is the player's in-game name, mass (score), and the tag the player is using. The in-game name for the tag is called "Team".
An example of how the presence should look can be shown below:
Ryuten.io
NA: Classic or North America: Classic
Gawdly - 42.9K mass (if the player is in a team) [Hello]Gawdly - 42.9k mass
For the other features I mentioned, it can look something like this:
Ryuten.io
Spectating (If possible, the presence can show what server and game mode the player is currently spectating in)
Ryuten.io
Browsing Settings
<!--- Required, make sure to be as clear as possible on what should be added. -->
**If possible, please provide a logo for the service (512x512 minimum)** <!--- Optional, it is recommended to upload the image here instead of using a 3rd-party host. -->

</div>
|
non_process
|
ryuten io ryuten io discussed in originally posted by gawdlyotf january prerequisites and essential questions is it a popular site is the website older than months is the site locked to a specific country region is the site a paid service e g netflix hulu does the website feature nsfw content e g porn etc are you a donator patron do you acknowledge that coding presences is completely voluntary and may take time for your service to be added regardless of priority what s your discord username gawdly what s the name of the service ryuten io what should the presence display currently there are servers america europe and asia the presence should show what server the player is playing in currently there are game modes classic ultra fission multibox and instant merge it should show the game mode the player is playing there s an option called spectate the presence should show when the player is spectating there s also an option for settings the presence should show when the player is browsing their settings another important feature this presence should have is the player s in game name mass score and their tag the player is using the in game name for the tag is called team an example of how the presence should look can be shown below ryuten io na classic or north america classic gawdly mass if the player is in a team gawdly mass for the other features i mentioned it can look something like this ryuten io spectating if possible the presence can show what server and game mode the player is currently spectating in ryuten io browsing settings if possible please provide a logo for the service minimum
| 0
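The request above is essentially a display spec, which makes it easy to sketch. PreMiD presences are written in TypeScript against a `Presence` class the extension injects at runtime; everything page-specific below (the CSS selectors, the `ryuten_logo` asset key, the zeroed client ID) is a hypothetical placeholder, not taken from the real site:

```typescript
// Sketch under assumptions: selectors and the asset key are placeholders.
// `Presence` and `PresenceData` are globals provided by the PreMiD runtime,
// so no imports are needed inside a presence file.
const presence = new Presence({ clientId: "000000000000000000" });

presence.on("UpdateData", async () => {
  // Hypothetical selectors; the real page structure would need inspecting.
  const text = (sel: string) => document.querySelector(sel)?.textContent?.trim();

  const server = text(".server-name") ?? "NA";  // "NA", "EU" or "Asia"
  const mode = text(".game-mode") ?? "Classic"; // Classic, Ultra Fission, ...
  const nick = text(".player-nick");            // "[Hello]Gawdly" when in a team
  const mass = text(".player-mass");            // "42.9K"

  const presenceData: PresenceData = {
    largeImageKey: "ryuten_logo",               // assumed uploaded asset
    details: `${server}: ${mode}`,              // e.g. "NA: Classic"
    state: nick && mass ? `${nick} - ${mass} mass` : "Browsing Settings",
  };

  presence.setActivity(presenceData);
});
```

Spectate mode would be one more branch that sets `details` to "Spectating" plus the server and game mode when they are detectable.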
|
13,670
| 16,389,250,411
|
IssuesEvent
|
2021-05-17 14:15:59
|
rjsears/chia_plot_manager
|
https://api.github.com/repos/rjsears/chia_plot_manager
|
reopened
|
Automated new drive formatting, mounting and updating chia/plot_manager configs
|
In Process TODO
|
Code that will identify when a new drive is added to the system by detecting that it is not mounted and does not have any partitions on it; it will then partition it, format it as xfs, mount it, enter the required entries in /etc/fstab, and update plot_manager and chia with the new mount point information.
|
1.0
|
Automated new drive formatting, mounting and updating chia/plot_manager configs - Code that will identify when a new drive is added to the system by detecting that it is not mounted and does not have any partitions on it; it will then partition it, format it as xfs, mount it, enter the required entries in /etc/fstab, and update plot_manager and chia with the new mount point information.
|
process
|
automated new drive formatting mounting and updating chia plot manager configs code that will identify when a new drive is added to the system by detecting that it is not mounted and does not have any partitions on it will then partition it format it as xfs mount it enter the required entries in etc fstab and update plot manager and chia with the new mount point information
| 1
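The record above names a concrete pipeline: detect an empty, unmounted disk, partition it, format it as xfs, mount it, then update /etc/fstab and the configs. The project itself is Python; purely for illustration, here is a hedged Node.js/TypeScript sketch of the detection and provisioning steps, with destructive commands that assume root and an expendable disk:

```typescript
// Sketch only: a production version needs safeguards (serial allow-lists,
// dry runs, fstab de-duplication) before touching any disk.
import { execSync } from "child_process";

interface LsblkDevice {
  name: string;
  type: string;
  mountpoint: string | null;
  children?: LsblkDevice[];
}

// A disk with no partitions (no children) and no mountpoint is a candidate.
function findUnusedDisks(): LsblkDevice[] {
  const out = execSync("lsblk -J -o NAME,TYPE,MOUNTPOINT").toString();
  const devices: LsblkDevice[] = JSON.parse(out).blockdevices;
  return devices.filter((d) => d.type === "disk" && !d.children && !d.mountpoint);
}

// Partition, format as xfs and mount, mirroring the steps in the record.
function provision(disk: LsblkDevice, mountPoint: string): void {
  const dev = `/dev/${disk.name}`;
  execSync(`parted -s ${dev} mklabel gpt mkpart primary xfs 0% 100%`);
  execSync(`mkfs.xfs ${dev}1`); // naive partition naming; nvme disks use p1
  execSync(`mkdir -p ${mountPoint}`);
  execSync(`mount ${dev}1 ${mountPoint}`);
  // Appending the /etc/fstab entry and updating the plot_manager and chia
  // configs with the new mount point would follow here.
}
```

Polling `findUnusedDisks()` on a timer, or reacting to udev events, covers the "identify when a new drive is added" half of the task.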
|
119,282
| 4,763,988,447
|
IssuesEvent
|
2016-10-25 15:47:20
|
NAVADMC/ADSM
|
https://api.github.com/repos/NAVADMC/ADSM
|
opened
|
Legacy Import Fail wipes previous file population
|
bug Medium Priority
|
OK, this is a weird one, and I can't post the files so I may have to email them. Maybe we can figure out the process.
When I create a new scenario from Legacy, it leaves the currently open scenario in a weird state. It doesn't force a save on it, or wipe it to new. It leaves the name the same (I will call it CURRENT for this example). If the import is successful, then it does rename the scenario based on how the parameter and population files were named.
However, if the import fails, then we are in a bad spot. I do get an error, which is not highly informative but does say to re-export from NAADSM. But by this point it has wiped out the population that belonged to CURRENT. The population is still lurking out in the saved database, but the prompt is saying that changes need to be saved, and once you save, you wipe the population out of CURRENT. Boom! You have messed up. This happens if you close or navigate away.
Option - force the user to name and save early in the process instead of naming based on the files?
|
1.0
|
Legacy Import Fail wipes previous file population - OK, this is a weird one, and I can't post the files so I may have to email them. Maybe we can figure out the process.
When I create a new scenario from Legacy, it leaves the currently open scenario in a weird state. It doesn't force a save on it, or wipe it to new. It leaves the name the same (I will call it CURRENT for this example). If the import is successful, then it does rename the scenario based on how the parameter and population files were named.
However, if the import fails, then we are in a bad spot. I do get an error, which is not highly informative but does say to re-export from NAADSM. But by this point it has wiped out the population that belonged to CURRENT. The population is still lurking out in the saved database, but the prompt is saying that changes need to be saved, and once you save, you wipe the population out of CURRENT. Boom! You have messed up. This happens if you close or navigate away.
Option - force the user to name and save early in the process instead of naming based on the files?
|
non_process
|
legacy import fail wipes previous file population ok this is a weird one and i can t post the files so i may have to email them maybe we can figure out the process when i create a new scenario from legacy it leaves the currently open scenario in a weird state it doesn t force a save on it or wipe it to new it leaves the name the same i will call it current for this example if the import is successful then it does rename the scenario based on how the parameter and population files were named however if the import fails then we are in a bad spot i do get an error which is not highly informative but does say to re export from naadsm but by this point it has wiped out the population that belonged to current the population is still lurking out in the saved database but the prompt is saying that changes need to be saved and once you save you wipe the population out of current boom you have messed up this happens if you close or navigate away option force the user to name and save early in the process instead of naming based on the files
| 0
|
336,428
| 30,193,300,065
|
IssuesEvent
|
2023-07-04 17:33:12
|
Simple-as-Coding/tutoring-platform
|
https://api.github.com/repos/Simple-as-Coding/tutoring-platform
|
opened
|
Test for UserServiceImpl - isUserAlreadyTeacher
|
Unit tests
|
Unit test for the **isUserAlreadyTeacher** method:
- [ ] Verify that it returns true when the user has the teacher role.
- [ ] Verify that it returns false when the user doesn't have the teacher role.
_**Instructions for the task**_
- use the convention //given //when //then
- mock all injected dependencies with business logic using the Mockito library, e.g. UserRepository
- try to use the BDD methodology, e.g. the assertJ library (optional)
|
1.0
|
Test for UserServiceImpl - isUserAlreadyTeacher - Unit test for the **isUserAlreadyTeacher** method:
- [ ] Verify that it returns true when the user has the teacher role.
- [ ] Verify that it returns false when the user doesn't have the teacher role.
_**Instructions for the task**_
- use the convention //given //when //then
- mock all injected dependencies with business logic using the Mockito library, e.g. UserRepository
- try to use the BDD methodology, e.g. the assertJ library (optional)
|
non_process
|
test for userserviceimpl isuseralreadyteacher unit test for the isuseralreadyteacher method verify that it returns true when the user has the teacher role verify that it returns false when the user doesn t have the teacher role instructions for the task use the convention given when then mock all injected dependencies with business logic using the mockito library e g userrepository try to use the bdd methodology e g the assertj library optional
| 0
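The task above targets Java with Mockito (and optionally assertJ); purely as an illustration of the requested //given //when //then shape, here it is sketched as a jest analogue in TypeScript. `UserService`, `UserRepository` and the `TEACHER` role name are assumed stand-ins for the real project types:

```typescript
// jest analogue of the requested Mockito test; all names are assumptions.
interface UserRepository {
  findRolesByUserId(id: number): Promise<string[]>;
}

class UserService {
  constructor(private readonly repo: UserRepository) {}

  async isUserAlreadyTeacher(id: number): Promise<boolean> {
    return (await this.repo.findRolesByUserId(id)).includes("TEACHER");
  }
}

describe("isUserAlreadyTeacher", () => {
  it("returns true when the user has the teacher role", async () => {
    // given
    const repo: UserRepository = {
      findRolesByUserId: jest.fn().mockResolvedValue(["TEACHER"]),
    };
    const service = new UserService(repo);
    // when
    const result = await service.isUserAlreadyTeacher(1);
    // then
    expect(result).toBe(true);
  });

  it("returns false when the user doesn't have the teacher role", async () => {
    // given
    const repo: UserRepository = {
      findRolesByUserId: jest.fn().mockResolvedValue(["STUDENT"]),
    };
    const service = new UserService(repo);
    // when
    const result = await service.isUserAlreadyTeacher(2);
    // then
    expect(result).toBe(false);
  });
});
```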
|
14,632
| 17,767,803,565
|
IssuesEvent
|
2021-08-30 09:47:18
|
googleapis/nodejs-billing
|
https://api.github.com/repos/googleapis/nodejs-billing
|
closed
|
Dependency Dashboard
|
type: process api: cloudbilling
|
This issue contains a list of Renovate updates and their statuses.
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/gts-3.x -->[chore(deps): update dependency gts to v3](../pull/102)
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/155) (`mocha`, `@types/mocha`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/gts-3.x -->[chore(deps): update dependency gts to v3](../pull/102)
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/155) (`mocha`, `@types/mocha`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses awaiting schedule these updates are awaiting their schedule click on a checkbox to get an update now chore deps update actions setup node action to ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull mocha types mocha check this box to trigger a request for renovate to run again on this repository
| 1
|
6,166
| 9,072,053,077
|
IssuesEvent
|
2019-02-15 01:05:19
|
pelias/geonames
|
https://api.github.com/repos/pelias/geonames
|
closed
|
city classified as venue
|
processed
|
``` json
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
9.19199,
45.46416
]
},
"properties": {
"id": "6542283",
"gid": "geonames:venue:6542283",
"layer": "venue",
"source": "geonames",
"name": "Milano",
"confidence": 0.882,
"country": "Italy",
"country_gid": "whosonfirst:country:85633253",
"country_a": "ITA",
"macroregion": "Lombardia",
"macroregion_gid": "whosonfirst:macroregion:404227497",
"region": "Milano",
"region_gid": "whosonfirst:region:85685243",
"localadmin": "Milano",
"localadmin_gid": "whosonfirst:localadmin:404468459",
"locality": "Milano",
"locality_gid": "whosonfirst:locality:101752703",
"neighbourhood": "Oggiaro",
"neighbourhood_gid": "whosonfirst:neighbourhood:85796907",
"label": "Milano, Oggiaro, Italy"
}
}
```
|
1.0
|
city classified as venue - ``` json
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
9.19199,
45.46416
]
},
"properties": {
"id": "6542283",
"gid": "geonames:venue:6542283",
"layer": "venue",
"source": "geonames",
"name": "Milano",
"confidence": 0.882,
"country": "Italy",
"country_gid": "whosonfirst:country:85633253",
"country_a": "ITA",
"macroregion": "Lombardia",
"macroregion_gid": "whosonfirst:macroregion:404227497",
"region": "Milano",
"region_gid": "whosonfirst:region:85685243",
"localadmin": "Milano",
"localadmin_gid": "whosonfirst:localadmin:404468459",
"locality": "Milano",
"locality_gid": "whosonfirst:locality:101752703",
"neighbourhood": "Oggiaro",
"neighbourhood_gid": "whosonfirst:neighbourhood:85796907",
"label": "Milano, Oggiaro, Italy"
}
}
```
|
process
|
city classified as venue javascript type feature geometry type point coordinates properties id gid geonames venue layer venue source geonames name milano confidence country italy country gid whosonfirst country country a ita macroregion lombardia macroregion gid whosonfirst macroregion region milano region gid whosonfirst region localadmin milano localadmin gid whosonfirst localadmin locality milano locality gid whosonfirst locality neighbourhood oggiaro neighbourhood gid whosonfirst neighbourhood label milano oggiaro italy
| 1
|
10,891
| 13,671,648,911
|
IssuesEvent
|
2020-09-29 07:20:54
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
cli/introspect tests should exit
|
process/candidate team/typescript
|
As of now, there is an open handle in the jest tests in cli/introspect. We should fix that.
|
1.0
|
cli/introspect tests should exit - As of now, there is an open handle in the jest tests in cli/introspect. We should fix that.
|
process
|
cli introspect tests should exit as of now there is an open handle in the jest tests in cli introspect we should fix that
| 1
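For context on what usually resolves this kind of report (not Prisma's actual fix, just the generic jest pattern): `jest --detectOpenHandles` points at the leaked resource, and an `afterAll` hook closes it. A minimal sketch with a hypothetical long-lived child process standing in for whatever the cli/introspect suite leaves open:

```typescript
// Sketch only: the spawned process is a placeholder for the real open handle.
import { ChildProcess, spawn } from "child_process";

let child: ChildProcess | undefined;

beforeAll(() => {
  // Hypothetical stand-in for a process the introspect tests might start.
  child = spawn("node", ["-e", "setInterval(() => {}, 1000)"]);
});

afterAll(() => {
  // Without this, the child keeps the event loop alive and jest never exits.
  child?.kill();
});

test("spawned process is alive during the suite", () => {
  expect(child?.pid).toBeGreaterThan(0);
});
```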
|
367,142
| 10,840,985,454
|
IssuesEvent
|
2019-11-12 09:31:37
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.coinbase.com - see bug description
|
browser-firefox engine-gecko priority-normal
|
<!-- @browser: Firefox 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: https://www.coinbase.com/dashboard
**Browser / Version**: Firefox 71.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Coinbase requires Chrome for verification
**Steps to Reproduce**:
Verification fails on Firefox but works fine on Chrome.
On Coinbase's support website, they said: "Use the Chrome browser to complete verification".
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.coinbase.com - see bug description - <!-- @browser: Firefox 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: https://www.coinbase.com/dashboard
**Browser / Version**: Firefox 71.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Coinbase requires Chrome for verification
**Steps to Reproduce**:
Verification fails on Firefox but works fine on Chrome.
On Coinbase's support website, they said: "Use the Chrome browser to complete verification".
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox operating system windows tested another browser yes problem type something else description coinbase requires chrome for verification steps to reproduce verification fails on firefox but works fine on chrome on coinbase s support website they said use the chrome browser to complete verification browser configuration none from with ❤️
| 0