| Column | Type | Value / length stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | fixed length 19 |
| repo | string | lengths 4 to 112 |
| repo_url | string | lengths 33 to 141 |
| action | string | 3 distinct values |
| title | string | lengths 1 to 1.02k |
| labels | string | lengths 4 to 1.54k |
| body | string | lengths 1 to 262k |
| index | string | 17 distinct values |
| text_combine | string | lengths 95 to 262k |
| label | string | 2 distinct values |
| text | string | lengths 96 to 252k |
| binary_label | int64 | 0 to 1 |
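The records below follow this schema, one field per line. A minimal sketch for loading and sanity-checking the data with pandas, assuming a hypothetical CSV export named `issues.csv` (the dump does not state the actual distribution format):

```python
import pandas as pd

# "issues.csv" is a hypothetical filename for illustration only.
df = pd.read_csv("issues.csv")

# The fifteen columns from the schema above, in order.
expected = ["Unnamed: 0", "id", "type", "created_at", "repo", "repo_url",
            "action", "title", "labels", "body", "index", "text_combine",
            "label", "text", "binary_label"]
assert list(df.columns) == expected

# label has two classes; the records below suggest the encoding
# test -> 1, non_test -> 0 in binary_label.
assert set(df["label"].unique()) <= {"test", "non_test"}
assert (df["binary_label"] == (df["label"] == "test").astype(int)).all()
```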

Unnamed: 0: 301,669
id: 26,084,919,865
type: IssuesEvent
created_at: 2022-12-26 00:35:01
repo: devssa/onde-codar-em-salvador
repo_url: https://api.github.com/repos/devssa/onde-codar-em-salvador
action: closed
title: [REMOTO] Pessoa Engenheira de Software na Kenoby
labels: EFETIVO(CLT) JAVASCRIPT AWS REMOTO TESTE INTEGRAÇÃO TESTES AUTOMATIZADOS HELP WANTED feature_request Stale
body:
A Kenoby, empresa que vem revolucionando o mercado de recrutamento, foi escolhida pela Endeavor como uma das melhores 16 scale up SAAS! Atualmente, a empresa recebeu um investimento de 20 milhões do fundo Astela e é uma das startups que mais vem crescendo no Brasil. Estamos te esperando. Topa o desafio? 🙂 https://byintera.in/z9 ## Local Remoto ## Requisitos Não estamos em busca de nenhuma stack especifica, buscamos pessoas com propósito, proatividade, mão na massa e dispostas a aprender e crescer exponencialmente. Serão bem vindos conhecimentos em Javascript, AWS, testes automatizados, integração contínua... ## Benefícios - 30 dias de licença remunerada. - Licença maternidade/paternidade. - Clube de descontos ## Contratação CLT ## Como se candidatar https://byintera.in/z9 ## Labels #### Alocação - Remoto #### Regime - CLT #### Nível - Sênior - Especialista - Líder
index: 2.0
text_combine:
[REMOTO] Pessoa Engenheira de Software na Kenoby - A Kenoby, empresa que vem revolucionando o mercado de recrutamento, foi escolhida pela Endeavor como uma das melhores 16 scale up SAAS! Atualmente, a empresa recebeu um investimento de 20 milhões do fundo Astela e é uma das startups que mais vem crescendo no Brasil. Estamos te esperando. Topa o desafio? 🙂 https://byintera.in/z9 ## Local Remoto ## Requisitos Não estamos em busca de nenhuma stack especifica, buscamos pessoas com propósito, proatividade, mão na massa e dispostas a aprender e crescer exponencialmente. Serão bem vindos conhecimentos em Javascript, AWS, testes automatizados, integração contínua... ## Benefícios - 30 dias de licença remunerada. - Licença maternidade/paternidade. - Clube de descontos ## Contratação CLT ## Como se candidatar https://byintera.in/z9 ## Labels #### Alocação - Remoto #### Regime - CLT #### Nível - Sênior - Especialista - Líder
label: test
text:
pessoa engenheira de software na kenoby a kenoby empresa que vem revolucionando o mercado de recrutamento foi escolhida pela endeavor como uma das melhores scale up saas atualmente a empresa recebeu um investimento de milhões do fundo astela e é uma das startups que mais vem crescendo no brasil estamos te esperando topa o desafio 🙂 local remoto requisitos não estamos em busca de nenhuma stack especifica buscamos pessoas com propósito proatividade mão na massa e dispostas a aprender e crescer exponencialmente serão bem vindos conhecimentos em javascript aws testes automatizados integração contínua benefícios dias de licença remunerada licença maternidade paternidade clube de descontos contratação clt como se candidatar labels alocação remoto regime clt nível sênior especialista líder
binary_label: 1
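In this record, and in every record below, `text_combine` is simply the title and body joined with " - ". A one-line sketch of that inferred construction, reusing the hypothetical `df` from above:

```python
# Inferred from the records shown here: text_combine = title + " - " + body.
reconstructed = df["title"] + " - " + df["body"]
assert (reconstructed == df["text_combine"]).all()
```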

Unnamed: 0: 108,259
id: 23,587,663,335
type: IssuesEvent
created_at: 2022-08-23 12:57:30
repo: timescale/tobs
repo_url: https://api.github.com/repos/timescale/tobs
action: closed
title: Update how we set Promscale image/tag
labels: dependencies feature epic/tobs-code-stabilization
body:
Creating this issue to remind me if [timescale/promscale#1520](https://github.com/timescale/promscale/pull/1525) is merged, we will need to refactor how we set the container image and tag in `values.yaml` for Promscale. This will need to be updated only when the PR is approved and the Promscale Helm chart version is updated.
index: 1.0
text_combine:
Update how we set Promscale image/tag - Creating this issue to remind me if [timescale/promscale#1520](https://github.com/timescale/promscale/pull/1525) is merged, we will need to refactor how we set the container image and tag in `values.yaml` for Promscale. This will need to be updated only when the PR is approved and the Promscale Helm chart version is updated.
label: non_test
text:
update how we set promscale image tag creating this issue to remind me if is merged we will need to refactor how we set the container image and tag in values yaml for promscale this will need to be updated only when the pr is approved and the promscale helm chart version is updated
binary_label: 0
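Comparing `text_combine` with `text` in the two records above suggests how the cleaned `text` column was produced: square-bracketed spans and markdown links (with their URLs) are removed, bare URLs are removed, the remainder is lowercased, punctuation and underscores act as token separators, and any token containing a digit is dropped. A sketch of that inferred heuristic, not the dataset's actual preprocessing code:

```python
import re

def clean(text_combine: str) -> str:
    # Square-bracketed spans ([REMOTO], [x], markdown link text) and any
    # attached (url) target vanish entirely in the cleaned text.
    s = re.sub(r"\[[^\]]*\](\([^)]*\))?", " ", text_combine)
    # Bare URLs vanish as well.
    s = re.sub(r"https?://\S+", " ", s)
    s = s.lower()
    # Punctuation and underscores separate tokens; accented letters survive.
    tokens = re.split(r"[\W_]+", s)
    # Tokens containing digits (versions, ids, hex blobs) are dropped.
    return " ".join(t for t in tokens if t and not any(c.isdigit() for c in t))

# e.g. clean(df.loc[0, "text_combine"]) should approximate df.loc[0, "text"]
```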

Unnamed: 0: 242,043
id: 20,191,199,870
type: IssuesEvent
created_at: 2022-02-11 05:42:35
repo: natyncaldas/todo-list-crud
repo_url: https://api.github.com/repos/natyncaldas/todo-list-crud
action: closed
title: Labels route testing
labels: unit test
body:
### Requests `GET /api/v1/labels` `GET /api/v1/labels/:id` `POST /api/v1/labels` `PUT /api/v1/labels/:id` `DELETE /api/v1/labels/:id`
index: 1.0
text_combine:
Labels route testing - ### Requests `GET /api/v1/labels` `GET /api/v1/labels/:id` `POST /api/v1/labels` `PUT /api/v1/labels/:id` `DELETE /api/v1/labels/:id`
label: test
text:
labels route testing requests get api labels get api labels id post api labels put api labels id delete api labels id
binary_label: 1

Unnamed: 0: 285,283
id: 24,656,023,542
type: IssuesEvent
created_at: 2022-10-17 23:38:47
repo: QubesOS/updates-status
repo_url: https://api.github.com/repos/QubesOS/updates-status
action: closed
title: vmm-xen v4.14.5-8 (r4.2)
labels: r4.2-host-cur-test r4.2-vm-bullseye-cur-test r4.2-vm-bookworm-cur-test
body:
Update of vmm-xen to v4.14.5-8 for Qubes r4.2, see comments below for details and build status. From commit: https://github.com/QubesOS/qubes-vmm-xen/commit/1b7f208dd7d42435bad111a7234dedeaaff5f43c [Changes since previous version](https://github.com/QubesOS/qubes-vmm-xen/compare/v4.14.5-7...v4.14.5-8): QubesOS/qubes-vmm-xen@1b7f208 version 4.14.5-8 QubesOS/qubes-vmm-xen@50cda41 Merge branch 'console-xhci' into xen-4.14 QubesOS/qubes-vmm-xen@db06b38 Merge branch 'qmp-proxy-race' into xen-4.14 QubesOS/qubes-vmm-xen@02447c0 rpm: bump stubdom version dependency QubesOS/qubes-vmm-xen@d8e28a6 Add XHCI DbC console support QubesOS/qubes-vmm-xen@046904b Backport fix for XSTATE reporting QubesOS/qubes-vmm-xen@4087b3b Backport GCC12 build fixes QubesOS/qubes-vmm-xen@02577ba Apply fix for HVM startup race condition QubesOS/qubes-vmm-xen@8f9e2b6 Backport LPSS console support Referenced issues: QubesOS/qubes-issues#6824 If you're release manager, you can issue GPG-inline signed command: * `Upload-component r4.2 vmm-xen 1b7f208dd7d42435bad111a7234dedeaaff5f43c current all` (available 5 days from now) * `Upload-component r4.2 vmm-xen 1b7f208dd7d42435bad111a7234dedeaaff5f43c security-testing` You can choose subset of distributions like: * `Upload-component r4.2 vmm-xen 1b7f208dd7d42435bad111a7234dedeaaff5f43c current vm-bookworm,vm-fc37` (available 5 days from now) Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it). For more information on how to test this update, please take a look at https://www.qubes-os.org/doc/testing/#updates.
index: 3.0
text_combine:
vmm-xen v4.14.5-8 (r4.2) - Update of vmm-xen to v4.14.5-8 for Qubes r4.2, see comments below for details and build status. From commit: https://github.com/QubesOS/qubes-vmm-xen/commit/1b7f208dd7d42435bad111a7234dedeaaff5f43c [Changes since previous version](https://github.com/QubesOS/qubes-vmm-xen/compare/v4.14.5-7...v4.14.5-8): QubesOS/qubes-vmm-xen@1b7f208 version 4.14.5-8 QubesOS/qubes-vmm-xen@50cda41 Merge branch 'console-xhci' into xen-4.14 QubesOS/qubes-vmm-xen@db06b38 Merge branch 'qmp-proxy-race' into xen-4.14 QubesOS/qubes-vmm-xen@02447c0 rpm: bump stubdom version dependency QubesOS/qubes-vmm-xen@d8e28a6 Add XHCI DbC console support QubesOS/qubes-vmm-xen@046904b Backport fix for XSTATE reporting QubesOS/qubes-vmm-xen@4087b3b Backport GCC12 build fixes QubesOS/qubes-vmm-xen@02577ba Apply fix for HVM startup race condition QubesOS/qubes-vmm-xen@8f9e2b6 Backport LPSS console support Referenced issues: QubesOS/qubes-issues#6824 If you're release manager, you can issue GPG-inline signed command: * `Upload-component r4.2 vmm-xen 1b7f208dd7d42435bad111a7234dedeaaff5f43c current all` (available 5 days from now) * `Upload-component r4.2 vmm-xen 1b7f208dd7d42435bad111a7234dedeaaff5f43c security-testing` You can choose subset of distributions like: * `Upload-component r4.2 vmm-xen 1b7f208dd7d42435bad111a7234dedeaaff5f43c current vm-bookworm,vm-fc37` (available 5 days from now) Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it). For more information on how to test this update, please take a look at https://www.qubes-os.org/doc/testing/#updates.
label: test
text:
vmm xen update of vmm xen to for qubes see comments below for details and build status from commit qubesos qubes vmm xen version qubesos qubes vmm xen merge branch console xhci into xen qubesos qubes vmm xen merge branch qmp proxy race into xen qubesos qubes vmm xen rpm bump stubdom version dependency qubesos qubes vmm xen add xhci dbc console support qubesos qubes vmm xen backport fix for xstate reporting qubesos qubes vmm xen backport build fixes qubesos qubes vmm xen apply fix for hvm startup race condition qubesos qubes vmm xen backport lpss console support referenced issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload component vmm xen current all available days from now upload component vmm xen security testing you can choose subset of distributions like upload component vmm xen current vm bookworm vm available days from now above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it for more information on how to test this update please take a look at
binary_label: 1

Unnamed: 0: 140,673
id: 11,355,562,996
type: IssuesEvent
created_at: 2020-01-24 20:20:55
repo: elastic/elasticsearch
repo_url: https://api.github.com/repos/elastic/elasticsearch
action: opened
title: org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStoreRepositoryTests testSnapshotWithLargeSegmentFiles
labels: :Distributed/Snapshot/Restore >test-failure
body:
Reproduce with: ``` ./gradlew ':plugins:repository-gcs:test' --tests "org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStoreRepositoryTests.testSnapshotWithLargeSegmentFiles" -Dtests.seed=79146B9E4DC8DB20 -Dtests.security.manager=true -Dtests.locale=ro -Dtests.timezone=America/La_Paz -Dcompiler.java=13 -Druntime.java=8 ``` Error ``` java.lang.AssertionError: Only index blobs should remain in repository but found [indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__U-yvyHGGSC-HG3-rYgESAQ, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__2W1fsDR7RHqVaHUbR2ljDg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__PYCKH3IuR8OvsesbZ53Q4A, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__ChKaNwG-QDeOLEFMBmV4hg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__4BrkUvaDSoaRxda8zaYzmw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__8TyJLropTGCbwEdpGgkCgg, snap-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__TYjSMqKWSAyimBORtvz2hQ, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__nr1hKmAYS8mwQoYGfONLxA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__AIvhEKrQTT6rQ95Q_a6Mcw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/snap-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__OYx13Q6MT329W_IM0oSGsg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__lc7ep5qgSfGUi5XOU2GgVA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__FnKJqi01RL6xd-BqebZ9qw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__iv8fkpEiRkGaiSfcDb6cTg, indices/U3Ra9xA9QYOIf2JnrS9qiw/meta-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__45KqhxnDQJyJbNxg5bOdBw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__zLdkon40Q5aMTM5YjMsXWQ, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__UAbVvyV5QHq0RsZGVRdphg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__jXvlBf1lTn6Ct1d2xyEeHg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__ED_68lu_Qci0brGcdBWWqA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__0j2RqpNKRKqy6RNbv4iniA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__SqeeF3m4TkqcEJrcZfx0XA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__6j7bvik8RDqSNmZIHaHdlw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__gj5mlfNKQmC_sqRVWkQh4A, meta-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__YCpLvzOYR6m3ZMbSCjvCuw] Expected: a collection with size <0> but: collection size was <27> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.repositories.blobstore.ESMockAPIBasedRepositoryIntegTestCase.tearDownHttpServer(ESMockAPIBasedRepositoryIntegTestCase.java:112) ``` Build scan : https://gradle-enterprise.elastic.co/s/5agjwd5uxd4g6 90 day history <img width="1396" alt="image" src="https://user-images.githubusercontent.com/976291/73100474-39528980-3eb3-11ea-98b5-42f496bb73ee.png"> Suspected related issues (via comments from prior failures) * https://github.com/elastic/elasticsearch/pull/48541 * https://github.com/elastic/elasticsearch/commit/732fc4d755abf1a6395e55fd50bb4762bbf1945d Note - it seems this happens (exclusively?) on 7.5/7.6/7.x and sometimes, but not always have a SocketTimeout too. For example (different build scan then above) https://gradle-enterprise.elastic.co/s/4z2vxrxrohjmq/console-log?anchor=7206
index: 1.0
text_combine:
org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStoreRepositoryTests testSnapshotWithLargeSegmentFiles - Reproduce with: ``` ./gradlew ':plugins:repository-gcs:test' --tests "org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStoreRepositoryTests.testSnapshotWithLargeSegmentFiles" -Dtests.seed=79146B9E4DC8DB20 -Dtests.security.manager=true -Dtests.locale=ro -Dtests.timezone=America/La_Paz -Dcompiler.java=13 -Druntime.java=8 ``` Error ``` java.lang.AssertionError: Only index blobs should remain in repository but found [indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__U-yvyHGGSC-HG3-rYgESAQ, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__2W1fsDR7RHqVaHUbR2ljDg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__PYCKH3IuR8OvsesbZ53Q4A, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__ChKaNwG-QDeOLEFMBmV4hg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__4BrkUvaDSoaRxda8zaYzmw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__8TyJLropTGCbwEdpGgkCgg, snap-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__TYjSMqKWSAyimBORtvz2hQ, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__nr1hKmAYS8mwQoYGfONLxA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__AIvhEKrQTT6rQ95Q_a6Mcw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/snap-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__OYx13Q6MT329W_IM0oSGsg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__lc7ep5qgSfGUi5XOU2GgVA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__FnKJqi01RL6xd-BqebZ9qw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__iv8fkpEiRkGaiSfcDb6cTg, indices/U3Ra9xA9QYOIf2JnrS9qiw/meta-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__45KqhxnDQJyJbNxg5bOdBw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__zLdkon40Q5aMTM5YjMsXWQ, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__UAbVvyV5QHq0RsZGVRdphg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__jXvlBf1lTn6Ct1d2xyEeHg, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__ED_68lu_Qci0brGcdBWWqA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__0j2RqpNKRKqy6RNbv4iniA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__SqeeF3m4TkqcEJrcZfx0XA, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__6j7bvik8RDqSNmZIHaHdlw, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__gj5mlfNKQmC_sqRVWkQh4A, meta-6BvhIzAVS8Wk0yEzY1yHHw.dat, indices/U3Ra9xA9QYOIf2JnrS9qiw/0/__YCpLvzOYR6m3ZMbSCjvCuw] Expected: a collection with size <0> but: collection size was <27> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.repositories.blobstore.ESMockAPIBasedRepositoryIntegTestCase.tearDownHttpServer(ESMockAPIBasedRepositoryIntegTestCase.java:112) ``` Build scan : https://gradle-enterprise.elastic.co/s/5agjwd5uxd4g6 90 day history <img width="1396" alt="image" src="https://user-images.githubusercontent.com/976291/73100474-39528980-3eb3-11ea-98b5-42f496bb73ee.png"> Suspected related issues (via comments from prior failures) * https://github.com/elastic/elasticsearch/pull/48541 * https://github.com/elastic/elasticsearch/commit/732fc4d755abf1a6395e55fd50bb4762bbf1945d Note - it seems this happens (exclusively?) on 7.5/7.6/7.x and sometimes, but not always have a SocketTimeout too. For example (different build scan then above) https://gradle-enterprise.elastic.co/s/4z2vxrxrohjmq/console-log?anchor=7206
label: test
text:
org elasticsearch repositories gcs googlecloudstorageblobstorerepositorytests testsnapshotwithlargesegmentfiles reproduce with gradlew plugins repository gcs test tests org elasticsearch repositories gcs googlecloudstorageblobstorerepositorytests testsnapshotwithlargesegmentfiles dtests seed dtests security manager true dtests locale ro dtests timezone america la paz dcompiler java druntime java error java lang assertionerror only index blobs should remain in repository but found expected a collection with size but collection size was at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org elasticsearch repositories blobstore esmockapibasedrepositoryintegtestcase teardownhttpserver esmockapibasedrepositoryintegtestcase java build scan day history img width alt image src suspected related issues via comments from prior failures note it seems this happens exclusively on x and sometimes but not always have a sockettimeout too for example different build scan then above
binary_label: 1

Unnamed: 0: 211,440
id: 16,241,532,191
type: IssuesEvent
created_at: 2021-05-07 10:05:56
repo: x0b/rcx
repo_url: https://api.github.com/repos/x0b/rcx
action: closed
title: Union not working on Cloud Storage Remotes
labels: Needs Retest Priority: P2 🐞 Bug
body:
Hi, Ive been trying to use the Union to merge two cloud storage remotes I've already created in the app. The merge seems to be going fine but when I try to access the merged folder the app throws an error stating 'Error Retrieving directory content'. Im also attaching the config file where you can see the entry for the Union between pcloud and drive. > 2021-01-03 22:31:37 - 2021/01/04 03:31:37 DEBUG : rclone: Version "v1.51.0" starting with parameters ["/data/app/io.github.x0b.rcx-9ATM-i4YQNNCof80_O__2g==/lib/arm64/librclone.so" "--cache-chunk-path" "/data/user/0/io.github.x0b.rcx/cache" "--cache-db-path" "/data/user/0/io.github.x0b.rcx/cache" "--config" "/data/user/0/io.github.x0b.rcx/files/rclone.conf" "-vvv" "lsjson" "Combined:"] 2021/01/04 03:31:37 DEBUG : Using config file from "/data/user/0/io.github.x0b.rcx/files/rclone.conf" 2021/01/04 03:31:37 Failed to create file system for "Combined:": didn't find section in config file Config file export : > [5d44cd8d-397c-4107-b79b-17f2b6a071e8] type = alias remote = /storage/emulated/0 [GDrive Pro] type = drive scope = drive token = "Redacted for privacy" > [Pcloud] type = pcloud token = "Redacted for privacy" > [Combined] type = union remotes = Pcloud:Rclone Encrypted GDrive Pro:RcloneEncrypted
index: 1.0
text_combine:
Union not working on Cloud Storage Remotes - Hi, Ive been trying to use the Union to merge two cloud storage remotes I've already created in the app. The merge seems to be going fine but when I try to access the merged folder the app throws an error stating 'Error Retrieving directory content'. Im also attaching the config file where you can see the entry for the Union between pcloud and drive. > 2021-01-03 22:31:37 - 2021/01/04 03:31:37 DEBUG : rclone: Version "v1.51.0" starting with parameters ["/data/app/io.github.x0b.rcx-9ATM-i4YQNNCof80_O__2g==/lib/arm64/librclone.so" "--cache-chunk-path" "/data/user/0/io.github.x0b.rcx/cache" "--cache-db-path" "/data/user/0/io.github.x0b.rcx/cache" "--config" "/data/user/0/io.github.x0b.rcx/files/rclone.conf" "-vvv" "lsjson" "Combined:"] 2021/01/04 03:31:37 DEBUG : Using config file from "/data/user/0/io.github.x0b.rcx/files/rclone.conf" 2021/01/04 03:31:37 Failed to create file system for "Combined:": didn't find section in config file Config file export : > [5d44cd8d-397c-4107-b79b-17f2b6a071e8] type = alias remote = /storage/emulated/0 [GDrive Pro] type = drive scope = drive token = "Redacted for privacy" > [Pcloud] type = pcloud token = "Redacted for privacy" > [Combined] type = union remotes = Pcloud:Rclone Encrypted GDrive Pro:RcloneEncrypted
label: test
text:
union not working on cloud storage remotes hi ive been trying to use the union to merge two cloud storage remotes i ve already created in the app the merge seems to be going fine but when i try to access the merged folder the app throws an error stating error retrieving directory content im also attaching the config file where you can see the entry for the union between pcloud and drive debug rclone version starting with parameters debug using config file from data user io github rcx files rclone conf failed to create file system for combined didn t find section in config file config file export type alias remote storage emulated type drive scope drive token redacted for privacy type pcloud token redacted for privacy type union remotes pcloud rclone encrypted gdrive pro rcloneencrypted
binary_label: 1

Unnamed: 0: 330,650
id: 28,456,305,568
type: IssuesEvent
created_at: 2023-04-17 07:21:16
repo: microsoft/vscode
repo_url: https://api.github.com/repos/microsoft/vscode
action: closed
title: Error: Timeout: get element '.extension-editor .monaco-action-bar .action-item:not(.disabled) .extension-action.uninstall' after 20 seconds
labels: smoke-test-failure
body:
Build: https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=210762 Changes: https://github.com/Microsoft/vscode/compare/8e85312...5d454b0
index: 1.0
text_combine:
Error: Timeout: get element '.extension-editor .monaco-action-bar .action-item:not(.disabled) .extension-action.uninstall' after 20 seconds - Build: https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=210762 Changes: https://github.com/Microsoft/vscode/compare/8e85312...5d454b0
label: test
text:
error timeout get element extension editor monaco action bar action item not disabled extension action uninstall after seconds build changes
binary_label: 1

Unnamed: 0: 59,207
id: 17,016,473,760
type: IssuesEvent
created_at: 2021-07-02 12:47:05
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: opened
title: Incorrect speed camera tag
labels: Component: poiexport Priority: minor Type: defect
body:
**[Submitted to the original trac issue database at 9.24pm, Monday, 13th January 2014]** index.php lists speed camera tag as amenity:speed_camera, but for long time now speed cameras are captured as highway:speed_camera Therefore this one in index.php: <option value="amenity:speed_camera"><?php msg('Speed camera'); ?></option> should be replaced with this one: <option value="highway:speed_camera"><?php msg('Speed camera'); ?></option>
index: 1.0
text_combine:
Incorrect speed camera tag - **[Submitted to the original trac issue database at 9.24pm, Monday, 13th January 2014]** index.php lists speed camera tag as amenity:speed_camera, but for long time now speed cameras are captured as highway:speed_camera Therefore this one in index.php: <option value="amenity:speed_camera"><?php msg('Speed camera'); ?></option> should be replaced with this one: <option value="highway:speed_camera"><?php msg('Speed camera'); ?></option>
label: non_test
text:
incorrect speed camera tag index php lists speed camera tag as amenity speed camera but for long time now speed cameras are captured as highway speed camera therefore this one in index php should be replaced with this one
binary_label: 0

Unnamed: 0: 323,719
id: 27,748,458,604
type: IssuesEvent
created_at: 2023-03-15 18:45:29
repo: MPMG-DCC-UFMG/F01
repo_url: https://api.github.com/repos/MPMG-DCC-UFMG/F01
action: closed
title: Teste de generalizacao para a tag Despesas - Pagamentos - Santana do Jacaré
labels: generalization test development
body:
DoD: Realizar o teste de Generalização do validador da tag Despesas - Pagamentos para o Município de Santana do Jacaré.
index: 1.0
text_combine:
Teste de generalizacao para a tag Despesas - Pagamentos - Santana do Jacaré - DoD: Realizar o teste de Generalização do validador da tag Despesas - Pagamentos para o Município de Santana do Jacaré.
label: test
text:
teste de generalizacao para a tag despesas pagamentos santana do jacaré dod realizar o teste de generalização do validador da tag despesas pagamentos para o município de santana do jacaré
binary_label: 1

Unnamed: 0: 423,795
id: 28,933,830,334
type: IssuesEvent
created_at: 2023-05-09 03:18:38
repo: Jesus180Reyes/kariken_rider_app
repo_url: https://api.github.com/repos/Jesus180Reyes/kariken_rider_app
action: closed
title: TODO Add Markers from Pickup Location to Destination Location
labels: documentation enhancement
body:
### TODO Add Markers Location from: * Pickup Location * Destination Location
index: 1.0
text_combine:
TODO Add Markers from Pickup Location to Destination Location - ### TODO Add Markers Location from: * Pickup Location * Destination Location
label: non_test
text:
todo add markers from pickup location to destination location todo add markers location from pickup location destination location
binary_label: 0

Unnamed: 0: 63,527
id: 15,614,765,016
type: IssuesEvent
created_at: 2021-03-19 18:14:24
repo: JeffShepherd/JeffShepherdSite
repo_url: https://api.github.com/repos/JeffShepherd/JeffShepherdSite
action: closed
title: Fix particle.js https load issue
labels: Initial buildout - vanilla JS bug
body:
Particle js load via CDN link not completing due to https issue. Possible fix: install package via npm rather than load via script
index: 1.0
text_combine:
Fix particle.js https load issue - Particle js load via CDN link not completing due to https issue. Possible fix: install package via npm rather than load via script
label: non_test
text:
fix particle js https load issue particle js load via cdn link not completing due to https issue possible fix install package via npm rather than load via script
binary_label: 0

Unnamed: 0: 7,398
id: 17,690,793,843
type: IssuesEvent
created_at: 2021-08-24 09:40:45
repo: RasaHQ/rasa
repo_url: https://api.github.com/repos/RasaHQ/rasa
action: closed
title: implement `NLUMessageConverter`
labels: type:enhancement :sparkles: area:rasa-oss :ferris_wheel: priority:high effort:enable-squad/1 feature:rasa-3.0/architecture
body:
**Overview of the Solution**: We need to implement the `NLUMessageConverter`. This component runs during inferences and takes a `UserMessage` object and converts it into a list of `Message` objects which can then be processed by our "NLU" components (tokenizer, featurizer, classifiers). [This figure](https://www.notion.so/rasa/Rasa-Open-Source-3-0-Architecture-Implementation-Proposal-51ab90b05c41435ca98189a101676a1e#48704bc28dab486dad98e9ebd3ab9077) shows the component's position in the graph. **Input to the component** `Optional[UserMessage]` (the message might be `None` in case we are predicting actions after actions (in contrast to predicting the first action after a user message). **Output:** A list of `Message` objects. The list has either length 1 (in case `UserMessage is not None`) or length 0 (in case `UserMessage is None`). **Definition of Done**: - [x] Component is implemented and unit tested
index: 1.0
text_combine:
implement `NLUMessageConverter` - **Overview of the Solution**: We need to implement the `NLUMessageConverter`. This component runs during inferences and takes a `UserMessage` object and converts it into a list of `Message` objects which can then be processed by our "NLU" components (tokenizer, featurizer, classifiers). [This figure](https://www.notion.so/rasa/Rasa-Open-Source-3-0-Architecture-Implementation-Proposal-51ab90b05c41435ca98189a101676a1e#48704bc28dab486dad98e9ebd3ab9077) shows the component's position in the graph. **Input to the component** `Optional[UserMessage]` (the message might be `None` in case we are predicting actions after actions (in contrast to predicting the first action after a user message). **Output:** A list of `Message` objects. The list has either length 1 (in case `UserMessage is not None`) or length 0 (in case `UserMessage is None`). **Definition of Done**: - [x] Component is implemented and unit tested
label: non_test
text:
implement nlumessageconverter overview of the solution we need to implement the nlumessageconverter this component runs during inferences and takes a usermessage object and converts it into a list of message objects which can then be processed by our nlu components tokenizer featurizer classifiers shows the component s position in the graph input to the component optional the message might be none in case we are predicting actions after actions in contrast to predicting the first action after a user message output a list of message objects the list has either length in case usermessage is not none or length in case usermessage is none definition of done component is implemented and unit tested
binary_label: 0

Unnamed: 0: 242,481
id: 26,269,401,417
type: IssuesEvent
created_at: 2023-01-06 15:34:19
repo: kaidisn/netflix_conductor_fork
repo_url: https://api.github.com/repos/kaidisn/netflix_conductor_fork
action: opened
title: CVE-2021-36374 (Medium) detected in ant-1.7.0.jar
labels: security vulnerability
body:
## CVE-2021-36374 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ant-1.7.0.jar</b></p></summary> <p>Apache Ant</p> <p>Library home page: <a href="http://ant.apache.org/">http://ant.apache.org/</a></p> <p>Path to dependency file: /cassandra-persistence/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.ant/ant/1.7.0/9746af1a485e50cf18dcb232489032a847067066/ant-1.7.0.jar</p> <p> Dependency Hierarchy: - cassandra-unit-3.5.0.1.jar (Root Library) - cassandra-all-3.11.2.jar - cassandra-thrift-3.11.2.jar - jflex-1.6.0.jar - :x: **ant-1.7.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kaidisn/netflix_conductor_fork/commit/e5f3a784765077c7776dd541a3c94011c256b35b">e5f3a784765077c7776dd541a3c94011c256b35b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> When reading a specially crafted ZIP archive, or a derived formats, an Apache Ant build can be made to allocate large amounts of memory that leads to an out of memory error, even for small inputs. This can be used to disrupt builds using Apache Ant. Commonly used derived formats from ZIP archives are for instance JAR files and many office files. Apache Ant prior to 1.9.16 and 1.10.11 were affected. <p>Publish Date: 2021-07-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-36374>CVE-2021-36374</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://ant.apache.org/security.html">https://ant.apache.org/security.html</a></p> <p>Release Date: 2021-07-14</p> <p>Fix Resolution: org.apache.ant:ant:1.9.16,1.10.11</p> </p> </details> <p></p>
index: True
text_combine:
CVE-2021-36374 (Medium) detected in ant-1.7.0.jar - ## CVE-2021-36374 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ant-1.7.0.jar</b></p></summary> <p>Apache Ant</p> <p>Library home page: <a href="http://ant.apache.org/">http://ant.apache.org/</a></p> <p>Path to dependency file: /cassandra-persistence/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.ant/ant/1.7.0/9746af1a485e50cf18dcb232489032a847067066/ant-1.7.0.jar</p> <p> Dependency Hierarchy: - cassandra-unit-3.5.0.1.jar (Root Library) - cassandra-all-3.11.2.jar - cassandra-thrift-3.11.2.jar - jflex-1.6.0.jar - :x: **ant-1.7.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kaidisn/netflix_conductor_fork/commit/e5f3a784765077c7776dd541a3c94011c256b35b">e5f3a784765077c7776dd541a3c94011c256b35b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> When reading a specially crafted ZIP archive, or a derived formats, an Apache Ant build can be made to allocate large amounts of memory that leads to an out of memory error, even for small inputs. This can be used to disrupt builds using Apache Ant. Commonly used derived formats from ZIP archives are for instance JAR files and many office files. Apache Ant prior to 1.9.16 and 1.10.11 were affected. <p>Publish Date: 2021-07-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-36374>CVE-2021-36374</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://ant.apache.org/security.html">https://ant.apache.org/security.html</a></p> <p>Release Date: 2021-07-14</p> <p>Fix Resolution: org.apache.ant:ant:1.9.16,1.10.11</p> </p> </details> <p></p>
label: non_test
text:
cve medium detected in ant jar cve medium severity vulnerability vulnerable library ant jar apache ant library home page a href path to dependency file cassandra persistence build gradle path to vulnerable library home wss scanner gradle caches modules files org apache ant ant ant jar dependency hierarchy cassandra unit jar root library cassandra all jar cassandra thrift jar jflex jar x ant jar vulnerable library found in head commit a href found in base branch master vulnerability details when reading a specially crafted zip archive or a derived formats an apache ant build can be made to allocate large amounts of memory that leads to an out of memory error even for small inputs this can be used to disrupt builds using apache ant commonly used derived formats from zip archives are for instance jar files and many office files apache ant prior to and were affected publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache ant ant
binary_label: 0

Unnamed: 0: 75,045
id: 25,498,186,394
type: IssuesEvent
created_at: 2022-11-27 23:01:32
repo: scipy/scipy
repo_url: https://api.github.com/repos/scipy/scipy
action: opened
title: BUG: accuracy failures in _svds.py with some libraries for unclear reasons
labels: defect
body:
### Describe your issue. Working on #16712, I have experienced accuracy failures in _svds.py with some libraries for unclear reasons. The output of test_small_sigma2 is sensitive to seemingly innocent accuracy issues in basic LAPACK-like libraries, so you want to track if a version of such a library has changed. It is likely a sign of a bug in a library, where an "improvement" has been made to increase efficiency at the cost of accuracy I have added a condition, line 831, to test_svds.py for condition at line 832 to pass in all tests in #16712: if n < m: # else the assert fails with some libraries unclear why assert_allclose(sp_mat.transpose() @ su, 0, atol=1e-5, rtol=1e0) See also https://github.com/scipy/scipy/pull/16712/#issuecomment-1252293815 ### Reproducing Code Example ```python Comment out line 831 in test_svds.py: if n < m: # else the assert fails with some libraries unclear why ``` ### Error message ```shell For example, https://github.com/scipy/scipy/pull/16712/#issuecomment-1237401746 ``` ### SciPy/NumPy/Python version information the current main
index: 1.0
text_combine:
BUG: accuracy failures in _svds.py with some libraries for unclear reasons - ### Describe your issue. Working on #16712, I have experienced accuracy failures in _svds.py with some libraries for unclear reasons. The output of test_small_sigma2 is sensitive to seemingly innocent accuracy issues in basic LAPACK-like libraries, so you want to track if a version of such a library has changed. It is likely a sign of a bug in a library, where an "improvement" has been made to increase efficiency at the cost of accuracy I have added a condition, line 831, to test_svds.py for condition at line 832 to pass in all tests in #16712: if n < m: # else the assert fails with some libraries unclear why assert_allclose(sp_mat.transpose() @ su, 0, atol=1e-5, rtol=1e0) See also https://github.com/scipy/scipy/pull/16712/#issuecomment-1252293815 ### Reproducing Code Example ```python Comment out line 831 in test_svds.py: if n < m: # else the assert fails with some libraries unclear why ``` ### Error message ```shell For example, https://github.com/scipy/scipy/pull/16712/#issuecomment-1237401746 ``` ### SciPy/NumPy/Python version information the current main
label: non_test
text:
bug accuracy failures in svds py with some libraries for unclear reasons describe your issue working on i have experienced accuracy failures in svds py with some libraries for unclear reasons the output of test small is sensitive to seemingly innocent accuracy issues in basic lapack like libraries so you want to track if a version of such a library has changed it is likely a sign of a bug in a library where an improvement has been made to increase efficiency at the cost of accuracy i have added a condition line to test svds py for condition at line to pass in all tests in if n m else the assert fails with some libraries unclear why assert allclose sp mat transpose su atol rtol see also reproducing code example python comment out line in test svds py if n m else the assert fails with some libraries unclear why error message shell for example scipy numpy python version information the current main
binary_label: 0

Unnamed: 0: 335,573
id: 30,052,596,792
type: IssuesEvent
created_at: 2023-06-28 02:38:44
repo: yugabyte/yugabyte-db
repo_url: https://api.github.com/repos/yugabyte/yugabyte-db
action: opened
title: [DocDB] heap-use-after-free in MasterFailoverTestIndexCreation/MasterFailoverTestIndexCreation.TestPauseAfterCreateIndexIssued/0
labels: kind/failing-test area/docdb status/awaiting-triage
body:
### Description Example log: https://jenkins.dev.yugabyte.com/job/github-yugabyte-db-alma8-master-clang16-asan/151/artifact/build/asan-clang16-dynamic-ninja/yb-test-logs/tests-integration-tests__master_failover-itest/MasterFailoverTestIndexCreation__MasterFailoverTestIndexCreation_TestPauseAfterCreateIndexIssued__0.log ``` [m-1] ==22942==ERROR: AddressSanitizer: heap-use-after-free on address 0x7fcfbc7b08b7 at pc 0x55e9c96f9a9e bp 0x7fcfd607e1f0 sp 0x7fcfd607d9b8 [m-1] READ of size 4 at 0x7fcfbc7b08b7 thread T13 (Master_reactorx) [m-1] #0 0x55e9c96f9a9d in __asan_memmove /opt/yb-build/llvm/yb-llvm-v16.0.6-yb-1-1687337167-5c765d34-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:30:3 [m-1] #1 0x7fcfef8b18bb in std::pair<char const*, char*> std::__copy_trivial_impl[abi:v160006]<char const, char>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy_move_common.h:64:3 [m-1] #2 0x7fcfef8b18bb in std::pair<char const*, char*> std::__copy_trivial::operator()[abi:v160006]<char const, char, 0>(char const*, char const*, char*) const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy.h:105:12 [m-1] #3 0x7fcfef8b18bb in std::pair<char const*, char*> std::__unwrap_and_dispatch[abi:v160006]<std::__overload<std::__copy_loop<std::_ClassicAlgPolicy>, std::__copy_trivial>, char const*, char const*, char*, 0>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy_move_common.h:107:19 [m-1] #4 0x7fcfef8b18bb in std::pair<char const*, char*> std::__dispatch_copy_or_move[abi:v160006]<std::_ClassicAlgPolicy, std::__copy_loop<std::_ClassicAlgPolicy>, std::__copy_trivial, char const*, char const*, char*>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy_move_common.h:158:10 [m-1] #5 0x7fcfef8b18bb in std::pair<char const*, char*> std::__copy[abi:v160006]<std::_ClassicAlgPolicy, char const*, char const*, char*>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy.h:112:10 [m-1] #6 0x7fcfef8b18bb in char* std::copy[abi:v160006]<char const*, char*>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy.h:119:10 [m-1] #7 0x7fcfef8b18bb in char* std::__uninitialized_allocator_copy[abi:v160006]<std::allocator<char>, char, char, (void*)0>(std::allocator<char>&, char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__memory/uninitialized_algorithms.h:585:12 [m-1] #8 0x7fcfef8b18bb in void std::vector<char, std::allocator<char>>::__construct_at_end<char const*, 0>(char const*, char const*, unsigned long) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/vector:1029:17 [m-1] #9 0x7fcfef8b0793 in 
std::__wrap_iter<char*> std::vector<char, std::allocator<char>>::insert<char const*, 0>(std::__wrap_iter<char const*>, char const*, char const*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/vector:1883:17 [m-1] #10 0x7fcfef8a151a in yb::IoVecsToBuffer(boost::container::small_vector<iovec, 4ul, void, void> const&, unsigned long, unsigned long, std::vector<char, std::allocator<char>>*) ${BUILD_ROOT}/../../src/yb/util/net/socket.cc:87:15 [m-1] #11 0x7fcff008924b in yb::rpc::BinaryCallParser::Parse(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>, std::shared_ptr<yb::MemTracker> const*) ${BUILD_ROOT}/../../src/yb/rpc/binary_call_parser.cc:86:5 [m-1] #12 0x7fcff026bad9 in yb::rpc::YBInboundConnectionContext::ProcessCalls(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:154:19 [m-1] #13 0x7fcff00b2935 in yb::rpc::Connection::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:317:27 [m-1] #14 0x7fcff0193b03 in yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #15 0x7fcff0195de1 in non-virtual thunk to yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #16 0x7fcff025747e in yb::rpc::TcpStream::TryProcessReceived() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:408:17 [m-1] #17 0x7fcff0253fb3 in yb::rpc::TcpStream::ReadHandler() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:334:31 [m-1] #18 0x7fcff0252a44 in yb::rpc::TcpStream::Handler(ev::io&, int) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:276:14 [m-1] #19 0x7fcfeea2c6ca in ev_invoke_pending (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x86ca) [m-1] #20 0x7fcfeea2d3c6 in ev_run (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x93c6) [m-1] #21 0x7fcff01594fc in ev::loop_ref::run(int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/include/ev++.h:211:7 [m-1] #22 0x7fcff01594fc in yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:630:9 [m-1] #23 0x7fcfef9e05a0 in std::__function::__value_func<void ()>::operator()[abi:v160006]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16 [m-1] #24 0x7fcfef9e05a0 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12 [m-1] #25 0x7fcfef9e05a0 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3 [m-1] #26 0x7fcfeaa1f1c9 in start_thread (/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823) [m-1] #27 0x7fcfea473e72 in clone (/lib64/libc.so.6+0x39e72) (BuildId: 6d1dc58340cb6c575073da1e2efb8ac2a3cadc23) [m-1] [m-1] 
0x7fcfbc7b08b7 is located 183 bytes inside of 1061488-byte region [0x7fcfbc7b0800,0x7fcfbc8b3a70) [m-1] freed by thread T13 (Master_reactorx) here: [m-1] #0 0x55e9c96fa0d6 in free /opt/yb-build/llvm/yb-llvm-v16.0.6-yb-1-1687337167-5c765d34-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cpp:52:3 [m-1] #1 0x7fcff024f911 in yb::rpc::TcpStream::Shutdown(yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:153:16 [m-1] #2 0x7fcff019217d in yb::rpc::RefinedStream::Shutdown(yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc:76:18 [m-1] #3 0x7fcff00ac3cf in yb::rpc::Connection::Shutdown(yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:140:12 [m-1] #4 0x7fcff016a5a8 in yb::rpc::Reactor::DestroyConnection(yb::rpc::Connection*, yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:745:9 [m-1] #5 0x7fcff00ad511 in yb::rpc::Connection::OutboundQueued() ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:169:17 [m-1] #6 0x7fcff00b1cf0 in yb::rpc::Connection::DoQueueOutboundData(std::shared_ptr<yb::rpc::OutboundData>, bool) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:306:5 [m-1] #7 0x7fcff00b6034 in yb::rpc::Connection::QueueOutboundData(std::shared_ptr<yb::rpc::OutboundData>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:419:5 [m-1] #8 0x7fcff01c5dbc in yb::rpc::ConnectionContextWithCallId::QueueResponse(std::shared_ptr<yb::rpc::Connection> const&, std::shared_ptr<yb::rpc::InboundCall>) ${BUILD_ROOT}/../../src/yb/rpc/rpc_with_call_id.cc:82:16 [m-1] #9 0x7fcff00df4be in yb::rpc::InboundCall::QueueResponse(bool) ${BUILD_ROOT}/../../src/yb/rpc/inbound_call.cc:203:33 [m-1] #10 0x7fcff02766d2 in yb::rpc::YBInboundCall::Respond(yb::rpc::AnyMessageConstPtr, bool) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:437:3 [m-1] #11 0x7fcff02757fc in yb::rpc::YBInboundCall::RespondFailure(yb::rpc::ErrorStatusPB_RpcErrorCodePB, yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:399:3 [m-1] #12 0x7fcff00ffd51 in yb::rpc::Messenger::Handle(std::shared_ptr<yb::rpc::InboundCall>, yb::StronglyTypedBool<yb::rpc::Queue_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/messenger.cc:485:11 [m-1] #13 0x7fcff026d193 in yb::rpc::YBInboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:189:38 [m-1] #14 0x7fcff026e681 in non-virtual thunk to yb::rpc::YBInboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc [m-1] #15 0x7fcff00899bc in yb::rpc::BinaryCallParser::Parse(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>, std::shared_ptr<yb::MemTracker> const*) ${BUILD_ROOT}/../../src/yb/rpc/binary_call_parser.cc:167:7 [m-1] #16 0x7fcff026bad9 in yb::rpc::YBInboundConnectionContext::ProcessCalls(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:154:19 [m-1] #17 0x7fcff00b2935 in yb::rpc::Connection::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:317:27 [m-1] #18 0x7fcff0193b03 in yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #19 0x7fcff0195de1 in non-virtual thunk to 
yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #20 0x7fcff025747e in yb::rpc::TcpStream::TryProcessReceived() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:408:17 [m-1] #21 0x7fcff0253fb3 in yb::rpc::TcpStream::ReadHandler() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:334:31 [m-1] #22 0x7fcff0252a44 in yb::rpc::TcpStream::Handler(ev::io&, int) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:276:14 [m-1] #23 0x7fcfeea2c6ca in ev_invoke_pending (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x86ca) [m-1] #24 0x7fcfeea2d3c6 in ev_run (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x93c6) [m-1] #25 0x7fcff01594fc in ev::loop_ref::run(int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/include/ev++.h:211:7 [m-1] #26 0x7fcff01594fc in yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:630:9 [m-1] #27 0x7fcfef9e05a0 in std::__function::__value_func<void ()>::operator()[abi:v160006]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16 [m-1] #28 0x7fcfef9e05a0 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12 [m-1] #29 0x7fcfef9e05a0 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3 [m-1] #30 0x7fcfeaa1f1c9 in start_thread (/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823) ``` ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information.
index: 1.0
text_combine:
[DocDB] heap-use-after-free in MasterFailoverTestIndexCreation/MasterFailoverTestIndexCreation.TestPauseAfterCreateIndexIssued/0 - ### Description Example log: https://jenkins.dev.yugabyte.com/job/github-yugabyte-db-alma8-master-clang16-asan/151/artifact/build/asan-clang16-dynamic-ninja/yb-test-logs/tests-integration-tests__master_failover-itest/MasterFailoverTestIndexCreation__MasterFailoverTestIndexCreation_TestPauseAfterCreateIndexIssued__0.log ``` [m-1] ==22942==ERROR: AddressSanitizer: heap-use-after-free on address 0x7fcfbc7b08b7 at pc 0x55e9c96f9a9e bp 0x7fcfd607e1f0 sp 0x7fcfd607d9b8 [m-1] READ of size 4 at 0x7fcfbc7b08b7 thread T13 (Master_reactorx) [m-1] #0 0x55e9c96f9a9d in __asan_memmove /opt/yb-build/llvm/yb-llvm-v16.0.6-yb-1-1687337167-5c765d34-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:30:3 [m-1] #1 0x7fcfef8b18bb in std::pair<char const*, char*> std::__copy_trivial_impl[abi:v160006]<char const, char>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy_move_common.h:64:3 [m-1] #2 0x7fcfef8b18bb in std::pair<char const*, char*> std::__copy_trivial::operator()[abi:v160006]<char const, char, 0>(char const*, char const*, char*) const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy.h:105:12 [m-1] #3 0x7fcfef8b18bb in std::pair<char const*, char*> std::__unwrap_and_dispatch[abi:v160006]<std::__overload<std::__copy_loop<std::_ClassicAlgPolicy>, std::__copy_trivial>, char const*, char const*, char*, 0>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy_move_common.h:107:19 [m-1] #4 0x7fcfef8b18bb in std::pair<char const*, char*> std::__dispatch_copy_or_move[abi:v160006]<std::_ClassicAlgPolicy, std::__copy_loop<std::_ClassicAlgPolicy>, std::__copy_trivial, char const*, char const*, char*>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy_move_common.h:158:10 [m-1] #5 0x7fcfef8b18bb in std::pair<char const*, char*> std::__copy[abi:v160006]<std::_ClassicAlgPolicy, char const*, char const*, char*>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy.h:112:10 [m-1] #6 0x7fcfef8b18bb in char* std::copy[abi:v160006]<char const*, char*>(char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__algorithm/copy.h:119:10 [m-1] #7 0x7fcfef8b18bb in char* std::__uninitialized_allocator_copy[abi:v160006]<std::allocator<char>, char, char, (void*)0>(std::allocator<char>&, char const*, char const*, char*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__memory/uninitialized_algorithms.h:585:12 [m-1] #8 0x7fcfef8b18bb in void std::vector<char, std::allocator<char>>::__construct_at_end<char const*, 0>(char const*, char const*, unsigned long) 
/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/vector:1029:17 [m-1] #9 0x7fcfef8b0793 in std::__wrap_iter<char*> std::vector<char, std::allocator<char>>::insert<char const*, 0>(std::__wrap_iter<char const*>, char const*, char const*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/vector:1883:17 [m-1] #10 0x7fcfef8a151a in yb::IoVecsToBuffer(boost::container::small_vector<iovec, 4ul, void, void> const&, unsigned long, unsigned long, std::vector<char, std::allocator<char>>*) ${BUILD_ROOT}/../../src/yb/util/net/socket.cc:87:15 [m-1] #11 0x7fcff008924b in yb::rpc::BinaryCallParser::Parse(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>, std::shared_ptr<yb::MemTracker> const*) ${BUILD_ROOT}/../../src/yb/rpc/binary_call_parser.cc:86:5 [m-1] #12 0x7fcff026bad9 in yb::rpc::YBInboundConnectionContext::ProcessCalls(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:154:19 [m-1] #13 0x7fcff00b2935 in yb::rpc::Connection::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:317:27 [m-1] #14 0x7fcff0193b03 in yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #15 0x7fcff0195de1 in non-virtual thunk to yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #16 0x7fcff025747e in yb::rpc::TcpStream::TryProcessReceived() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:408:17 [m-1] #17 0x7fcff0253fb3 in yb::rpc::TcpStream::ReadHandler() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:334:31 [m-1] #18 0x7fcff0252a44 in yb::rpc::TcpStream::Handler(ev::io&, int) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:276:14 [m-1] #19 0x7fcfeea2c6ca in ev_invoke_pending (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x86ca) [m-1] #20 0x7fcfeea2d3c6 in ev_run (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x93c6) [m-1] #21 0x7fcff01594fc in ev::loop_ref::run(int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/include/ev++.h:211:7 [m-1] #22 0x7fcff01594fc in yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:630:9 [m-1] #23 0x7fcfef9e05a0 in std::__function::__value_func<void ()>::operator()[abi:v160006]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16 [m-1] #24 0x7fcfef9e05a0 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12 [m-1] #25 0x7fcfef9e05a0 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3 [m-1] #26 0x7fcfeaa1f1c9 in start_thread 
(/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823) [m-1] #27 0x7fcfea473e72 in clone (/lib64/libc.so.6+0x39e72) (BuildId: 6d1dc58340cb6c575073da1e2efb8ac2a3cadc23) [m-1] [m-1] 0x7fcfbc7b08b7 is located 183 bytes inside of 1061488-byte region [0x7fcfbc7b0800,0x7fcfbc8b3a70) [m-1] freed by thread T13 (Master_reactorx) here: [m-1] #0 0x55e9c96fa0d6 in free /opt/yb-build/llvm/yb-llvm-v16.0.6-yb-1-1687337167-5c765d34-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cpp:52:3 [m-1] #1 0x7fcff024f911 in yb::rpc::TcpStream::Shutdown(yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:153:16 [m-1] #2 0x7fcff019217d in yb::rpc::RefinedStream::Shutdown(yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc:76:18 [m-1] #3 0x7fcff00ac3cf in yb::rpc::Connection::Shutdown(yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:140:12 [m-1] #4 0x7fcff016a5a8 in yb::rpc::Reactor::DestroyConnection(yb::rpc::Connection*, yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:745:9 [m-1] #5 0x7fcff00ad511 in yb::rpc::Connection::OutboundQueued() ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:169:17 [m-1] #6 0x7fcff00b1cf0 in yb::rpc::Connection::DoQueueOutboundData(std::shared_ptr<yb::rpc::OutboundData>, bool) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:306:5 [m-1] #7 0x7fcff00b6034 in yb::rpc::Connection::QueueOutboundData(std::shared_ptr<yb::rpc::OutboundData>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:419:5 [m-1] #8 0x7fcff01c5dbc in yb::rpc::ConnectionContextWithCallId::QueueResponse(std::shared_ptr<yb::rpc::Connection> const&, std::shared_ptr<yb::rpc::InboundCall>) ${BUILD_ROOT}/../../src/yb/rpc/rpc_with_call_id.cc:82:16 [m-1] #9 0x7fcff00df4be in yb::rpc::InboundCall::QueueResponse(bool) ${BUILD_ROOT}/../../src/yb/rpc/inbound_call.cc:203:33 [m-1] #10 0x7fcff02766d2 in yb::rpc::YBInboundCall::Respond(yb::rpc::AnyMessageConstPtr, bool) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:437:3 [m-1] #11 0x7fcff02757fc in yb::rpc::YBInboundCall::RespondFailure(yb::rpc::ErrorStatusPB_RpcErrorCodePB, yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:399:3 [m-1] #12 0x7fcff00ffd51 in yb::rpc::Messenger::Handle(std::shared_ptr<yb::rpc::InboundCall>, yb::StronglyTypedBool<yb::rpc::Queue_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/messenger.cc:485:11 [m-1] #13 0x7fcff026d193 in yb::rpc::YBInboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:189:38 [m-1] #14 0x7fcff026e681 in non-virtual thunk to yb::rpc::YBInboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc [m-1] #15 0x7fcff00899bc in yb::rpc::BinaryCallParser::Parse(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>, std::shared_ptr<yb::MemTracker> const*) ${BUILD_ROOT}/../../src/yb/rpc/binary_call_parser.cc:167:7 [m-1] #16 0x7fcff026bad9 in yb::rpc::YBInboundConnectionContext::ProcessCalls(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:154:19 [m-1] #17 0x7fcff00b2935 in yb::rpc::Connection::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:317:27 [m-1] #18 0x7fcff0193b03 
in yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #19 0x7fcff0195de1 in non-virtual thunk to yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc [m-1] #20 0x7fcff025747e in yb::rpc::TcpStream::TryProcessReceived() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:408:17 [m-1] #21 0x7fcff0253fb3 in yb::rpc::TcpStream::ReadHandler() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:334:31 [m-1] #22 0x7fcff0252a44 in yb::rpc::TcpStream::Handler(ev::io&, int) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:276:14 [m-1] #23 0x7fcfeea2c6ca in ev_invoke_pending (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x86ca) [m-1] #24 0x7fcfeea2d3c6 in ev_run (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x93c6) [m-1] #25 0x7fcff01594fc in ev::loop_ref::run(int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/common/include/ev++.h:211:7 [m-1] #26 0x7fcff01594fc in yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:630:9 [m-1] #27 0x7fcfef9e05a0 in std::__function::__value_func<void ()>::operator()[abi:v160006]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16 [m-1] #28 0x7fcfef9e05a0 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230621185529-6777477baa-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12 [m-1] #29 0x7fcfef9e05a0 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3 [m-1] #30 0x7fcfeaa1f1c9 in start_thread (/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823) ``` ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information.
test
heap use after free in masterfailovertestindexcreation masterfailovertestindexcreation testpauseaftercreateindexissued description example log error addresssanitizer heap use after free on address at pc bp sp read of size at thread master reactorx in asan memmove opt yb build llvm yb llvm yb build src llvm project compiler rt lib asan asan interceptors memintrinsics cpp in std pair std copy trivial impl char const char const char opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c algorithm copy move common h in std pair std copy trivial operator char const char const char const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c algorithm copy h in std pair std unwrap and dispatch std copy trivial char const char const char char const char const char opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c algorithm copy move common h in std pair std dispatch copy or move std copy trivial char const char const char char const char const char opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c algorithm copy move common h in std pair std copy char const char const char opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c algorithm copy h in char std copy char const char const char opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c algorithm copy h in char std uninitialized allocator copy char char void std allocator char const char const char opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c memory uninitialized algorithms h in void std vector construct at end char const char const unsigned long opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c vector in std wrap iter std vector insert std wrap iter char const char const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c vector in yb iovecstobuffer boost container small vector const unsigned long unsigned long std vector build root src yb util net socket cc in yb rpc binarycallparser parse std shared ptr const boost container small vector const yb stronglytypedbool std shared ptr const build root src yb rpc binary call parser cc in yb rpc ybinboundconnectioncontext processcalls std shared ptr const boost container small vector const yb stronglytypedbool build root src yb rpc yb rpc cc in yb rpc connection processreceived yb stronglytypedbool build root src yb rpc connection cc in yb rpc refinedstream processreceived yb stronglytypedbool build root src yb rpc refined stream cc in non virtual thunk to yb rpc refinedstream processreceived yb stronglytypedbool build root src yb rpc refined stream cc in yb rpc tcpstream tryprocessreceived build root src yb rpc tcp stream cc in yb rpc tcpstream readhandler build root src yb rpc tcp stream cc in yb rpc tcpstream handler ev io int build root src yb rpc tcp stream cc in ev invoke pending opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev run opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev loop ref run int opt yb build thirdparty yugabyte db thirdparty installed common include ev h in yb rpc reactor runthread build root src yb rpc reactor cc in std function value func operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c 
functional function h in yb thread supervisethread void build root src yb util thread cc in start thread libpthread so buildid in clone libc so buildid is located bytes inside of byte region freed by thread master reactorx here in free opt yb build llvm yb llvm yb build src llvm project compiler rt lib asan asan malloc linux cpp in yb rpc tcpstream shutdown yb status const build root src yb rpc tcp stream cc in yb rpc refinedstream shutdown yb status const build root src yb rpc refined stream cc in yb rpc connection shutdown yb status const build root src yb rpc connection cc in yb rpc reactor destroyconnection yb rpc connection yb status const build root src yb rpc reactor cc in yb rpc connection outboundqueued build root src yb rpc connection cc in yb rpc connection doqueueoutbounddata std shared ptr bool build root src yb rpc connection cc in yb rpc connection queueoutbounddata std shared ptr build root src yb rpc connection cc in yb rpc connectioncontextwithcallid queueresponse std shared ptr const std shared ptr build root src yb rpc rpc with call id cc in yb rpc inboundcall queueresponse bool build root src yb rpc inbound call cc in yb rpc ybinboundcall respond yb rpc anymessageconstptr bool build root src yb rpc yb rpc cc in yb rpc ybinboundcall respondfailure yb rpc errorstatuspb rpcerrorcodepb yb status const build root src yb rpc yb rpc cc in yb rpc messenger handle std shared ptr yb stronglytypedbool build root src yb rpc messenger cc in yb rpc ybinboundconnectioncontext handlecall std shared ptr const yb rpc calldata build root src yb rpc yb rpc cc in non virtual thunk to yb rpc ybinboundconnectioncontext handlecall std shared ptr const yb rpc calldata build root src yb rpc yb rpc cc in yb rpc binarycallparser parse std shared ptr const boost container small vector const yb stronglytypedbool std shared ptr const build root src yb rpc binary call parser cc in yb rpc ybinboundconnectioncontext processcalls std shared ptr const boost container small vector const yb stronglytypedbool build root src yb rpc yb rpc cc in yb rpc connection processreceived yb stronglytypedbool build root src yb rpc connection cc in yb rpc refinedstream processreceived yb stronglytypedbool build root src yb rpc refined stream cc in non virtual thunk to yb rpc refinedstream processreceived yb stronglytypedbool build root src yb rpc refined stream cc in yb rpc tcpstream tryprocessreceived build root src yb rpc tcp stream cc in yb rpc tcpstream readhandler build root src yb rpc tcp stream cc in yb rpc tcpstream handler ev io int build root src yb rpc tcp stream cc in ev invoke pending opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev run opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev loop ref run int opt yb build thirdparty yugabyte db thirdparty installed common include ev h in yb rpc reactor runthread build root src yb rpc reactor cc in std function value func operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in yb thread supervisethread void build root src yb util thread cc in start thread libpthread so buildid warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information
1
83,656
10,334,260,867
IssuesEvent
2019-09-03 07:56:25
helmholtz-analytics/heat
https://api.github.com/repos/helmholtz-analytics/heat
closed
Document installation options
documentation :book:
It has been brought to our attention that users are struggling to install HeAT. There are several options for this, depending on whether one wants to install an official release or a development version. This information should be available in the documentation as well as the README.md file.
1.0
Document installation options - It has been brought to our attention that users are struggling to install HeAT. There are several options for this, depending on whether one wants to install an official release or a development version. This information should be available in the documentation as well as the README.md file.
non_test
document installation options it has been brought to our attention that users are struggling to install heat there are several options for this depending on whether one wants to install an official release or a development version this information should be available in the documentation as well as the readme md file
0
118,361
9,984,326,551
IssuesEvent
2019-07-10 14:18:36
NativeScript/nativescript-cli
https://api.github.com/repos/NativeScript/nativescript-cli
closed
Warnings for resolve of critical dependency when executing `tns test`
bug unit testing
- CLI: 6.0-rc - Cross-platform modules: rc - Android Runtime: rc - iOS Runtime: rc - nativescript-dev-webpack@rc - Plugin(s): nativescript-unit-test-runner@0.6.4 Steps: 1. `tns create MyApp --js` 2. `tns test init` 3. `tns migrate`, or install rc of all dependencies listed above 4. `tns test ios/android` Result: Tests run but there are warnings in the log: WARNING in ../node_modules/nativescript-hook/index.js 56:27-51 Critical dependency: the request of a dependency is an expression @ ../node_modules/nativescript-unit-test-runner/postinstall.js @ ../node_modules/nativescript-unit-test-runner sync (?<!App_Resources.*)\.(xml|css|js|(?<!d\.)ts|scss)$ @ ./main.ts WARNING in ../node_modules/nativescript-hook/index.js 64:11-53 Critical dependency: the request of a dependency is an expression @ ../node_modules/nativescript-unit-test-runner/postinstall.js @ ../node_modules/nativescript-unit-test-runner sync (?<!App_Resources.*)\.(xml|css|js|(?<!d\.)ts|scss)$ @ ./main.ts
1.0
Warnings for resolve of critical dependency when executing `tns test` - - CLI: 6.0-rc - Cross-platform modules: rc - Android Runtime: rc - iOS Runtime: rc - nativescript-dev-webpack@rc - Plugin(s): nativescript-unit-test-runner@0.6.4 Steps: 1. `tns create MyApp --js` 2. `tns test init` 3. `tns migrate`, or install rc of all dependencies listed above 4. `tns test ios/android` Result: Tests run but there are warnings in the log: WARNING in ../node_modules/nativescript-hook/index.js 56:27-51 Critical dependency: the request of a dependency is an expression @ ../node_modules/nativescript-unit-test-runner/postinstall.js @ ../node_modules/nativescript-unit-test-runner sync (?<!App_Resources.*)\.(xml|css|js|(?<!d\.)ts|scss)$ @ ./main.ts WARNING in ../node_modules/nativescript-hook/index.js 64:11-53 Critical dependency: the request of a dependency is an expression @ ../node_modules/nativescript-unit-test-runner/postinstall.js @ ../node_modules/nativescript-unit-test-runner sync (?<!App_Resources.*)\.(xml|css|js|(?<!d\.)ts|scss)$ @ ./main.ts
test
warnings for resolve of critical dependency when executing tns test cli rc cross platform modules rc android runtime rc ios runtime rc nativescript dev webpack rc plugin s nativescript unit test runner steps tns create myapp js tns test init tns migrate or install rc of all dependencies listed above tns test ios android result tests run but there are warnings in the log warning in node modules nativescript hook index js critical dependency the request of a dependency is an expression node modules nativescript unit test runner postinstall js node modules nativescript unit test runner sync app resources xml css js d ts scss main ts warning in node modules nativescript hook index js critical dependency the request of a dependency is an expression node modules nativescript unit test runner postinstall js node modules nativescript unit test runner sync app resources xml css js d ts scss main ts
1
116,174
11,903,096,826
IssuesEvent
2020-03-30 14:52:09
openhab/openhab-core
https://api.github.com/repos/openhab/openhab-core
closed
Define more files than just readme.md for addons
documentation
Currently only the readme.md file is considered for the addon page. I'd be happy if we can have an "xtend_examples.md" file as well. Especially considering that we can (and hopefully also do in the future) auto-generate the "configuration" parts, the examples do not really fit into the readme.md file. One reason is that the REST interface (aka the UI) is equally entitled to have an example section. If the website semantically knows about the content (due to the filename) it can provide buttons / links / detail-summary-html-elements to show one or the other example section.
1.0
Define more files than just readme.md for addons - Currently only the readme.md file is considered for the addon page. I'd be happy if we can have an "xtend_examples.md" file as well. Especially considering that we can (and hopefully also do in the future) auto-generate the "configuration" parts, the examples do not really fit into the readme.md file. One reason is that the REST interface (aka the UI) is equally entitled to have an example section. If the website semantically knows about the content (due to the filename) it can provide buttons / links / detail-summary-html-elements to show one or the other example section.
non_test
define more files than just readme md for addons currently only the readme md file is considered for the addon page i d be happy if we can have an xtend examples md file as well especially considering that we can and hopefully also do in the future auto generate the configuration parts the examples do not really fit into the readme md file one reason is that the rest interface aka the ui is equally entitled to have an example section if the website semantically knows about the content due to the filename it can provide buttons links detail summary html elements to show one or the other example section
0
40,989
6,886,624,710
IssuesEvent
2017-11-21 20:10:09
pac4j/play-pac4j
https://api.github.com/repos/pac4j/play-pac4j
closed
Need better Guice examples
documentation
In your README, you demonstrate creating a Guice `AbstractModule`, but you seem to be doing all the initialization in the module. This is problematic because [modules should be fast and side effect free](https://github.com/google/guice/wiki/ModulesShouldBeFastAndSideEffectFree) and also because it's very difficult to extend. Generally a Guice module should avoid creating actual instances of things in the `configure()` method and just declare bindings to classes, providers, and simple constant values. I would suggest using providers or [@Provides methods](https://github.com/google/guice/wiki/ProvidesMethods) for creating the various components. This way you're only doing initialization when the injector is created, and the example is easy to extend with additional dependency, for example to look up something in a database. (I am not a pac4j user but I'm a Play contributor and occasionally help Play users who use pac4j.)
1.0
Need better Guice examples - In your README, you demonstrate creating a Guice `AbstractModule`, but you seem to be doing all the initialization in the module. This is problematic because [modules should be fast and side effect free](https://github.com/google/guice/wiki/ModulesShouldBeFastAndSideEffectFree) and also because it's very difficult to extend. Generally a Guice module should avoid creating actual instances of things in the `configure()` method and just declare bindings to classes, providers, and simple constant values. I would suggest using providers or [@Provides methods](https://github.com/google/guice/wiki/ProvidesMethods) for creating the various components. This way you're only doing initialization when the injector is created, and the example is easy to extend with additional dependency, for example to look up something in a database. (I am not a pac4j user but I'm a Play contributor and occasionally help Play users who use pac4j.)
non_test
need better guice examples in your readme you demonstrate creating a guice abstractmodule but you seem to be doing all the initialization in the module this is problematic because and also because it s very difficult to extend generally a guice module should avoid creating actual instances of things in the configure method and just declare bindings to classes providers and simple constant values i would suggest using providers or for creating the various components this way you re only doing initialization when the injector is created and the example is easy to extend with additional dependency for example to look up something in a database i am not a user but i m a play contributor and occasionally help play users who use
0
319,323
9,742,161,435
IssuesEvent
2019-06-02 14:57:32
IntellectualSites/PlotSquared
https://api.github.com/repos/IntellectualSites/PlotSquared
closed
Unable to remove owner from plot
[-] Low Priority [✔] Answered
__*NOTICE: Bukkit/Spigot versions 1.7.10 to 1.12.2 are considered legacy and will receive limited support. Please consider upgrading to 1.13 for future support.*__ # Bug report template <!--- In order to create a valid issue report you have to follow this template. --> <!--- Incomplete reports might be marked as invalid. --> <!-- Feature requests and enhancements may be suggested at https://github.com/IntellectualSites/PlotSquaredSuggestions. --> **Debug paste link:** Currently unable to post a paste link; see #2332 **Description of the problem:** Currently I am not able to remove myself as the owner from a userplot: ![grafik](https://user-images.githubusercontent.com/42473485/56456253-fc78c580-6369-11e9-8e80-bd5cd1f04d5d.png) **How to replicate:** - Be OP - Go on any plot that has multiple owners - Use `/p remove <OWNER>` to try to remove one owner from the plot **Checklist**: <!-- Make sure you have completed the following steps (put an "X" between of brackets): --> - [] I included a `/plot debugpaste` link (-> Unable to do so) - [x] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/PlotSquared/issues?utf8=%E2%9C%93&q=is%3Aissue) - [x] I made sure I am using an up-to-date version of PlotSquared - [x] I Made sure the bug/error is not caused by any other plugin
1.0
Unable to remove owner from plot - __*NOTICE: Bukkit/Spigot versions 1.7.10 to 1.12.2 are considered legacy and will receive limited support. Please consider upgrading to 1.13 for future support.*__ # Bug report template <!--- In order to create a valid issue report you have to follow this template. --> <!--- Incomplete reports might be marked as invalid. --> <!-- Feature requests and enhancements may be suggested at https://github.com/IntellectualSites/PlotSquaredSuggestions. --> **Debug paste link:** Currently unable to post a paste link; see #2332 **Description of the problem:** Currently I am not able to remove myself as the owner from a userplot: ![grafik](https://user-images.githubusercontent.com/42473485/56456253-fc78c580-6369-11e9-8e80-bd5cd1f04d5d.png) **How to replicate:** - Be OP - Go on any plot that has multiple owners - Use `/p remove <OWNER>` to try to remove one owner from the plot **Checklist**: <!-- Make sure you have completed the following steps (put an "X" between of brackets): --> - [] I included a `/plot debugpaste` link (-> Unable to do so) - [x] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/PlotSquared/issues?utf8=%E2%9C%93&q=is%3Aissue) - [x] I made sure I am using an up-to-date version of PlotSquared - [x] I Made sure the bug/error is not caused by any other plugin
non_test
unable to remove owner from plot notice bukkit spigot versions to are considered legacy and will receive limited support please consider upgrading to for future support bug report template debug paste link currently unable to post a paste link see description of the problem currently i am not able to remove myself as the owner from a userplot how to replicate be op go on any plot that has multiple owners use p remove to try to remove one owner from the plot checklist i included a plot debugpaste link unable to do so i made sure there are no duplicates of this report i made sure i am using an up to date version of plotsquared i made sure the bug error is not caused by any other plugin
0
326,604
9,958,326,871
IssuesEvent
2019-07-05 20:35:48
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Bug: NullPointerException using Foreign key display values
Admin Panel & Setup Bug Need More Info Priority/P2
Hi, I've got an issue with setting up a new metabase env. Running into a NullPointerException on one of the tables. - Your browser and the version: Safari 11.03 - Your operating system: OS X 10.13.3 - Your databases: Postgres (Google Cloud SQL) - Metabase version: 0.28.6 - Metabase hosting environment: Kubernetes - Metabase internal database: Postgres (Google Cloud SQL) ### Steps to reproduce: - Add source db with 2 tables (i.e. tables: Issues & Users) - Set 2 foreign key mappings in the Data model: - Issue.Author_ID to PK User.ID - Issue.Assignee_ID to PK User.ID - Set display value to 'Foreign Key > User.Name' on both Weird enough, when changing the display value to original on the Issue.Assignee_ID it works. Doing this for Author_ID has no influence. The result is the NullPointerException (see stacktrace below): ```Java Apr 13 16:09:36 WARN metabase.query-processor :: {:status :failed, :class java.lang.NullPointerException, :error "java.lang.NullPointerException", :stacktrace ["query_processor.middleware.add_dimension_projections$col__GT_dim_map.invokeStatic(add_dimension_projections.clj:71)" "query_processor.middleware.add_dimension_projections$col__GT_dim_map.invoke(add_dimension_projections.clj:64)" "query_processor.middleware.add_dimension_projections$remap_results.invokeStatic(add_dimension_projections.clj:127)" "query_processor.middleware.add_dimension_projections$remap_results.invoke(add_dimension_projections.clj:114)" "query_processor.middleware.expand$expand_middleware$fn__29769.invoke(expand.clj:601)" "query_processor.middleware.add_row_count_and_status$add_row_count_and_status$fn__28153.invoke(add_row_count_and_status.clj:14)" "query_processor.middleware.driver_specific$process_query_in_context$fn__29900.invoke(driver_specific.clj:12)" "query_processor.middleware.resolve_driver$resolve_driver$fn__31353.invoke(resolve_driver.clj:15)" "query_processor.middleware.cache$maybe_return_cached_results$fn__28621.invoke(cache.clj:146)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__29817.invoke(catch_exceptions.clj:58)" "query_processor$process_query.invokeStatic(query_processor.clj:130)" "query_processor$process_query.invoke(query_processor.clj:126)" "query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:241)" "query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:235)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392$fn__31393.invoke(query_processor.clj:281)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392.invoke(query_processor.clj:267)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416$fn__31417.invoke(query_processor.clj:302)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416.invoke(query_processor.clj:298)" "api.dataset$fn__37935$fn__37938$fn__37939.invoke(dataset.clj:49)" "api.common$fn__20323$invoke_thunk_with_keepalive__20328$fn__20329$fn__20330.invoke(common.clj:402)"], :query {:type "query", :query {:source_table 5}, :parameters [], :constraints {:max-results 10000, :max-results-bare-rows 2000}, :info {:executed-by 1, :context :ad-hoc, :card-id nil, :nested? 
false, :query-hash [108, 84, 50, 34, 11, 38, 119, 50, -56, -60, 42, 94, 101, -64, 90, 33, 112, 60, 30, 100, 92, 60, -86, -3, -114, -10, -114, -29, 62, -24, -76, -68], :query-type "MBQL"}}, :expanded-query nil} Apr 13 16:09:36 WARN metabase.query-processor :: Query failure: java.lang.NullPointerException ["query_processor$assert_query_status_successful.invokeStatic(query_processor.clj:209)" "query_processor$assert_query_status_successful.invoke(query_processor.clj:202)" "query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:242)" "query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:235)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392$fn__31393.invoke(query_processor.clj:281)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392.invoke(query_processor.clj:267)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416$fn__31417.invoke(query_processor.clj:302)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416.invoke(query_processor.clj:298)" "api.dataset$fn__37935$fn__37938$fn__37939.invoke(dataset.clj:49)" "api.common$fn__20323$invoke_thunk_with_keepalive__20328$fn__20329$fn__20330.invoke(common.clj:402)"] ```
1.0
Bug: NullPointerException using Foreign key display values - Hi, I've got an issue with setting up a new metabase env. Running into a NullPointerException on one of the tables. - Your browser and the version: Safari 11.03 - Your operating system: OS X 10.13.3 - Your databases: Postgres (Google Cloud SQL) - Metabase version: 0.28.6 - Metabase hosting environment: Kubernetes - Metabase internal database: Postgres (Google Cloud SQL) ### Steps to reproduce: - Add source db with 2 tables (i.e. tables: Issues & Users) - Set 2 foreign key mappings in the Data model: - Issue.Author_ID to PK User.ID - Issue.Assignee_ID to PK User.ID - Set display value to 'Foreign Key > User.Name' on both Weird enough, when changing the display value to original on the Issue.Assignee_ID it works. Doing this for Author_ID has no influence. The result is the NullPointerException (see stacktrace below): ```Java Apr 13 16:09:36 WARN metabase.query-processor :: {:status :failed, :class java.lang.NullPointerException, :error "java.lang.NullPointerException", :stacktrace ["query_processor.middleware.add_dimension_projections$col__GT_dim_map.invokeStatic(add_dimension_projections.clj:71)" "query_processor.middleware.add_dimension_projections$col__GT_dim_map.invoke(add_dimension_projections.clj:64)" "query_processor.middleware.add_dimension_projections$remap_results.invokeStatic(add_dimension_projections.clj:127)" "query_processor.middleware.add_dimension_projections$remap_results.invoke(add_dimension_projections.clj:114)" "query_processor.middleware.expand$expand_middleware$fn__29769.invoke(expand.clj:601)" "query_processor.middleware.add_row_count_and_status$add_row_count_and_status$fn__28153.invoke(add_row_count_and_status.clj:14)" "query_processor.middleware.driver_specific$process_query_in_context$fn__29900.invoke(driver_specific.clj:12)" "query_processor.middleware.resolve_driver$resolve_driver$fn__31353.invoke(resolve_driver.clj:15)" "query_processor.middleware.cache$maybe_return_cached_results$fn__28621.invoke(cache.clj:146)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__29817.invoke(catch_exceptions.clj:58)" "query_processor$process_query.invokeStatic(query_processor.clj:130)" "query_processor$process_query.invoke(query_processor.clj:126)" "query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:241)" "query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:235)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392$fn__31393.invoke(query_processor.clj:281)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392.invoke(query_processor.clj:267)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416$fn__31417.invoke(query_processor.clj:302)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416.invoke(query_processor.clj:298)" "api.dataset$fn__37935$fn__37938$fn__37939.invoke(dataset.clj:49)" "api.common$fn__20323$invoke_thunk_with_keepalive__20328$fn__20329$fn__20330.invoke(common.clj:402)"], :query {:type "query", :query {:source_table 5}, :parameters [], :constraints {:max-results 10000, :max-results-bare-rows 2000}, :info {:executed-by 1, :context :ad-hoc, :card-id nil, :nested? 
false, :query-hash [108, 84, 50, 34, 11, 38, 119, 50, -56, -60, 42, 94, 101, -64, 90, 33, 112, 60, 30, 100, 92, 60, -86, -3, -114, -10, -114, -29, 62, -24, -76, -68], :query-type "MBQL"}}, :expanded-query nil} Apr 13 16:09:36 WARN metabase.query-processor :: Query failure: java.lang.NullPointerException ["query_processor$assert_query_status_successful.invokeStatic(query_processor.clj:209)" "query_processor$assert_query_status_successful.invoke(query_processor.clj:202)" "query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:242)" "query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:235)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392$fn__31393.invoke(query_processor.clj:281)" "query_processor$fn__31387$process_query_and_save_execution_BANG___31392.invoke(query_processor.clj:267)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416$fn__31417.invoke(query_processor.clj:302)" "query_processor$fn__31411$process_query_and_save_with_max_BANG___31416.invoke(query_processor.clj:298)" "api.dataset$fn__37935$fn__37938$fn__37939.invoke(dataset.clj:49)" "api.common$fn__20323$invoke_thunk_with_keepalive__20328$fn__20329$fn__20330.invoke(common.clj:402)"] ```
non_test
bug nullpointerexception using foreign key display values hi i ve got an issue with setting up a new metabase env running into a nullpointerexception on one of the tables your browser and the version safari your operating system os x your databases postgres google cloud sql metabase version metabase hosting environment kubernetes metabase internal database postgres google cloud sql steps to reproduce add source db with tables i e tables issues users set foreign key mappings in the data model issue author id to pk user id issue assignee id to pk user id set display value to foreign key user name on both weird enough when changing the display value to original on the issue assignee id it works doing this for author id has no influence the result is the nullpointerexception see stacktrace below java apr warn metabase query processor status failed class java lang nullpointerexception error java lang nullpointerexception stacktrace query type query query source table parameters constraints max results max results bare rows info executed by context ad hoc card id nil nested false query hash query type mbql expanded query nil apr warn metabase query processor query failure java lang nullpointerexception
0
254,281
8,072,185,178
IssuesEvent
2018-08-06 15:14:37
emoncms/MyHomeEnergyPlanner
https://api.github.com/repos/emoncms/MyHomeEnergyPlanner
closed
Form that could populate the ‘householdquestionnaire’ in advance of the visit
Low priority feature
A form that can be sent to the household to prefill and then be imported by the assessor into MHEP. Options: a CSV file, or copy and paste from a spreadsheet. Jonathan
1.0
Form that could populate the ‘householdquestionnaire’ in advance of the visit - A form that can be sent to the household to prefill and then be imported by the assessor into MHEP. Options: a CSV file, or copy and paste from a spreadsheet. Jonathan
non_test
form that could populate the ‘householdquestionnaire’ in advance of the visit a form that can be sent to the household to prefill and then be imported by the assessor into mhep options a csv file or copy and paste from a spreadsheet jonathan
0
188,246
14,442,227,676
IssuesEvent
2020-12-07 17:51:08
kalexmills/github-vet-tests-dec2020
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
closed
slamidiot/gophercloud-lc: acceptance/openstack/dns/v2/zones_test.go; 3 LoC
fresh test tiny
Found a possible issue in [slamidiot/gophercloud-lc](https://www.github.com/slamidiot/gophercloud-lc) at [acceptance/openstack/dns/v2/zones_test.go](https://github.com/slamidiot/gophercloud-lc/blob/e646b2d6da2c9f8416947b5f66b146388c20043e/acceptance/openstack/dns/v2/zones_test.go#L30-L32) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call at line 31 may store a reference to zone [Click here to see the code in its original context.](https://github.com/slamidiot/gophercloud-lc/blob/e646b2d6da2c9f8416947b5f66b146388c20043e/acceptance/openstack/dns/v2/zones_test.go#L30-L32) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, zone := range allZones { tools.PrintResource(t, &zone) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: e646b2d6da2c9f8416947b5f66b146388c20043e
1.0
slamidiot/gophercloud-lc: acceptance/openstack/dns/v2/zones_test.go; 3 LoC - Found a possible issue in [slamidiot/gophercloud-lc](https://www.github.com/slamidiot/gophercloud-lc) at [acceptance/openstack/dns/v2/zones_test.go](https://github.com/slamidiot/gophercloud-lc/blob/e646b2d6da2c9f8416947b5f66b146388c20043e/acceptance/openstack/dns/v2/zones_test.go#L30-L32) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call at line 31 may store a reference to zone [Click here to see the code in its original context.](https://github.com/slamidiot/gophercloud-lc/blob/e646b2d6da2c9f8416947b5f66b146388c20043e/acceptance/openstack/dns/v2/zones_test.go#L30-L32) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, zone := range allZones { tools.PrintResource(t, &zone) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: e646b2d6da2c9f8416947b5f66b146388c20043e
test
slamidiot gophercloud lc acceptance openstack dns zones test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call at line may store a reference to zone click here to show the line s of go which triggered the analyzer go for zone range allzones tools printresource t zone leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
1
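For context on the analyzer finding in the record above: on Go toolchains before 1.22, the `range` loop variable is a single variable reused on every iteration, so taking its address (as in `&zone`) yields the same pointer each time; the finding is only a bug if the callee (here `tools.PrintResource`) retains that pointer beyond the call, which is why the report hedges with "may store a reference". A minimal, self-contained sketch of the failure mode — the slice contents and variable names are illustrative, not taken from the flagged repository:

```go
package main

import "fmt"

func main() {
	allZones := []string{"zone-a", "zone-b", "zone-c"}

	var refs []*string
	for _, zone := range allZones {
		// Before Go 1.22, 'zone' is one variable reused across
		// iterations, so every appended pointer aliases the same
		// address; Go 1.22+ gives each iteration a fresh variable.
		refs = append(refs, &zone)
	}

	for _, r := range refs {
		// On pre-1.22 toolchains this prints "zone-c" three times;
		// on 1.22+ it prints zone-a, zone-b, zone-c.
		fmt.Println(*r)
	}
}
```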
190,399
14,544,527,403
IssuesEvent
2020-12-15 18:18:36
ayz1n/RIPository
https://api.github.com/repos/ayz1n/RIPository
opened
Create a “unit tests” project and automate test scenarios
TRPO moduleTests
### **Create a “unit tests” project and automate test scenarios**
1.0
Create a “unit tests” project and automate test scenarios - ### **Create a “unit tests” project and automate test scenarios**
test
create a “unit tests” project and automate test scenarios create a “unit tests” project and automate test scenarios
1
197,567
14,934,361,374
IssuesEvent
2021-01-25 10:25:57
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
opened
[CI] TransformContinuousIT.testContinousEvents fails
:ml/Transform >test-failure Team:ML
**Build scan**: https://gradle-enterprise.elastic.co/s/d3vdkpkvqmslc **Repro line**: ```bash ./gradlew ':x-pack:plugin:transform:qa:multi-node-tests:javaRestTest' --tests "org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents" -Dtests.seed=4E7E45250C58F136 -Dtests.security.manager=true -Dtests.locale=fi-FI -Dtests.timezone=Africa/Accra -Druntime.java=11 ``` **Reproduces locally?**: No **Applicable branches**: `master` **Failure history**: https://gradle-enterprise.elastic.co/scans/tests?search.relativeStartTime=P7D&search.timeZoneId=Europe/Berlin&tests.container=org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT&tests.sortField=FAILED&tests.test=testContinousEvents&tests.unstableOnly=true Started happening on Jan 20th. **Failure excerpt**: ``` org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT > testContinousEvents FAILED java.lang.AssertionError: transform [continuous-histogram-pivot-test] does not progress, state: STARTED, reason: null Expected: a value greater than <2021-01-25T07:40:00.506764Z> but: <2021-01-25T07:39:59.383Z> was less than <2021-01-25T07:40:00.506764Z> at __randomizedtesting.SeedInfo.seed([4E7E45250C58F136:7548343E86337360]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.lambda$waitUntilTransformsProcessedNewData$3(TransformContinuousIT.java:506) at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:955) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.waitUntilTransformsProcessedNewData(TransformContinuousIT.java:504) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents(TransformContinuousIT.java:276) REPRODUCE WITH: ./gradlew ':x-pack:plugin:transform:qa:multi-node-tests:javaRestTest' --tests "org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents" -Dtests.seed=4E7E45250C58F136 -Dtests.security.manager=true -Dtests.locale=fi-FI -Dtests.timezone=Africa/Accra -Druntime.java=11 Suite: Test class org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT 1> [2021-01-25T07:36:35,379][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] before test 1> [2021-01-25T07:36:35,601][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] initializing REST clients against [http://[::1]:46643, http://127.0.0.1:36227, http://[::1]:37723, http://127.0.0.1:41451, http://[::1]:43801, http://127.0.0.1:40545] 1> [2021-01-25T07:36:36,840][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":0}}]} 1> [2021-01-25T07:36:37,318][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] Creating source index with: {"settings":{"index":{"number_of_shards":1,"codec":"best_compression"}},"mappings":{"properties":{"timestamp":{"type":"date_nanos","format":"strict_date_optional_time_nanos"},"event":{"type":"keyword"},"metric":{"type":"unsigned_long"},"location":{"type":"geo_point"},"run":{"type":"integer"},"metric-timestamp":{"type":"date_nanos"},"some-timestamp":{"type":"date_nanos"}},"runtime":{"metric-rt-2x":{"type":"long","script":{"source":"if (params._source.metric != null) {emit(params._source.metric * 2)}"}},"event-upper":{"type":"keyword","script":{"source":"if (params._source.event != null) 
{emit(params._source.event.toUpperCase())}"}},"timestamp-at-runtime":{"type":"date","script":{"source":"emit(parse(params._source.get('timestamp')))"}},"metric-timestamp-5m-earlier":{"type":"date","script":{"source":"if (doc['metric-timestamp'].size()!=0) {emit(doc['metric-timestamp'].value.minus(5, ChronoUnit.MINUTES).toInstant().toEpochMilli())}"}},"some-timestamp-10m-earlier":{"type":"date","script":{"source":"if (doc['some-timestamp'].size()!=0) {emit(doc['some-timestamp'].value.minus(10, ChronoUnit.MINUTES).toInstant().toEpochMilli())}"}},"metric":{"type":"long","script":{"source":"if (params._source.metric != null) {emit(params._source.metric * 3)}"}}}}} 1> [2021-01-25T07:36:38,433][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-terms-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-terms-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"event":{"terms":{"field":"event","missing_bucket":true}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}},"metric.avg":{"avg":{"field":"metric-rt-2x"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:39,298][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-terms-on-date-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-terms-on-date-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"some-timestamp":{"terms":{"field":"some-timestamp-10m-earlier"}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}},"metric.avg":{"avg":{"field":"metric-rt-2x"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:39,614][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-date-histogram-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-date-histogram-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"second":{"date_histogram":{"field":"timestamp","missing_bucket":true,"fixed_interval":"1s"}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:39,748][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-date-histogram-pivot-other-timefield-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-date-histogram-pivot-other-timefield-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"event":{"terms":{"field":"event-upper"}},"second":{"date_histogram":{"field":"metric-timestamp","fixed_interval":"1s"}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}}}},"settings":{"max_page_search_size":10,"dates_as_epoch_millis":true}} 1> [2021-01-25T07:36:39,934][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: 
{"id":"continuous-histogram-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-histogram-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"metric":{"histogram":{"field":"metric-rt-2x","interval":50.0}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:40,051][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-latest-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-latest-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"latest":{"unique_key":["event"],"sort":"timestamp"},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:40,161][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":0}}]} 1> [2021-01-25T07:36:49,717][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:36:50,341][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:36:50,775][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:36:51,515][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:36:52,166][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:36:52,774][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:36:53,297][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:36:54.296824Z (takes into account the delay: 1s) iteration: 0 1> [2021-01-25T07:36:56,013][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:36:56,151][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:36:56,294][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:36:56,406][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:36:56,518][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:36:56,632][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:36:56,826][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://127.0.0.1:36227/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:36:57,659][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: 
{"processors":[{"set":{"field":"run_ingest","value":1}}]} 1> [2021-01-25T07:37:04,267][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:04,405][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:04,544][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:04,685][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:04,854][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:05,044][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:05,224][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:37:06.224574Z (takes into account the delay: 1s) iteration: 1 1> [2021-01-25T07:37:11,253][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:11,379][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:11,471][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:11,579][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:11,676][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:11,784][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:37:11,934][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:37:12,781][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":2}}]} 1> [2021-01-25T07:37:15,710][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:15,858][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:16,042][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:16,204][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:16,356][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:16,504][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:16,658][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches 
timestamp_millis: 2021-01-25T07:37:17.658275Z (takes into account the delay: 1s) iteration: 2 1> [2021-01-25T07:37:26,820][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:26,917][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:27,017][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:27,112][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:27,211][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:27,300][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:37:27,474][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:37:28,366][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":3}}]} 1> [2021-01-25T07:37:34,367][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:34,487][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:34,632][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:34,764][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:34,910][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:35,060][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:35,195][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:37:36.194908Z (takes into account the delay: 1s) iteration: 3 1> [2021-01-25T07:37:48,116][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:48,212][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:48,301][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:48,396][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:48,492][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:48,578][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:37:48,752][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST 
http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:37:49,866][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":4}}]} 1> [2021-01-25T07:37:54,843][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:54,995][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:55,161][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:55,320][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:55,488][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:55,643][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:55,801][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:37:56.801269Z (takes into account the delay: 1s) iteration: 4 1> [2021-01-25T07:38:18,818][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:18,917][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:19,008][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:19,100][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:19,205][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:19,294][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:38:19,462][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://127.0.0.1:41451/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:38:20,882][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":5}}]} 1> [2021-01-25T07:38:26,736][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:26,874][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:27,010][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:27,134][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: 
continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:27,266][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:27,393][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:38:27,505][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:38:28.505661Z (takes into account the delay: 1s) iteration: 5 1> [2021-01-25T07:38:50,647][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:50,747][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:50,836][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:50,964][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:51,066][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:51,168][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:38:51,336][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://127.0.0.1:36227/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:38:52,898][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":6}}]} 1> [2021-01-25T07:38:55,936][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:56,048][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:56,165][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:56,296][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:56,425][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:56,558][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:38:56,680][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:38:57.680161Z (takes into account the delay: 1s) iteration: 6 1> [2021-01-25T07:39:19,153][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:39:19,244][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:19,325][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:19,415][INFO ][o.e.x.t.i.c.TransformContinuousIT] 
[testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:19,506][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:19,597][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:39:19,744][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:39:21,388][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":7}}]} 1> [2021-01-25T07:39:27,382][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:39:27,498][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:27,624][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:27,746][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:27,862][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:28,008][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:39:28,129][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:39:29.12901Z (takes into account the delay: 1s) iteration: 7 1> [2021-01-25T07:39:52,798][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:39:52,885][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:52,970][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:53,054][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:53,146][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:53,231][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:39:53,369][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:39:55,290][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":8}}]} 1> [2021-01-25T07:39:58,763][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: 
continuous-terms-pivot-test 1> [2021-01-25T07:39:58,882][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:59,004][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:59,125][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:59,249][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:59,386][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:39:59,506][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:40:00.506764Z (takes into account the delay: 1s) iteration: 8 1> [2021-01-25T07:40:22,708][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:40:22,909][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:40:23,058][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:40:23,213][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-latest-test 1> [2021-01-25T07:40:23,356][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:40:23,493][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-terms-pivot-test 1> [2021-01-25T07:40:23,645][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deletePipeline: transform-ingest 1> [2021-01-25T07:40:23,983][WARN ][o.e.c.RestClient ] [testContinousEvents] request [DELETE http://[::1]:37723/*,-.ds-ilm-history-*?expand_wildcards=open%2Cclosed%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:40:24,335][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] after test 2> REPRODUCE WITH: ./gradlew ':x-pack:plugin:transform:qa:multi-node-tests:javaRestTest' --tests "org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents" -Dtests.seed=4E7E45250C58F136 -Dtests.security.manager=true -Dtests.locale=fi-FI -Dtests.timezone=Africa/Accra -Druntime.java=11 2> java.lang.AssertionError: transform [continuous-histogram-pivot-test] does not progress, state: STARTED, reason: null Expected: a value greater than <2021-01-25T07:40:00.506764Z> but: <2021-01-25T07:39:59.383Z> was less than <2021-01-25T07:40:00.506764Z> at __randomizedtesting.SeedInfo.seed([4E7E45250C58F136:7548343E86337360]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.lambda$waitUntilTransformsProcessedNewData$3(TransformContinuousIT.java:506) at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:955) at 
org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.waitUntilTransformsProcessedNewData(TransformContinuousIT.java:504) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents(TransformContinuousIT.java:276) 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+master+multijob-unix-compatibility/os/opensuse-15-1&&immutable/x-pack/plugin/transform/qa/multi-node-tests/build/testrun/javaRestTest/temp/org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT_4E7E45250C58F136-001 2> NOTE: test params are: codec=Asserting(Lucene87): {}, docValues:{}, maxPointsInLeafNode=692, maxMBSortInHeap=7.427266613072976, sim=Asserting(RandomSimilarity(queryNorm=false): {}), locale=fi-FI, timezone=Africa/Accra 2> NOTE: Linux 4.12.14-lp151.28.91-default amd64/Oracle Corporation 11.0.2 (64-bit)/cpus=32,threads=1,free=477713648,total=536870912 2> NOTE: All tests run in this JVM: [DateHistogramGroupByIT, DateHistogramGroupByOtherTimeFieldIT, HistogramGroupByIT, LatestContinuousIT, TermsGroupByIT, TermsOnDateGroupByIT, TransformContinuousIT] Tests with failures: - org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents 12 tests completed, 1 failed ```
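The wait in `waitUntilTransformsProcessedNewData` above is an assertBusy-style poll: the test computes a waterline (wait start plus the 1s `sync.time.delay`) and repeatedly fetches transform stats until the last checkpoint's timestamp passes it, failing once the budget runs out. A minimal standalone sketch of that pattern, assuming a cluster on localhost:9200 and that the stats response carries a `timestamp_millis` value — the regex extraction, the 30s budget, and the 500ms backoff are illustrative assumptions, not the test's actual implementation:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// assertBusy-style wait: poll transform stats until the last checkpoint's
// timestamp passes a waterline, or give up after a fixed budget.
public class WaitForTransformProgress {

    // Naive extraction of the first "timestamp_millis" value in the stats JSON;
    // a real test would use a JSON parser and the exact checkpointing field path.
    private static final Pattern TS = Pattern.compile("\"timestamp_millis\"\\s*:\\s*(\\d+)");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String transformId = "continuous-histogram-pivot-test";
        Instant waterline = Instant.now().plusSeconds(1);     // wait start + 1s sync delay
        long deadline = System.currentTimeMillis() + 30_000;  // polling budget (assumed)

        while (System.currentTimeMillis() < deadline) {
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://localhost:9200/_transform/" + transformId + "/_stats"))
                .GET().build();
            String stats = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
            Matcher m = TS.matcher(stats);
            if (m.find() && Instant.ofEpochMilli(Long.parseLong(m.group(1))).isAfter(waterline)) {
                System.out.println("transform caught up past " + waterline);
                return;
            }
            Thread.sleep(500); // back off between polls, like assertBusy
        }
        throw new AssertionError("transform [" + transformId + "] does not progress past " + waterline);
    }
}
```

Against the log above, such a poll kept reading 2021-01-25T07:39:59.383Z and never crossed 2021-01-25T07:40:00.506764Z, which is exactly the AssertionError shown.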
1.0
[CI] TransformContinuousIT.testContinousEvents fails - **Build scan**: https://gradle-enterprise.elastic.co/s/d3vdkpkvqmslc **Repro line**: ```bash ./gradlew ':x-pack:plugin:transform:qa:multi-node-tests:javaRestTest' --tests "org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents" -Dtests.seed=4E7E45250C58F136 -Dtests.security.manager=true -Dtests.locale=fi-FI -Dtests.timezone=Africa/Accra -Druntime.java=11 ``` **Reproduces locally?**: No **Applicable branches**: `master` **Failure history**: https://gradle-enterprise.elastic.co/scans/tests?search.relativeStartTime=P7D&search.timeZoneId=Europe/Berlin&tests.container=org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT&tests.sortField=FAILED&tests.test=testContinousEvents&tests.unstableOnly=true Started happening on Jan 20th. **Failure excerpt**: ``` org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT > testContinousEvents FAILED java.lang.AssertionError: transform [continuous-histogram-pivot-test] does not progress, state: STARTED, reason: null Expected: a value greater than <2021-01-25T07:40:00.506764Z> but: <2021-01-25T07:39:59.383Z> was less than <2021-01-25T07:40:00.506764Z> at __randomizedtesting.SeedInfo.seed([4E7E45250C58F136:7548343E86337360]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.lambda$waitUntilTransformsProcessedNewData$3(TransformContinuousIT.java:506) at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:955) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.waitUntilTransformsProcessedNewData(TransformContinuousIT.java:504) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents(TransformContinuousIT.java:276) REPRODUCE WITH: ./gradlew ':x-pack:plugin:transform:qa:multi-node-tests:javaRestTest' --tests "org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents" -Dtests.seed=4E7E45250C58F136 -Dtests.security.manager=true -Dtests.locale=fi-FI -Dtests.timezone=Africa/Accra -Druntime.java=11 Suite: Test class org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT 1> [2021-01-25T07:36:35,379][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] before test 1> [2021-01-25T07:36:35,601][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] initializing REST clients against [http://[::1]:46643, http://127.0.0.1:36227, http://[::1]:37723, http://127.0.0.1:41451, http://[::1]:43801, http://127.0.0.1:40545] 1> [2021-01-25T07:36:36,840][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":0}}]} 1> [2021-01-25T07:36:37,318][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] Creating source index with: {"settings":{"index":{"number_of_shards":1,"codec":"best_compression"}},"mappings":{"properties":{"timestamp":{"type":"date_nanos","format":"strict_date_optional_time_nanos"},"event":{"type":"keyword"},"metric":{"type":"unsigned_long"},"location":{"type":"geo_point"},"run":{"type":"integer"},"metric-timestamp":{"type":"date_nanos"},"some-timestamp":{"type":"date_nanos"}},"runtime":{"metric-rt-2x":{"type":"long","script":{"source":"if (params._source.metric != null) {emit(params._source.metric * 
2)}"}},"event-upper":{"type":"keyword","script":{"source":"if (params._source.event != null) {emit(params._source.event.toUpperCase())}"}},"timestamp-at-runtime":{"type":"date","script":{"source":"emit(parse(params._source.get('timestamp')))"}},"metric-timestamp-5m-earlier":{"type":"date","script":{"source":"if (doc['metric-timestamp'].size()!=0) {emit(doc['metric-timestamp'].value.minus(5, ChronoUnit.MINUTES).toInstant().toEpochMilli())}"}},"some-timestamp-10m-earlier":{"type":"date","script":{"source":"if (doc['some-timestamp'].size()!=0) {emit(doc['some-timestamp'].value.minus(10, ChronoUnit.MINUTES).toInstant().toEpochMilli())}"}},"metric":{"type":"long","script":{"source":"if (params._source.metric != null) {emit(params._source.metric * 3)}"}}}}} 1> [2021-01-25T07:36:38,433][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-terms-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-terms-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"event":{"terms":{"field":"event","missing_bucket":true}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}},"metric.avg":{"avg":{"field":"metric-rt-2x"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:39,298][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-terms-on-date-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-terms-on-date-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"some-timestamp":{"terms":{"field":"some-timestamp-10m-earlier"}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}},"metric.avg":{"avg":{"field":"metric-rt-2x"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:39,614][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-date-histogram-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-date-histogram-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"second":{"date_histogram":{"field":"timestamp","missing_bucket":true,"fixed_interval":"1s"}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:39,748][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-date-histogram-pivot-other-timefield-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-date-histogram-pivot-other-timefield-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"event":{"terms":{"field":"event-upper"}},"second":{"date_histogram":{"field":"metric-timestamp","fixed_interval":"1s"}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}}}},"settings":{"max_page_search_size":10,"dates_as_epoch_millis":true}} 1> [2021-01-25T07:36:39,934][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] 
putTransform: {"id":"continuous-histogram-pivot-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-histogram-pivot-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"pivot":{"group_by":{"metric":{"histogram":{"field":"metric-rt-2x","interval":50.0}}},"aggregations":{"run.max":{"max":{"field":"run"}},"count":{"value_count":{"field":"run"}},"time.max":{"max":{"field":"timestamp"}}}},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:40,051][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putTransform: {"id":"continuous-latest-test","source":{"index":["test-transform-continuous-events"]},"dest":{"index":"continuous-latest-test","pipeline":"transform-ingest"},"frequency":"1s","sync":{"time":{"field":"timestamp","delay":"1s"}},"latest":{"unique_key":["event"],"sort":"timestamp"},"settings":{"max_page_search_size":10}} 1> [2021-01-25T07:36:40,161][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":0}}]} 1> [2021-01-25T07:36:49,717][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:36:50,341][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:36:50,775][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:36:51,515][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:36:52,166][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:36:52,774][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:36:53,297][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:36:54.296824Z (takes into account the delay: 1s) iteration: 0 1> [2021-01-25T07:36:56,013][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:36:56,151][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:36:56,294][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:36:56,406][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:36:56,518][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:36:56,632][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:36:56,826][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://127.0.0.1:36227/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:36:57,659][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] 
putPipeline: {"processors":[{"set":{"field":"run_ingest","value":1}}]} 1> [2021-01-25T07:37:04,267][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:04,405][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:04,544][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:04,685][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:04,854][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:05,044][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:05,224][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:37:06.224574Z (takes into account the delay: 1s) iteration: 1 1> [2021-01-25T07:37:11,253][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:11,379][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:11,471][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:11,579][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:11,676][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:11,784][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:37:11,934][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:37:12,781][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":2}}]} 1> [2021-01-25T07:37:15,710][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:15,858][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:16,042][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:16,204][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:16,356][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:16,504][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:16,658][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform 
reaches timestamp_millis: 2021-01-25T07:37:17.658275Z (takes into account the delay: 1s) iteration: 2 1> [2021-01-25T07:37:26,820][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:26,917][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:27,017][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:27,112][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:27,211][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:27,300][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:37:27,474][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:37:28,366][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":3}}]} 1> [2021-01-25T07:37:34,367][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:34,487][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:34,632][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:34,764][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:34,910][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:35,060][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:35,195][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:37:36.194908Z (takes into account the delay: 1s) iteration: 3 1> [2021-01-25T07:37:48,116][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:48,212][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:48,301][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:48,396][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:48,492][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:48,578][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:37:48,752][WARN ][o.e.c.RestClient ] [testContinousEvents] request 
[POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:37:49,866][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":4}}]} 1> [2021-01-25T07:37:54,843][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:37:54,995][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:37:55,161][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:37:55,320][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:37:55,488][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:37:55,643][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:37:55,801][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:37:56.801269Z (takes into account the delay: 1s) iteration: 4 1> [2021-01-25T07:38:18,818][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:18,917][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:19,008][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:19,100][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:19,205][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:19,294][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:38:19,462][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://127.0.0.1:41451/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:38:20,882][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":5}}]} 1> [2021-01-25T07:38:26,736][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:26,874][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:27,010][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:27,134][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: 
continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:27,266][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:27,393][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:38:27,505][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:38:28.505661Z (takes into account the delay: 1s) iteration: 5 1> [2021-01-25T07:38:50,647][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:50,747][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:50,836][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:50,964][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:51,066][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:51,168][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:38:51,336][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://127.0.0.1:36227/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:38:52,898][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":6}}]} 1> [2021-01-25T07:38:55,936][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:38:56,048][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:38:56,165][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:38:56,296][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:38:56,425][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:38:56,558][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:38:56,680][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:38:57.680161Z (takes into account the delay: 1s) iteration: 6 1> [2021-01-25T07:39:19,153][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:39:19,244][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:19,325][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:19,415][INFO ][o.e.x.t.i.c.TransformContinuousIT] 
[testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:19,506][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:19,597][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:39:19,744][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:39:21,388][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":7}}]} 1> [2021-01-25T07:39:27,382][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-pivot-test 1> [2021-01-25T07:39:27,498][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:27,624][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:27,746][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:27,862][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:28,008][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:39:28,129][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:39:29.12901Z (takes into account the delay: 1s) iteration: 7 1> [2021-01-25T07:39:52,798][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-pivot-test 1> [2021-01-25T07:39:52,885][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:52,970][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:53,054][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:53,146][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:53,231][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] stopTransform: continuous-latest-test 1> [2021-01-25T07:39:53,369][WARN ][o.e.c.RestClient ] [testContinousEvents] request [POST http://[::1]:46643/_refresh?expand_wildcards=open%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:39:55,290][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] putPipeline: {"processors":[{"set":{"field":"run_ingest","value":8}}]} 1> [2021-01-25T07:39:58,763][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: 
continuous-terms-pivot-test 1> [2021-01-25T07:39:58,882][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:39:59,004][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:39:59,125][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:39:59,249][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:39:59,386][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] startTransform: continuous-latest-test 1> [2021-01-25T07:39:59,506][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] wait until transform reaches timestamp_millis: 2021-01-25T07:40:00.506764Z (takes into account the delay: 1s) iteration: 8 1> [2021-01-25T07:40:22,708][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-date-histogram-pivot-other-timefield-test 1> [2021-01-25T07:40:22,909][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-date-histogram-pivot-test 1> [2021-01-25T07:40:23,058][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-histogram-pivot-test 1> [2021-01-25T07:40:23,213][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-latest-test 1> [2021-01-25T07:40:23,356][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-terms-on-date-pivot-test 1> [2021-01-25T07:40:23,493][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deleteTransform: continuous-terms-pivot-test 1> [2021-01-25T07:40:23,645][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] deletePipeline: transform-ingest 1> [2021-01-25T07:40:23,983][WARN ][o.e.c.RestClient ] [testContinousEvents] request [DELETE http://[::1]:37723/*,-.ds-ilm-history-*?expand_wildcards=open%2Cclosed%2Chidden] returned 1 warnings: [299 Elasticsearch-8.0.0-SNAPSHOT-63c85ff3e846a13ae1294e99fc8ad63b9dd72d38 "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default"] 1> [2021-01-25T07:40:24,335][INFO ][o.e.x.t.i.c.TransformContinuousIT] [testContinousEvents] after test 2> REPRODUCE WITH: ./gradlew ':x-pack:plugin:transform:qa:multi-node-tests:javaRestTest' --tests "org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents" -Dtests.seed=4E7E45250C58F136 -Dtests.security.manager=true -Dtests.locale=fi-FI -Dtests.timezone=Africa/Accra -Druntime.java=11 2> java.lang.AssertionError: transform [continuous-histogram-pivot-test] does not progress, state: STARTED, reason: null Expected: a value greater than <2021-01-25T07:40:00.506764Z> but: <2021-01-25T07:39:59.383Z> was less than <2021-01-25T07:40:00.506764Z> at __randomizedtesting.SeedInfo.seed([4E7E45250C58F136:7548343E86337360]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.lambda$waitUntilTransformsProcessedNewData$3(TransformContinuousIT.java:506) at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:955) at 
org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.waitUntilTransformsProcessedNewData(TransformContinuousIT.java:504) at org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents(TransformContinuousIT.java:276) 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+master+multijob-unix-compatibility/os/opensuse-15-1&&immutable/x-pack/plugin/transform/qa/multi-node-tests/build/testrun/javaRestTest/temp/org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT_4E7E45250C58F136-001 2> NOTE: test params are: codec=Asserting(Lucene87): {}, docValues:{}, maxPointsInLeafNode=692, maxMBSortInHeap=7.427266613072976, sim=Asserting(RandomSimilarity(queryNorm=false): {}), locale=fi-FI, timezone=Africa/Accra 2> NOTE: Linux 4.12.14-lp151.28.91-default amd64/Oracle Corporation 11.0.2 (64-bit)/cpus=32,threads=1,free=477713648,total=536870912 2> NOTE: All tests run in this JVM: [DateHistogramGroupByIT, DateHistogramGroupByOtherTimeFieldIT, HistogramGroupByIT, LatestContinuousIT, TermsGroupByIT, TermsOnDateGroupByIT, TransformContinuousIT] Tests with failures: - org.elasticsearch.xpack.transform.integration.continuous.TransformContinuousIT.testContinousEvents 12 tests completed, 1 failed ```
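The `putPipeline`/`putTransform` lines in the excerpt are plain REST calls against the standard `_ingest/pipeline` and `_transform` endpoints. A hedged sketch recreating that setup for one of the transforms, with the JSON lifted from the log: the localhost port is an assumption, the source index is expected to exist already, the transform id moves from the body into the URL path, and the aggregations are trimmed to one for brevity.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Recreate one pipeline + continuous transform from the log, then start it.
// JSON bodies are copied from the putPipeline/putTransform lines above.
public class SetupContinuousTransform {

    private static final String ES = "http://localhost:9200"; // assumed local cluster

    static String call(HttpClient c, String method, String path, String json) throws Exception {
        HttpRequest.BodyPublisher body = json == null
            ? HttpRequest.BodyPublishers.noBody()
            : HttpRequest.BodyPublishers.ofString(json);
        HttpRequest req = HttpRequest.newBuilder(URI.create(ES + path))
            .header("Content-Type", "application/json")
            .method(method, body)
            .build();
        return c.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient c = HttpClient.newHttpClient();

        // Ingest pipeline that stamps every transform output doc with the run number.
        call(c, "PUT", "/_ingest/pipeline/transform-ingest",
            "{\"processors\":[{\"set\":{\"field\":\"run_ingest\",\"value\":0}}]}");

        // Continuous transform: 1s frequency, 1s sync delay on "timestamp",
        // histogram group_by on the runtime field metric-rt-2x.
        call(c, "PUT", "/_transform/continuous-histogram-pivot-test",
            "{\"source\":{\"index\":[\"test-transform-continuous-events\"]},"
            + "\"dest\":{\"index\":\"continuous-histogram-pivot-test\",\"pipeline\":\"transform-ingest\"},"
            + "\"frequency\":\"1s\",\"sync\":{\"time\":{\"field\":\"timestamp\",\"delay\":\"1s\"}},"
            + "\"pivot\":{\"group_by\":{\"metric\":{\"histogram\":{\"field\":\"metric-rt-2x\",\"interval\":50.0}}},"
            + "\"aggregations\":{\"run.max\":{\"max\":{\"field\":\"run\"}}}},"
            + "\"settings\":{\"max_page_search_size\":10}}");

        System.out.println(call(c, "POST", "/_transform/continuous-histogram-pivot-test/_start", null));
    }
}
```

The 1s `sync.time.delay` is what the failed wait accounts for: the transform deliberately lags real time by the delay, so the test's waterline is wait start plus that delay.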
test
transformcontinuousit testcontinousevents fails build scan repro line bash gradlew x pack plugin transform qa multi node tests javaresttest tests org elasticsearch xpack transform integration continuous transformcontinuousit testcontinousevents dtests seed dtests security manager true dtests locale fi fi dtests timezone africa accra druntime java reproduces locally no applicable branches master failure history started happening on jan failure excerpt org elasticsearch xpack transform integration continuous transformcontinuousit testcontinousevents failed java lang assertionerror transform does not progress state started reason null expected a value greater than but was less than at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org elasticsearch xpack transform integration continuous transformcontinuousit lambda waituntiltransformsprocessednewdata transformcontinuousit java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch xpack transform integration continuous transformcontinuousit waituntiltransformsprocessednewdata transformcontinuousit java at org elasticsearch xpack transform integration continuous transformcontinuousit testcontinousevents transformcontinuousit java reproduce with gradlew x pack plugin transform qa multi node tests javaresttest tests org elasticsearch xpack transform integration continuous transformcontinuousit testcontinousevents dtests seed dtests security manager true dtests locale fi fi dtests timezone africa accra druntime java suite test class org elasticsearch xpack transform integration continuous transformcontinuousit before test initializing rest clients against http http putpipeline processors creating source index with settings index number of shards codec best compression mappings properties timestamp type date nanos format strict date optional time nanos event type keyword metric type unsigned long location type geo point run type integer metric timestamp type date nanos some timestamp type date nanos runtime metric rt type long script source if params source metric null emit params source metric event upper type keyword script source if params source event null emit params source event touppercase timestamp at runtime type date script source emit parse params source get timestamp metric timestamp earlier type date script source if doc size emit doc value minus chronounit minutes toinstant toepochmilli some timestamp earlier type date script source if doc size emit doc value minus chronounit minutes toinstant toepochmilli metric type long script source if params source metric null emit params source metric puttransform id continuous terms pivot test source index dest index continuous terms pivot test pipeline transform ingest frequency sync time field timestamp delay pivot group by event terms field event missing bucket true aggregations run max max field run count value count field run time max max field timestamp metric avg avg field metric rt settings max page search size puttransform id continuous terms on date pivot test source index dest index continuous terms on date pivot test pipeline transform ingest frequency sync time field timestamp delay pivot group by some timestamp terms field some timestamp earlier aggregations run max max field run count value count field run time max max field timestamp metric avg avg field metric rt settings max page search size puttransform id continuous date histogram pivot test source index 
dest index continuous date histogram pivot test pipeline transform ingest frequency sync time field timestamp delay pivot group by second date histogram field timestamp missing bucket true fixed interval aggregations run max max field run count value count field run time max max field timestamp settings max page search size puttransform id continuous date histogram pivot other timefield test source index dest index continuous date histogram pivot other timefield test pipeline transform ingest frequency sync time field timestamp delay pivot group by event terms field event upper second date histogram field metric timestamp fixed interval aggregations run max max field run count value count field run time max max field timestamp settings max page search size dates as epoch millis true puttransform id continuous histogram pivot test source index dest index continuous histogram pivot test pipeline transform ingest frequency sync time field timestamp delay pivot group by metric histogram field metric rt interval aggregations run max max field run count value count field run time max max field timestamp settings max page search size puttransform id continuous latest test source index dest index continuous latest test pipeline transform ingest frequency sync time field timestamp delay latest unique key sort timestamp settings max page search size putpipeline processors starttransform continuous terms pivot test starttransform continuous terms on date pivot test starttransform continuous date histogram pivot test starttransform continuous date histogram pivot other timefield test starttransform continuous histogram pivot test starttransform continuous latest test wait until transform reaches timestamp millis takes into account the delay iteration stoptransform continuous terms pivot test stoptransform continuous terms on date pivot test stoptransform continuous date histogram pivot test stoptransform continuous date histogram pivot other timefield test stoptransform continuous histogram pivot test stoptransform continuous latest test request returned warnings but in a future major version direct access to system indices will be prevented by default putpipeline processors starttransform continuous terms pivot test starttransform continuous terms on date pivot test starttransform continuous date histogram pivot test starttransform continuous date histogram pivot other timefield test starttransform continuous histogram pivot test starttransform continuous latest test wait until transform reaches timestamp millis takes into account the delay iteration stoptransform continuous terms pivot test stoptransform continuous terms on date pivot test stoptransform continuous date histogram pivot test stoptransform continuous date histogram pivot other timefield test stoptransform continuous histogram pivot test stoptransform continuous latest test request refresh expand wildcards open returned warnings but in a future major version direct access to system indices will be prevented by default putpipeline processors starttransform continuous terms pivot test starttransform continuous terms on date pivot test starttransform continuous date histogram pivot test starttransform continuous date histogram pivot other timefield test starttransform continuous histogram pivot test starttransform continuous latest test wait until transform reaches timestamp millis takes into account the delay iteration stoptransform continuous terms pivot test stoptransform continuous terms on date pivot test stoptransform continuous date 
histogram pivot test stoptransform continuous date histogram pivot other timefield test stoptransform continuous histogram pivot test stoptransform continuous latest test request refresh expand wildcards open returned warnings but in a future major version direct access to system indices will be prevented by default putpipeline processors starttransform continuous terms pivot test starttransform continuous terms on date pivot test starttransform continuous date histogram pivot test starttransform continuous date histogram pivot other timefield test starttransform continuous histogram pivot test starttransform continuous latest test wait until transform reaches timestamp millis takes into account the delay iteration stoptransform continuous terms pivot test stoptransform continuous terms on date pivot test stoptransform continuous date histogram pivot test stoptransform continuous date histogram pivot other timefield test stoptransform continuous histogram pivot test stoptransform continuous latest test deletetransform continuous date histogram pivot other timefield test deletetransform continuous date histogram pivot test deletetransform continuous histogram pivot test deletetransform continuous latest test deletetransform continuous terms on date pivot test deletetransform continuous terms pivot test deletepipeline transform ingest request ds ilm history expand wildcards open returned warnings but in a future major version direct access to system indices will be prevented by default after test reproduce with gradlew x pack plugin transform qa multi node tests javaresttest tests org elasticsearch xpack transform integration continuous transformcontinuousit testcontinousevents dtests seed dtests security manager true dtests locale fi fi dtests timezone africa accra druntime java java lang assertionerror transform does not progress state started reason null expected a value greater than but was less than at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org elasticsearch xpack transform integration continuous transformcontinuousit lambda waituntiltransformsprocessednewdata transformcontinuousit java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch xpack transform integration continuous transformcontinuousit waituntiltransformsprocessednewdata transformcontinuousit java at org elasticsearch xpack transform integration continuous transformcontinuousit testcontinousevents transformcontinuousit java note leaving temporary files on disk at var lib jenkins workspace elastic elasticsearch master multijob unix compatibility os opensuse immutable x pack plugin transform qa multi node tests build testrun javaresttest temp org elasticsearch xpack transform integration continuous transformcontinuousit note test params are codec asserting docvalues maxpointsinleafnode maxmbsortinheap sim asserting randomsimilarity querynorm false locale fi fi timezone africa accra note linux default oracle corporation bit cpus threads free total note all tests run in this jvm tests with failures org elasticsearch xpack transform integration continuous transformcontinuousit
testcontinousevents tests completed failed
1
133,054
10,787,831,163
IssuesEvent
2019-11-05 08:31:26
fedora-infra/bodhi
https://api.github.com/repos/fedora-infra/bodhi
closed
Convert bodhi/tests/server/test_alembic.py to PyTest
EasyFix Low Priority Tests
[bodhi/tests/server/test_alembic.py](https://github.com/fedora-infra/bodhi/blob/develop/bodhi/tests/server/test_alembic.py) still contains classes that derive from `unittest.TestCase`. We use PyTest, so they could be migrated to not derive from `unittest.TestCase`, or derive from `bodhi.tests.server.base.BasePyTestCase` in case they need to use the test database or WSGI app. This also allows the use of the simpler assert constructs instead of the self.assert*() methods. Instead of unittest `setUp()` or `tearDown()` methods, PyTest uses `setup_method(...)` or `teardown_method(...)`, or analog methods to setup/teardown stuff for different scopes. The following PR is an example of the changes that might need to be done to this file: #3612
1.0
Convert bodhi/tests/server/test_alembic.py to PyTest - [bodhi/tests/server/test_alembic.py](https://github.com/fedora-infra/bodhi/blob/develop/bodhi/tests/server/test_alembic.py) still contains classes that derive from `unittest.TestCase`. We use PyTest, so they could be migrated to not derive from `unittest.TestCase`, or derive from `bodhi.tests.server.base.BasePyTestCase` in case they need to use the test database or WSGI app. This also allows the use of the simpler assert constructs instead of the self.assert*() methods. Instead of unittest `setUp()` or `tearDown()` methods, PyTest uses `setup_method(...)` or `teardown_method(...)`, or analog methods to setup/teardown stuff for different scopes. The following PR is an example of the changes that might need to be done to this file: #3612
test
convert bodhi tests server test alembic py to pytest still contains classes that derive from unittest testcase we use pytest so they could be migrated to not derive from unittest testcase or derive from bodhi tests server base basepytestcase in case they need to use the test database or wsgi app this also allows the use of the simpler assert constructs instead of the self assert methods instead of unittest setup or teardown methods pytest uses setup method or teardown method or analog methods to setup teardown stuff for different scopes the following pr is an example of the changes that might need to be done to this file
1
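A minimal sketch of the migration this issue asks for, using hypothetical stand-ins (`load_config`, `get_heads`) rather than the real helpers in bodhi/tests/server/test_alembic.py:

```python
import unittest

def load_config():
    # Hypothetical stand-in for loading the Alembic config.
    return {"script_location": "bodhi/server/migrations"}

def get_heads(config):
    # Hypothetical stand-in for reading Alembic's migration heads.
    return ["abc123"]

# unittest style, as the issue says the file currently uses:
class TestAlembicUnittest(unittest.TestCase):
    def setUp(self):
        self.config = load_config()

    def test_heads_match(self):
        self.assertEqual(get_heads(self.config), ["abc123"])

# Equivalent plain-PyTest style: no TestCase base class, setup_method()
# instead of setUp(), and a bare assert instead of self.assertEqual().
class TestAlembicPytest:
    def setup_method(self, method):
        self.config = load_config()

    def test_heads_match(self):
        assert get_heads(self.config) == ["abc123"]
```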
199,931
15,083,321,211
IssuesEvent
2021-02-05 15:41:16
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
Multiindex slicing with NaNs, unexpected results
Missing-data MultiIndex Needs Tests good first issue
#### Code Sample, a copy-pastable example if possible ```python import pandas as pd df = pd.DataFrame( pd.np.random.rand(2, 3), columns=pd.MultiIndex.from_tuples([('a', 'foo'), ('b', 'bar'), ('b', pd.np.nan)], names=['first','second']) ) # EXPECTED slicing everything on first level df.loc[:, (['a', 'b'])] Out[35]: first a b second foo bar NaN 0 0.678021 0.383672 0.074164 1 0.738492 0.992545 0.661247 # EXPECTED just slicing one value from first level df.loc[:, (['b'])] Out[29]: first b second bar NaN 0 0.383672 0.074164 1 0.992545 0.661247 # EXPECTED slicing out b, bar df.loc[:, (['b'], ['bar'])] Out[33]: first b second bar 0 0.383672 1 0.992545 # UNEXPECTED slicing out b, nan df.loc[:, (['b'], [pd.np.nan])] Out[36]: Empty DataFrame Columns: [] Index: [0, 1] # UNEXPECTED slicing out b, [nan, 'bar'] df.loc[:, (['b'], ['bar', pd.np.nan])] Out[39]: first b second bar 0 0.383672 1 0.992545 # EXPECTED slicing out b, nan without the index df.loc[:, ('b', pd.np.nan)] Out[37]: 0 0.074164 1 0.661247 Name: (b, nan), dtype: float64 ``` #### Problem description When trying to slice out multiple values from a particular level including levels with a nan value, the levels with nan are not retrieved. #### Expected Output Both of these I expect to work: ```python df.loc[:, (['b'], ['bar', pd.np.nan])] Out[40]: first b second bar NaN 0 0.383672 0.074164 1 0.992545 0.661247 df.loc[:, (['b'], [pd.np.nan])] Out[40]: first b second NaN 0 0.074164 1 0.661247 ``` #### Output of ``pd.show_versions()`` <details> [paste the output of ``pd.show_versions()`` here below this line] INSTALLED VERSIONS ------------------ commit: None python: 2.7.15.final.0 python-bits: 64 OS: Linux OS-release: 3.10.0-327.36.3.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: None LOCALE: None.None pandas: 0.22.0 pytest: 3.10.0 pip: 18.1 setuptools: 40.5.0 Cython: 0.28.5 numpy: 1.14.2 scipy: 1.0.1 pyarrow: None xarray: 0.10.9 IPython: 5.8.0 sphinx: 1.8.1 patsy: 0.5.1 dateutil: 2.7.2 pytz: 2018.7 blosc: None bottleneck: 1.2.1 tables: 3.4.4 numexpr: 2.6.7 feather: None matplotlib: 2.2.3 openpyxl: 2.5.9 xlrd: 1.1.0 xlwt: 1.3.0 xlsxwriter: 1.1.2 lxml: 4.2.1 bs4: 4.6.3 html5lib: 1.0.1 sqlalchemy: 1.2.11 pymysql: None psycopg2: 2.7.5 (dt dec pq3 ext lo64) jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None </details>
1.0
Multiindex slicing with NaNs, unexpected results - #### Code Sample, a copy-pastable example if possible ```python import pandas as pd df = pd.DataFrame( pd.np.random.rand(2, 3), columns=pd.MultiIndex.from_tuples([('a', 'foo'), ('b', 'bar'), ('b', pd.np.nan)], names=['first','second']) ) # EXPECTED slicing everything on first level df.loc[:, (['a', 'b'])] Out[35]: first a b second foo bar NaN 0 0.678021 0.383672 0.074164 1 0.738492 0.992545 0.661247 # EXPECTED just slicing one value from first level df.loc[:, (['b'])] Out[29]: first b second bar NaN 0 0.383672 0.074164 1 0.992545 0.661247 # EXPECTED slicing out b, bar df.loc[:, (['b'], ['bar'])] Out[33]: first b second bar 0 0.383672 1 0.992545 # UNEXPECTED slicing out b, nan df.loc[:, (['b'], [pd.np.nan])] Out[36]: Empty DataFrame Columns: [] Index: [0, 1] # UNEXPECTED slicing out b, [nan, 'bar'] df.loc[:, (['b'], ['bar', pd.np.nan])] Out[39]: first b second bar 0 0.383672 1 0.992545 # EXPECTED slicing out b, nan without the index df.loc[:, ('b', pd.np.nan)] Out[37]: 0 0.074164 1 0.661247 Name: (b, nan), dtype: float64 ``` #### Problem description When trying to slice out multiple values from a particular level including levels with a nan value, the levels with nan are not retrieved. #### Expected Output Both of these I expect to work: ```python df.loc[:, (['b'], ['bar', pd.np.nan])] Out[40]: first b second bar NaN 0 0.383672 0.074164 1 0.992545 0.661247 df.loc[:, (['b'], [pd.np.nan])] Out[40]: first b second NaN 0 0.074164 1 0.661247 ``` #### Output of ``pd.show_versions()`` <details> [paste the output of ``pd.show_versions()`` here below this line] INSTALLED VERSIONS ------------------ commit: None python: 2.7.15.final.0 python-bits: 64 OS: Linux OS-release: 3.10.0-327.36.3.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: None LOCALE: None.None pandas: 0.22.0 pytest: 3.10.0 pip: 18.1 setuptools: 40.5.0 Cython: 0.28.5 numpy: 1.14.2 scipy: 1.0.1 pyarrow: None xarray: 0.10.9 IPython: 5.8.0 sphinx: 1.8.1 patsy: 0.5.1 dateutil: 2.7.2 pytz: 2018.7 blosc: None bottleneck: 1.2.1 tables: 3.4.4 numexpr: 2.6.7 feather: None matplotlib: 2.2.3 openpyxl: 2.5.9 xlrd: 1.1.0 xlwt: 1.3.0 xlsxwriter: 1.1.2 lxml: 4.2.1 bs4: 4.6.3 html5lib: 1.0.1 sqlalchemy: 1.2.11 pymysql: None psycopg2: 2.7.5 (dt dec pq3 ext lo64) jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None </details>
test
multiindex slicing with nans unexpected results code sample a copy pastable example if possible python import pandas as pd df pd dataframe pd np random rand columns pd multiindex from tuples names expected slicing everything on first level df loc out first a b second foo bar nan expected just slicing one value from first level df loc out first b second bar nan expected slicing out b bar df loc out first b second bar unexpected slicing out b nan df loc out empty dataframe columns index unexpected slicing out b df loc out first b second bar expected slicing out b nan without the index df loc out name b nan dtype problem description when trying to slice out multiple values from a particular level including levels with a nan value the levels with nan are not retrieved expected output both of these i expect to work python df loc out first b second bar nan df loc out first b second nan output of pd show versions installed versions commit none python final python bits os linux os release machine processor byteorder little lc all none lang none locale none none pandas pytest pip setuptools cython numpy scipy pyarrow none xarray ipython sphinx patsy dateutil pytz blosc none bottleneck tables numexpr feather none matplotlib openpyxl xlrd xlwt xlsxwriter lxml sqlalchemy pymysql none dt dec ext none fastparquet none pandas gbq none pandas datareader none
1
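A hedged workaround sketch for the behaviour reported above: on the affected version, list-based `.loc` lookups drop NaN labels, so one option is to select columns with an explicit boolean mask over the level values (written with plain `numpy` rather than the deprecated `pd.np` accessor):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.random.rand(2, 3),
    columns=pd.MultiIndex.from_tuples(
        [("a", "foo"), ("b", "bar"), ("b", np.nan)],
        names=["first", "second"],
    ),
)

# Match NaN labels explicitly instead of passing them in a .loc list,
# which silently drops them on the affected pandas version.
first = df.columns.get_level_values("first")
second = df.columns.get_level_values("second")
mask = (first == "b") & (second.isna() | (second == "bar"))
print(df.loc[:, mask])
```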
322,490
27,611,549,164
IssuesEvent
2023-03-09 16:18:33
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
Enable Brave VPN feature flag by default on Desktop
release/blocking OS/macOS OS/Windows QA Pass-Win64 QA Pass-macOS QA Pass-Linux QA/Yes release-notes/include OS/Desktop feature/vpn QA/Test-All-Platforms
## Description Before we can go live with VPN, we need to enable the feature by default. It's already live for iOS and Android Currently, folks testing have to enable using `brave://flags/#brave-vpn` We should enable VPN by default using Griffin (https://griffin.brave.com/) ## Overall VPN Test plan This area is a work in progress! It will continue to grow as we get closer to rollout ### Pre-requisites - [x] Finish the Linking page (on accounts.brave.com) and deploy to all environments https://github.com/brave/account-brave-com/issues/24 - [x] Push the Brave VPN SKU to Production (it's already in dev and staging) - [x] Add the "Link account" button on Android https://github.com/brave/brave-browser/issues/21972 - [x] Add the "Link account" button on iOS https://github.com/brave/brave-ios/issues/5175 - [x] Website??? ### Desktop NOTE: VPN only available on macOS and Windows - [x] Original Desktop test plan https://github.com/brave/brave-browser/issues/15804 1. VPN menu should show next to hamburger menu 2. Click the VPN button and it should show a promo 3. Click the promo and it should go to account.brave.com 4. Login to account.brave.com with a new account 5. Once signed in, click `Plans` on menu on left 6. Brave VPN should show as a product 7. Click `Buy now` for Brave VPN and complete checkout 8. You should now be able to click `VPN` menu (next to hamburger menu) and see a server list 9. Choose a server and connect 10. Verify you're connected by visiting https://whatismyipaddress.com/ 11. You can switch profiles and verify the other profiles can connect also - [x] What checks are needed for Linux? - See https://bravesoftware.slack.com/archives/CC5SA8CCB/p1669218929305149 for recent suggestions of what to cover. ### Mobile - [x] Using in-app-purchase on iOS or Android and linking https://github.com/brave/account-brave-com/issues/24 #### iOS - Buying on desktop and redeeming on iOS https://github.com/brave/brave-ios/issues/4805 #### Android - Buying on desktop and redeeming on Android https://github.com/brave/brave-browser/issues/20374 - See test plan at https://github.com/brave/brave-core/pull/14715 ### Known issues with the VPN service itself Brave employees should have access to https://github.com/brave/support-guardian-vpn/projects/1
1.0
Enable Brave VPN feature flag by default on Desktop - ## Description Before we can go live with VPN, we need to enable the feature by default. It's already live for iOS and Android Currently, folks testing have to enable using `brave://flags/#brave-vpn` We should enable VPN by default using Griffin (https://griffin.brave.com/) ## Overall VPN Test plan This area is a work in progress! It will continue to grow as we get closer to rollout ### Pre-requisites - [x] Finish the Linking page (on accounts.brave.com) and deploy to all environments https://github.com/brave/account-brave-com/issues/24 - [x] Push the Brave VPN SKU to Production (it's already in dev and staging) - [x] Add the "Link account" button on Android https://github.com/brave/brave-browser/issues/21972 - [x] Add the "Link account" button on iOS https://github.com/brave/brave-ios/issues/5175 - [x] Website??? ### Desktop NOTE: VPN only available on macOS and Windows - [x] Original Desktop test plan https://github.com/brave/brave-browser/issues/15804 1. VPN menu should show next to hamburger menu 2. Click the VPN button and it should show a promo 3. Click the promo and it should go to account.brave.com 4. Login to account.brave.com with a new account 5. Once signed in, click `Plans` on menu on left 6. Brave VPN should show as a product 7. Click `Buy now` for Brave VPN and complete checkout 8. You should now be able to click `VPN` menu (next to hamburger menu) and see a server list 9. Choose a server and connect 10. Verify you're connected by visiting https://whatismyipaddress.com/ 11. You can switch profiles and verify the other profiles can connect also - [x] What checks are needed for Linux? - See https://bravesoftware.slack.com/archives/CC5SA8CCB/p1669218929305149 for recent suggestions of what to cover. ### Mobile - [x] Using in-app-purchase on iOS or Android and linking https://github.com/brave/account-brave-com/issues/24 #### iOS - Buying on desktop and redeeming on iOS https://github.com/brave/brave-ios/issues/4805 #### Android - Buying on desktop and redeeming on Android https://github.com/brave/brave-browser/issues/20374 - See test plan at https://github.com/brave/brave-core/pull/14715 ### Known issues with the VPN service itself Brave employees should have access to https://github.com/brave/support-guardian-vpn/projects/1
test
enable brave vpn feature flag by default on desktop description before we can go live with vpn we need to enable the feature by default it s already live for ios and android currently folks testing have to enable using brave flags brave vpn we should enable vpn by default using griffin overall vpn test plan this area is a work in progress it will continue to grow as we get closer to rollout pre requisites finish the linking page on accounts brave com and deploy to all environments push the brave vpn sku to production it s already in dev and staging add the link account button on android add the link account button on ios website desktop note vpn only available on macos and windows original desktop test plan vpn menu should show next to hamburger menu click the vpn button and it should show a promo click the promo and it should go to account brave com login to account brave com with a new account once signed in click plans on menu on left brave vpn should show as a product click buy now for brave vpn and complete checkout you should now be able to click vpn menu next to hamburger menu and see a server list choose a server and connect verify you re connected by visiting you can switch profiles and verify the other profiles can connect also what checks are needed for linux see for recent suggestions of what to cover mobile using in app purchase on ios or android and linking ios buying on desktop and redeeming on ios android buying on desktop and redeeming on android see test plan at known issues with the vpn service itself brave employees should have access to
1
309,171
26,654,479,519
IssuesEvent
2023-01-25 15:53:37
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
opened
Manual test run on OSX/macOS for 1.49.x - Nightly (Checking C110)
tests OS/macOS QA/Yes release-notes/exclude OS/Desktop
The following basically checks and ensures that you can install the latest Nightly that includes `C109` on older `macOS` versions without any issues/startup crashes. For reference, we had a startup crash on `C97` as per https://github.com/brave/brave-browser/issues/20351 that wasn't caught before we released. * [ ] macOS High Sierra Note - per https://github.com/brave/brave-browser/issues/23748, Brave with Chromium 104+ will no longer work on the below versions * [ ] macOS Sierra * [ ] OS X El Captain
1.0
Manual test run on OSX/macOS for 1.49.x - Nightly (Checking C110) - The following basically checks and ensures that you can install the latest Nightly that includes `C109` on older `macOS` versions without any issues/startup crashes. For reference, we had a startup crash on `C97` as per https://github.com/brave/brave-browser/issues/20351 that wasn't caught before we released. * [ ] macOS High Sierra Note - per https://github.com/brave/brave-browser/issues/23748, Brave with Chromium 104+ will no longer work on the below versions * [ ] macOS Sierra * [ ] OS X El Captain
test
manual test run on osx macos for x nightly checking the following basically checks and ensures that you can install the latest nightly that includes on older macos versions without any issues startup crashes for reference we had a startup crash on as per that wasn t caught before we released macos high sierra note per brave with chromium will no longer work on the below versions macos sierra os x el captain
1
220,650
17,214,128,298
IssuesEvent
2021-07-19 09:17:24
ukwa/ukwa-ui
https://api.github.com/repos/ukwa/ukwa-ui
closed
Search box squeezed in mobile
Mobile Devices Search priority: high testing: ready to test ux-critical
When using a smaller screen, specifically anything below 448px wide, the search box is squeezed smaller by the search button; in its smaller state the search text is hidden and the ability to type is removed. ![image](https://user-images.githubusercontent.com/18530934/45372999-3caee300-b5e6-11e8-8713-f8001e4751bf.png)
2.0
Search box squeezed in mobile - When using a smaller screen, specifically anything below 448px wide, the search box is squeezed smaller by the search button; in its smaller state the search text is hidden and the ability to type is removed. ![image](https://user-images.githubusercontent.com/18530934/45372999-3caee300-b5e6-11e8-8713-f8001e4751bf.png)
test
search box squeezed in mobile when using a smaller screen specifically anything below wide the search box is squeezed smaller by the search button in its smaller state the search text is hidden and the ability to type is removed
1
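A hedged note on a likely fix, assuming the search bar uses a flex layout (the issue does not show the actual markup): letting the input shrink with `flex: 1 1 auto; min-width: 0;` while pinning the button with `flex: 0 0 auto;`, or stacking the two below a width breakpoint with a media query such as `@media (max-width: 448px)`, would keep the typed text visible on narrow screens.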
249,912
21,215,818,922
IssuesEvent
2022-04-11 07:11:54
Uuvana-Studios/longvinter-windows-client
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
closed
The game that worked yesterday doesn't open today
Bug Not Tested
The game I played well yesterday doesn't open today I played the game for a day and made a turret It doesn't open on all servers I'm so angry ㅠㅠ.; **Desktop (please complete the following information):** - OS: Windows - Game Version 1.0.2 - Steam Version Latest **Additional context** Add any other context about the problem here.
1.0
The game that worked yesterday doesn't open today - The game I played well yesterday doesn't open today I played the game for a day and made a turret It doesn't open on all servers I'm so angry ㅠㅠ.; **Desktop (please complete the following information):** - OS: Windows - Game Version 1.0.2 - Steam Version Latest **Additional context** Add any other context about the problem here.
test
the game that worked yesterday doesn t open today the game i played well yesterday doesn t open today i played the game for a day and made a turret it doesn t open on all servers i m so angry ㅠㅠ desktop please complete the following information os windows game version steam version latest additional context add any other context about the problem here
1
437,893
12,604,182,539
IssuesEvent
2020-06-11 14:35:56
opencollective/opencollective
https://api.github.com/repos/opencollective/opencollective
closed
Error visiting Collective Page after disabling items on Collective Page.
bug frontend priority
After disabling sections of `Collective Page`, going to the Collective Page itself shows this error: ![image](https://user-images.githubusercontent.com/21095/84389623-2e22a080-abab-11ea-84a7-61143d739049.png) **To Reproduce** 0. Uncheck items on `Collective Page` in _Settings_ 0. Click `Save` 0. Click `Visit Collective Page` next to `Save`. 0. See error above, and must reload to see page. **Expected behavior** After disabling sections on the `Collective Page`, visiting the Collective Page ought to be immediate without error, and with/without the sections selected in _Settings_. **Desktop (please complete the following information):** - Mint Linux - Google Chrome - 83.0.4103.97
1.0
Error visiting Collective Page after disabling items on Collective Page. - After disabling sections of `Collective Page`, going to the Collective Page itself shows this error: ![image](https://user-images.githubusercontent.com/21095/84389623-2e22a080-abab-11ea-84a7-61143d739049.png) **To Reproduce** 0. Uncheck items on `Collective Page` in _Settings_ 0. Click `Save` 0. Click `Visit Collective Page` next to `Save`. 0. See error above, and must reload to see page. **Expected behavior** After disabling sections on the `Collective Page`, visiting the Collective Page ought to be immediate without error, and with/without the sections selected in _Settings_. **Desktop (please complete the following information):** - Mint Linux - Google Chrome - 83.0.4103.97
non_test
error visiting collective page after disabling items on collective page after disabling sections of collective page going to the collective page itself shows this error to reproduce uncheck items on collective page in settings click save click visit collective page next to save see error above and must reload to see page expected behavior after disabling sections on the collective page visiting the collective page ought to be immediate without error and with without the sections selected in settings desktop please complete the following information mint linux google chrome
0
183,842
14,959,959,513
IssuesEvent
2021-01-27 04:34:37
tmobile/magtape
https://api.github.com/repos/tmobile/magtape
closed
Enable functional tests to have a descriptive name
ci documentation enhancement important soon yaml
<!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: Allow a descriptive name to be associated with each functional test which can be printed during testing. **Why is this needed**: Enable more meaningful output during functional testing to make it easier to determine exactly what is being tested. **Documentation update**: Once this is implemented and descriptive names are set for each test the [Test Samples Available](https://github.com/tmobile/magtape/tree/master/testing#test-samples-available) table can be removed from readme.md
1.0
Enable functional tests to have a descriptive name - <!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: Allow a descriptive name to be associated with each functional test which can be printed during testing. **Why is this needed**: Enable more meaningful output during functional testing to make it easier to determine exactly what is being tested. **Documentation update**: Once this is implemented and descriptive names are set for each test the [Test Samples Available](https://github.com/tmobile/magtape/tree/master/testing#test-samples-available) table can be removed from readme.md
non_test
enable functional tests to have a descriptive name what would you like to be added allow a descriptive name to be associated with each functional test which can be printed during testing why is this needed enable more meaningful output during functional testing to make it easier to determine exactly what is being tested documentation update once this is implemented and descriptive names are set for each test the table can be removed from readme md
0
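A minimal sketch of the requested behaviour, assuming the functional tests are declared in a YAML file; the `name` field and file layout here are hypothetical, not MagTape's actual test schema:

```python
import yaml  # requires PyYAML

TESTS_YAML = """
tests:
  - name: "Deployment without required labels is denied"
    manifest: "deploy-missing-labels.yaml"
    expected: "deny"
  - name: "Compliant deployment is allowed"
    manifest: "deploy-ok.yaml"
    expected: "allow"
"""

# Print the descriptive name for each test as it runs, so the output
# says what is being checked rather than just which manifest was used.
for test in yaml.safe_load(TESTS_YAML)["tests"]:
    print(f"[TEST] {test['name']} ({test['manifest']}) -> expect {test['expected']}")
```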
82,888
7,855,163,537
IssuesEvent
2018-06-21 00:00:35
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Not able to import an existing GKE Cluster.
area/ui kind/bug status/resolved status/to-test version/2.0
**Rancher versions: Build from master - Jun 15 **Steps to Reproduce:** Tried to import an existing GKE Cluster. It fails to get imported with following errors when running the kubectl command: ```curl --insecure -sfL https://<ip>/v3/import/j7kc2knmwsr2lfvrsw2rvlvk5xrd54zm4hzcs85k7mh6vjmkgm228q.yaml | kubectl apply -f -``` ``` namespace "cattle-system" created serviceaccount "cattle" created clusterrolebinding "cattle-admin-binding" created secret "cattle-credentials-108fedd" created deployment "cattle-cluster-agent" created daemonset "cattle-node-agent" created Error from server (Forbidden): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "cattle-admin" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["*"], APIGroups:["*"], Verbs:["*"]} PolicyRule{NonResourceURLs:["*"], Verbs:["*"]}] user=&{sangee2004@gmail.com [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[] ```
1.0
Not able to import an existing GKE Cluster. - **Rancher versions: Build from master - Jun 15 **Steps to Reproduce:** Tried to import an existing GKE Cluster. It fails to get imported with following errors when running the kubectl command: ```curl --insecure -sfL https://<ip>/v3/import/j7kc2knmwsr2lfvrsw2rvlvk5xrd54zm4hzcs85k7mh6vjmkgm228q.yaml | kubectl apply -f -``` ``` namespace "cattle-system" created serviceaccount "cattle" created clusterrolebinding "cattle-admin-binding" created secret "cattle-credentials-108fedd" created deployment "cattle-cluster-agent" created daemonset "cattle-node-agent" created Error from server (Forbidden): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "cattle-admin" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["*"], APIGroups:["*"], Verbs:["*"]} PolicyRule{NonResourceURLs:["*"], Verbs:["*"]}] user=&{sangee2004@gmail.com [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[] ```
test
not able to import an existing gke cluster rancher versions build from master jun steps to reproduce tried to import an existing gke cluster it fails to get imported with following errors when running the kubectl command curl insecure sfl kubectl apply f namespace cattle system created serviceaccount cattle created clusterrolebinding cattle admin binding created secret cattle credentials created deployment cattle cluster agent created daemonset cattle node agent created error from server forbidden error when creating stdin clusterroles rbac authorization io cattle admin is forbidden attempt to grant extra privileges apigroups verbs policyrule nonresourceurls verbs user gmail com map ownerrules apigroups verbs policyrule nonresourceurls verbs ruleresolutionerrors
1
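A hedged note on the error above: GKE's RBAC refuses to let a user grant privileges they do not already hold, and the workaround commonly documented at the time was to first bind your own Google account to `cluster-admin`, e.g. `kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user <your-google-account-email>`, and then re-run the import command.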
121,557
17,659,493,414
IssuesEvent
2021-08-21 07:32:16
LaudateCorpus1/vscode-main
https://api.github.com/repos/LaudateCorpus1/vscode-main
closed
CVE-2021-23382 (Medium) detected in postcss-7.0.21.tgz, postcss-7.0.35.tgz - autoclosed
security vulnerability
## CVE-2021-23382 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary> <p> <details><summary><b>postcss-7.0.21.tgz</b></p></summary> <p>Tool for transforming styles with JS plugins</p> <p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p> <p>Path to dependency file: vscode-main/vscode-main/package.json</p> <p>Path to vulnerable library: vscode-main/vscode-main/node_modules/postcss</p> <p> Dependency Hierarchy: - gulp-sourcemaps-3.0.0.tgz (Root Library) - identity-map-2.0.1.tgz - :x: **postcss-7.0.21.tgz** (Vulnerable Library) </details> <details><summary><b>postcss-7.0.35.tgz</b></p></summary> <p>Tool for transforming styles with JS plugins</p> <p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p> <p>Path to dependency file: vscode-main/vscode-main/package.json</p> <p>Path to vulnerable library: vscode-main/vscode-main/node_modules/postcss</p> <p> Dependency Hierarchy: - cssnano-4.0.0.tgz (Root Library) - :x: **postcss-7.0.35.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/LaudateCorpus1/vscode-main/commit/dac2792601ad937b8a5e57c01570163810634b94">dac2792601ad937b8a5e57c01570163810634b94</a></p> <p>Found in base branch: <b>dev1</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*). <p>Publish Date: 2021-04-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p> <p>Release Date: 2021-04-26</p> <p>Fix Resolution: postcss - 8.2.13</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23382 (Medium) detected in postcss-7.0.21.tgz, postcss-7.0.35.tgz - autoclosed - ## CVE-2021-23382 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary> <p> <details><summary><b>postcss-7.0.21.tgz</b></p></summary> <p>Tool for transforming styles with JS plugins</p> <p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p> <p>Path to dependency file: vscode-main/vscode-main/package.json</p> <p>Path to vulnerable library: vscode-main/vscode-main/node_modules/postcss</p> <p> Dependency Hierarchy: - gulp-sourcemaps-3.0.0.tgz (Root Library) - identity-map-2.0.1.tgz - :x: **postcss-7.0.21.tgz** (Vulnerable Library) </details> <details><summary><b>postcss-7.0.35.tgz</b></p></summary> <p>Tool for transforming styles with JS plugins</p> <p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p> <p>Path to dependency file: vscode-main/vscode-main/package.json</p> <p>Path to vulnerable library: vscode-main/vscode-main/node_modules/postcss</p> <p> Dependency Hierarchy: - cssnano-4.0.0.tgz (Root Library) - :x: **postcss-7.0.35.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/LaudateCorpus1/vscode-main/commit/dac2792601ad937b8a5e57c01570163810634b94">dac2792601ad937b8a5e57c01570163810634b94</a></p> <p>Found in base branch: <b>dev1</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*). <p>Publish Date: 2021-04-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p> <p>Release Date: 2021-04-26</p> <p>Fix Resolution: postcss - 8.2.13</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in postcss tgz postcss tgz autoclosed cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file vscode main vscode main package json path to vulnerable library vscode main vscode main node modules postcss dependency hierarchy gulp sourcemaps tgz root library identity map tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file vscode main vscode main package json path to vulnerable library vscode main vscode main node modules postcss dependency hierarchy cssnano tgz root library x postcss tgz vulnerable library found in head commit a href found in base branch vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss step up your open source security game with whitesource
0
128,581
10,543,714,402
IssuesEvent
2019-10-02 15:28:28
omeka-s-modules/Mapping
https://api.github.com/repos/omeka-s-modules/Mapping
opened
Look over timeline layout and styles
testing
Test with our supported themes that: * the timeline now sits left of the map in a horizontal layout * the timeline text uses the theme's font selections
1.0
Look over timeline layout and styles - Test with our supported themes that: * the timeline now sits left of the map in a horizontal layout * the timeline text uses the theme's font selections
test
look over timeline layout and styles test with our supported themes that the timeline now sits left of the map in a horizontal layout the timeline text uses the theme s font selections
1
271,478
23,606,733,567
IssuesEvent
2022-08-24 08:53:59
spring-projects/spring-batch
https://api.github.com/repos/spring-projects/spring-batch
closed
ClassNotFound Exception when using AssertFile
in: test type: bug
When using `AssertFile.assertLineCount` with Spring Boot 3.0.0-SNAPSHOT a CNFE is thrown because org.junit.Assert package name has been changed to `org.junit.jupiter.api.Assertions`.
1.0
ClassNotFound Exception when using AssertFile - When using `AssertFile.assertLineCount` with Spring Boot 3.0.0-SNAPSHOT a CNFE is thrown because org.junit.Assert package name has been changed to `org.junit.jupiter.api.Assertions`.
test
classnotfound exception when using assertfile when using assertfile assertlinecount with spring boot snapshot a cnfe is thrown because org junit assert package name has been changed to org junit jupiter api assertions
1
242,584
26,277,738,417
IssuesEvent
2023-01-07 01:04:21
matrix-profile-foundation/matrixprofile-web
https://api.github.com/repos/matrix-profile-foundation/matrixprofile-web
opened
CVE-2021-23382 (High) detected in postcss-7.0.25.tgz
security vulnerability
## CVE-2021-23382 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.25.tgz</b></p></summary> <p>Tool for transforming styles with JS plugins</p> <p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.25.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.25.tgz</a></p> <p>Path to dependency file: /mpfrontend/package.json</p> <p>Path to vulnerable library: /mpfrontend/node_modules/postcss/package.json</p> <p> Dependency Hierarchy: - cli-service-4.1.2.tgz (Root Library) - optimize-cssnano-plugin-1.0.6.tgz - :x: **postcss-7.0.25.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/matrix-profile-foundation/matrixprofile-web/commit/4fe35a73b9fb9e1e315e1b790b6e2a4edfd75fcf">4fe35a73b9fb9e1e315e1b790b6e2a4edfd75fcf</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*). <p>Publish Date: 2021-04-26 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p> <p>Release Date: 2021-04-26</p> <p>Fix Resolution (postcss): 7.0.36</p> <p>Direct dependency fix Resolution (@vue/cli-service): 4.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23382 (High) detected in postcss-7.0.25.tgz - ## CVE-2021-23382 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.25.tgz</b></p></summary> <p>Tool for transforming styles with JS plugins</p> <p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.25.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.25.tgz</a></p> <p>Path to dependency file: /mpfrontend/package.json</p> <p>Path to vulnerable library: /mpfrontend/node_modules/postcss/package.json</p> <p> Dependency Hierarchy: - cli-service-4.1.2.tgz (Root Library) - optimize-cssnano-plugin-1.0.6.tgz - :x: **postcss-7.0.25.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/matrix-profile-foundation/matrixprofile-web/commit/4fe35a73b9fb9e1e315e1b790b6e2a4edfd75fcf">4fe35a73b9fb9e1e315e1b790b6e2a4edfd75fcf</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*). <p>Publish Date: 2021-04-26 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p> <p>Release Date: 2021-04-26</p> <p>Fix Resolution (postcss): 7.0.36</p> <p>Direct dependency fix Resolution (@vue/cli-service): 4.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in postcss tgz cve high severity vulnerability vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file mpfrontend package json path to vulnerable library mpfrontend node modules postcss package json dependency hierarchy cli service tgz root library optimize cssnano plugin tgz x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss direct dependency fix resolution vue cli service step up your open source security game with mend
0
611,803
18,981,874,835
IssuesEvent
2021-11-21 02:25:51
ATLauncher/ATLauncher
https://api.github.com/repos/ATLauncher/ATLauncher
closed
Easy use of system installed GLFW/OpenAL
enhancement low-priority
<!-- Have you read the Code of Conduct? By filing an Issue, you are expected to comply with it. Want to ask a question? Looking for support? Our Discord is the best place to get support: https://atl.pw/discord Also please make sure to check for an existing issue: https://github.com/issues?q=is%3Aissue+org%3Aatlauncher --> **Is your feature request related to a problem? Please describe.** Due to a [GLFW bug](https://github.com/glfw/glfw/issues/1112), some launchers allow for loading system libraries rather than ones bundled with Minecraft. **Describe the solution you'd like** The solution is already possible, through Java parameters, though it would be appreciated to have easy buttons for enabling/disabling the feature like MultiMC does. **Describe alternatives you've considered** Adding the Java parameters manually. **Additional context** ![image](https://user-images.githubusercontent.com/49594490/142725839-b6ba0f6f-c0a7-4266-92ba-203255aebed3.png) You can just add those 2 options on the bottom of Java/Minecraft section in settings
1.0
Easy use of system installed GLFW/OpenAL - <!-- Have you read the Code of Conduct? By filing an Issue, you are expected to comply with it. Want to ask a question? Looking for support? Our Discord is the best place to get support: https://atl.pw/discord Also please make sure to check for an existing issue: https://github.com/issues?q=is%3Aissue+org%3Aatlauncher --> **Is your feature request related to a problem? Please describe.** Due to a [GLFW bug](https://github.com/glfw/glfw/issues/1112), some launchers allow for loading system libraries rather than ones bundled with Minecraft. **Describe the solution you'd like** The solution is already possible, through Java parameters, though it would be appreciated to have easy buttons for enabling/disabling the feature like MultiMC does. **Describe alternatives you've considered** Adding the Java parameters manually. **Additional context** ![image](https://user-images.githubusercontent.com/49594490/142725839-b6ba0f6f-c0a7-4266-92ba-203255aebed3.png) You can just add those 2 options on the bottom of Java/Minecraft section in settings
non_test
easy use of system installed glfw openal have you read the code of conduct by filing an issue you are expected to comply with it want to ask a question looking for support our discord is the best place to get support also please make sure to check for an existing issue is your feature request related to a problem please describe due to glfw bug some launchers allow for loading system libraries rather than ones bundled with minecraft describe the solution you d like the solution is already possible through java parameters though it would be appreciated to have easy buttons for enabling disabling the feature like multimc does describe alternatives you ve considered adding the java parameters manually additional context you can just add those options on the bottom of java minecraft section in settings
0
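A hedged note on the manual route the issue mentions: LWJGL 3 reads system properties that point it at alternative native libraries, so JVM arguments along the lines of `-Dorg.lwjgl.glfw.libname=/usr/lib/libglfw.so` and `-Dorg.lwjgl.openal.libname=/usr/lib/libopenal.so` (paths are illustrative and distribution-specific) make the game load the system GLFW/OpenAL instead of the bundled copies; the requested buttons would simply toggle these arguments.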
52,043
10,758,393,758
IssuesEvent
2019-10-31 14:54:45
Cinimex-Informatica/mq-java-exporter
https://api.github.com/repos/Cinimex-Informatica/mq-java-exporter
closed
Metrics endpoint not working
bug code
Hello, I'm using the following: MQ 9.0.4.0 RHEL 7.7 JRE 1.8.0 IBM Linux build pxa6480sr4fp11-20170823_01(SR4 FP11) mq-java-exporter_0.3.1-beta-m.zip Here is my config: > qmgrConnectionParams: qmgrName: IRTEST qmgrHost: servername qmgrPort: 1414 qmgrChannel: SYSTEM.DEF.SVRCONN user: mqm password: XXXXXXX mqscp: false connTimeout: 12000 useTLS: false keystorePath: /opt/mq_exporter/keystores/keystore.jks keystorePassword: testpass2 truststorePath: /opt/mq_exporter/keystores/truststore.jks truststorePassword: testpass2 sslProtocol: TLSv1.2 cipherSuite: TLS_RSA_WITH_AES_256_CBC_SHA256 prometheusEndpointParams: url: /metrics port: 9157 PCFParameters: sendPCFCommands: false usePCFWildcards: false scrapeInterval: 10 queues: listeners: channels: Here is my out file: > 2019-10-30 12:53:57.941 INFO [main] [ru.cinimex.exporter.Config] [<init>] [ru.cinimex.exporter.Config.<init>(Config.java:85)] - Successfully parsed configuration file! 2019-10-30 12:53:58.951 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_current_primary_space_in_use_percentage' from description 'Log - current primary space in use' 2019-10-30 12:53:58.951 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_workload_primary_space_utilization_percentage' from description 'Log - workload primary space utilization' 2019-10-30 12:53:58.952 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_bytes_required_for_media_recovery_megabytes' from description 'Log - bytes required for media recovery' 2019-10-30 12:53:58.952 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_bytes_occupied_by_reusable_extents_megabytes' from description 'Log - bytes occupied by reusable extents' 2019-10-30 12:53:58.952 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_bytes_occupied_by_extents_waiting_to_be_archived_megabytes' from description 'Log - bytes occupied by extents waiting to be archived' 2019-10-30 12:53:58.953 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_write_size_total' from description 'Log - write size' 2019-10-30 12:53:58.959 INFO [main] [ru.cinimex.exporter.prometheus.metrics.MetricsManager] [initMetrics] [ru.cinimex.exporter.prometheus.metrics.MetricsManager.initMetrics(MetricsManager.java:79)] - Successfully initialized 126 metrics! 
2019-10-30 12:53:58.960 INFO [main] [ru.cinimex.exporter.mq.MQSubscriberManager] [runSubscribers] [ru.cinimex.exporter.mq.MQSubscriberManager.runSubscribers(MQSubscriberManager.java:46)] - Launching subscribers... 2019-10-30 12:53:59.011 INFO [main] [ru.cinimex.exporter.mq.MQSubscriberManager] [runSubscribers] [ru.cinimex.exporter.mq.MQSubscriberManager.runSubscribers(MQSubscriberManager.java:62)] - Successfully launched 13 subscribers! 2019-10-30 12:53:59.035 INFO [main] [ru.cinimex.exporter.prometheus.HTTPServer] [<init>] [ru.cinimex.exporter.prometheus.HTTPServer.<init>(HTTPServer.java:46)] - Endpoint /metrics on port 9157 successfully expanded! And when I go the http://servername:9157/metrics endpoint I get this: ![image](https://user-images.githubusercontent.com/25136285/67880698-840f7000-fb15-11e9-85e6-96121ca004ca.png) I'm trying to just get basic metrics - not getting queue or channel or listener or anything else, just some basic metrics to show it's working. It's like prometheus doesn't know how to get the data to the endpoint? Or maybe I'm doing something so obviously dumb and I need someone to point it out. Help?!
1.0
Metrics endpoint not working - Hello, I'm using the following: MQ 9.0.4.0 RHEL 7.7 JRE 1.8.0 IBM Linux build pxa6480sr4fp11-20170823_01(SR4 FP11) mq-java-exporter_0.3.1-beta-m.zip Here is my config: > qmgrConnectionParams: qmgrName: IRTEST qmgrHost: servername qmgrPort: 1414 qmgrChannel: SYSTEM.DEF.SVRCONN user: mqm password: XXXXXXX mqscp: false connTimeout: 12000 useTLS: false keystorePath: /opt/mq_exporter/keystores/keystore.jks keystorePassword: testpass2 truststorePath: /opt/mq_exporter/keystores/truststore.jks truststorePassword: testpass2 sslProtocol: TLSv1.2 cipherSuite: TLS_RSA_WITH_AES_256_CBC_SHA256 prometheusEndpointParams: url: /metrics port: 9157 PCFParameters: sendPCFCommands: false usePCFWildcards: false scrapeInterval: 10 queues: listeners: channels: Here is my out file: > 2019-10-30 12:53:57.941 INFO [main] [ru.cinimex.exporter.Config] [<init>] [ru.cinimex.exporter.Config.<init>(Config.java:85)] - Successfully parsed configuration file! 2019-10-30 12:53:58.951 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_current_primary_space_in_use_percentage' from description 'Log - current primary space in use' 2019-10-30 12:53:58.951 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_workload_primary_space_utilization_percentage' from description 'Log - workload primary space utilization' 2019-10-30 12:53:58.952 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_bytes_required_for_media_recovery_megabytes' from description 'Log - bytes required for media recovery' 2019-10-30 12:53:58.952 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_bytes_occupied_by_reusable_extents_megabytes' from description 'Log - bytes occupied by reusable extents' 2019-10-30 12:53:58.952 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_bytes_occupied_by_extents_waiting_to_be_archived_megabytes' from description 'Log - bytes occupied by extents waiting to be archived' 2019-10-30 12:53:58.953 WARN [main] [ru.cinimex.exporter.prometheus.metrics.MetricsReference] [generateMetricName] [ru.cinimex.exporter.prometheus.metrics.MetricsReference.generateMetricName(MetricsReference.java:284)] - Unknown metric name! Generated new name 'mq_log_write_size_total' from description 'Log - write size' 2019-10-30 12:53:58.959 INFO [main] [ru.cinimex.exporter.prometheus.metrics.MetricsManager] [initMetrics] [ru.cinimex.exporter.prometheus.metrics.MetricsManager.initMetrics(MetricsManager.java:79)] - Successfully initialized 126 metrics! 
2019-10-30 12:53:58.960 INFO [main] [ru.cinimex.exporter.mq.MQSubscriberManager] [runSubscribers] [ru.cinimex.exporter.mq.MQSubscriberManager.runSubscribers(MQSubscriberManager.java:46)] - Launching subscribers... 2019-10-30 12:53:59.011 INFO [main] [ru.cinimex.exporter.mq.MQSubscriberManager] [runSubscribers] [ru.cinimex.exporter.mq.MQSubscriberManager.runSubscribers(MQSubscriberManager.java:62)] - Successfully launched 13 subscribers! 2019-10-30 12:53:59.035 INFO [main] [ru.cinimex.exporter.prometheus.HTTPServer] [<init>] [ru.cinimex.exporter.prometheus.HTTPServer.<init>(HTTPServer.java:46)] - Endpoint /metrics on port 9157 successfully expanded! And when I go to the http://servername:9157/metrics endpoint I get this: ![image](https://user-images.githubusercontent.com/25136285/67880698-840f7000-fb15-11e9-85e6-96121ca004ca.png) I'm trying to just get basic metrics - not getting queue or channel or listener or anything else, just some basic metrics to show it's working. It's like Prometheus doesn't know how to get the data to the endpoint? Or maybe I'm doing something so obviously dumb and I need someone to point it out. Help?!
non_test
metrics endpoint not working hello i m using the following mq rhel jre ibm linux build mq java exporter beta m zip here is my config qmgrconnectionparams qmgrname irtest qmgrhost servername qmgrport qmgrchannel system def svrconn user mqm password xxxxxxx mqscp false conntimeout usetls false keystorepath opt mq exporter keystores keystore jks keystorepassword truststorepath opt mq exporter keystores truststore jks truststorepassword sslprotocol ciphersuite tls rsa with aes cbc prometheusendpointparams url metrics port pcfparameters sendpcfcommands false usepcfwildcards false scrapeinterval queues listeners channels here is my out file info successfully parsed configuration file warn unknown metric name generated new name mq log current primary space in use percentage from description log current primary space in use warn unknown metric name generated new name mq log workload primary space utilization percentage from description log workload primary space utilization warn unknown metric name generated new name mq log bytes required for media recovery megabytes from description log bytes required for media recovery warn unknown metric name generated new name mq log bytes occupied by reusable extents megabytes from description log bytes occupied by reusable extents warn unknown metric name generated new name mq log bytes occupied by extents waiting to be archived megabytes from description log bytes occupied by extents waiting to be archived warn unknown metric name generated new name mq log write size total from description log write size info successfully initialized metrics info launching subscribers info successfully launched subscribers info endpoint metrics on port successfully expanded and when i go to the endpoint i get this i m trying to just get basic metrics not getting queue or channel or listener or anything else just some basic metrics to show it s working it s like prometheus doesn t know how to get the data to the endpoint or maybe i m doing something so obviously dumb and i need someone to point it out help
0
44,562
5,633,661,510
IssuesEvent
2017-04-05 19:25:52
brave/browser-laptop
https://api.github.com/repos/brave/browser-laptop
opened
Replace data-test-id with testId in <Button>
automated-tests misc/button
**Describe the issue you encountered:** Replace data-test-id with testId in <Button>. - Any related issues:
1.0
Replace data-test-id with testId in <Button> - **Describe the issue you encountered:** Replace data-test-id with testId in <Button>. - Any related issues:
test
replace data test id with testid in describe the issue you encountered replace data test id with testid in any related issues
1
7,193
3,518,952,351
IssuesEvent
2016-01-12 15:09:19
cogneco/ooc-kean
https://api.github.com/repos/cogneco/ooc-kean
closed
Using FloatPoint2DVectorList
code quality
Although we have a specialized `FloatPoint2DVectorList`, there are many places where we use a `VectorList<FloatPoint2D>`. There might be some places that warrant this, but on the whole we should look at switching to the former, which might allow us to reuse some code and clean up the classes that use it. An example is `FloatConvexHull2D`.
1.0
Using FloatPoint2DVectorList - Although we have a specialized `FloatPoint2DVectorList`, there are many places where we use a `VectorList<FloatPoint2D>`. There might be some places that warrant this, but on the whole we should look at switching to the former, which might allow us to reuse some code and clean up the classes that use it. An example is `FloatConvexHull2D`.
non_test
using although we have a specialized there are many places where we use a vectorlist there might be some places that warrant this but on the whole we should look at switching to the former which might allow us to reuse some code and clean up the classes that use it an example is
0
251,625
21,514,294,174
IssuesEvent
2022-04-28 08:28:27
meshery/meshery
https://api.github.com/repos/meshery/meshery
closed
[mesheryctl] `utils` package unit testing
kind/enhancement help wanted issue/stale component/mesheryctl language/go area/tests issue/remind
<!-- Please update the mesheryctl Command Tracker spreadsheet --> _See [mesheryctl Command Tracker](https://bit.ly/3dqXy1q) for current status of commands._ #### Desired Behavior <!-- A brief description of the enhancement. --> Required to enable unit testing support for mesheryctl `utils` package functions, to ensure the accuracy and robustness of each mesheryctl release, [here](https://github.com/meshery/meshery/tree/master/mesheryctl/pkg/utils) #### Mesheryctl Unit Testing - Write tests using Golang’s standard library. - A combination of CodeCov and GitHub Actions are to be used as mainstays in the approach to unit testing - https://github.com/codecov/codecov-action. #### Files to be covered - [ ] auth.go (Blakelist7 ) - [x] healthcheck.go - [x] helpers.go (aayushmau5) - [x] platform.go (krithikvaidya ) - [ ] script.go - [x] sse-client.go (krithikvaidya ) --- #### Contributor Resources [mesheryctl Contributing Guide](https://github.com/meshery/meshery/blob/master/mesheryctl/README.md) [Beginner's guide to contributing to Meshery and mesheryctl](https://www.youtube.com/watch?v=hh_kFLZx3G4&ab_channel=Layer5) [mesheryctl Command Tracker](https://docs.google.com/spreadsheets/d/1q63sIGAuCnIeDs8PeM-0BAkNj8BBgPUXhLbe1Y-318o/edit#gid=0) [Meshery CLI Commands and Documentation](https://docs.google.com/document/d/1xRlFpElRmybJ3WacgPKXgCSiQ2poJl3iCCV1dAalf0k/edit#heading=h.9jjevr1clxv0) [Layer5 Community Slack](https://layer5io.slack.com/ssb/redirect#/shared-invite/email)
1.0
[mesheryctl] `utils` package unit testing - <!-- Please update the mesheryctl Command Tracker spreadsheet --> _See [mesheryctl Command Tracker](https://bit.ly/3dqXy1q) for current status of commands._ #### Desired Behavior <!-- A brief description of the enhancement. --> Required to enable unit testing support for mesheryctl `utils` package functions, to ensure the accuracy and robustness of each mesheryctl release, [here](https://github.com/meshery/meshery/tree/master/mesheryctl/pkg/utils) #### Mesheryctl Unit Testing - Write tests using Golang’s standard library. - A combination of CodeCov and GitHub Actions are to be used as mainstays in the approach to unit testing - https://github.com/codecov/codecov-action. #### Files to be covered - [ ] auth.go (Blakelist7 ) - [x] healthcheck.go - [x] helpers.go (aayushmau5) - [x] platform.go (krithikvaidya ) - [ ] script.go - [x] sse-client.go (krithikvaidya ) --- #### Contributor Resources [mesheryctl Contributing Guide](https://github.com/meshery/meshery/blob/master/mesheryctl/README.md) [Beginner's guide to contributing to Meshery and mesheryctl](https://www.youtube.com/watch?v=hh_kFLZx3G4&ab_channel=Layer5) [mesheryctl Command Tracker](https://docs.google.com/spreadsheets/d/1q63sIGAuCnIeDs8PeM-0BAkNj8BBgPUXhLbe1Y-318o/edit#gid=0) [Meshery CLI Commands and Documentation](https://docs.google.com/document/d/1xRlFpElRmybJ3WacgPKXgCSiQ2poJl3iCCV1dAalf0k/edit#heading=h.9jjevr1clxv0) [Layer5 Community Slack](https://layer5io.slack.com/ssb/redirect#/shared-invite/email)
test
utils package unit testing see for current status of commands desired behavior required to enable unit testing support for mesheryctl utils package functions to ensure the accuracy and robustness of each mesheryctl release mesheryctl unit testing write tests using golang’s standard library a combination of codecov and github actions are to be used as mainstays in the approach to unit testing files to be covered auth go healthcheck go helpers go platform go krithikvaidya script go sse client go krithikvaidya contributor resources
1
9,509
2,906,234,587
IssuesEvent
2015-06-19 08:40:19
ramu2016/SXUMX357FFAWEJHBMNM54FBQ
https://api.github.com/repos/ramu2016/SXUMX357FFAWEJHBMNM54FBQ
closed
makVaeBtdwYd0Tbhd/qCuQRFK7QDsaw0ZvoXV0C7S00l+JOGL/xmRRElxehDhIH/9pSsOlUoyr6CdWixva3dNYs7tIe1pnHP1EHZ20QxGo5DHf0oEt3GhKphHIjtkmRqxRbdP6Ux1V5oQqlpHwjL5RDJEtUX5oxEUip5ra13heo=
design
6833zypfacinqTzoxeoS0VRw+4Huv6G/mKlYHqSeBbZu+HNzFwSxDk1pw/z4LFlqMNPHhTmUVCrEcFvXnD/H0CE46aueE5GNeBTsrO3oTVRDx5hdMJKkTkoCLzKi/qX+swlE+6yNrV9ZI5moGFfoTNWkWdZ2dwRA2kvXb3XxqUc3kYWBRef0H6nwEtGRZ85cbF8h+6+qYJAlO9MzbejkZdgFPWL25j0UUHYp2sFivlzLFSMmYi+LZqN/eeOLyGtc73KVQ1pDvNVohcYnLC4n1CTQ03EecHUrJSErqBYEQncxK8B6Mh8b/EcFBDBsQLbP7puohorj/StkEPYFQYrkxgK16hZTA1RYJ0y+vsm3ufGkjCkSJTcYeBp8LsvuzROplZ+yhEtBry+mTKsUrJX0ZIvxsSjGzIElP2qkS5O85BbEzC06eal8cy0VVzn++sMD8NkF/nOeGU4v7YRB+/q9Sfoz6U4nFRaLw0SLnZSn6/iIyWbWN511IyyGHgAg/nAi38BeRC5mf2ZsE6zb0THsNwR0XuXB7coXs9f0d2FsoY4FDHhwMydZdcaF9R/wq7Uoeq2Jg03GORJD6Afz7Ij4hw==
1.0
makVaeBtdwYd0Tbhd/qCuQRFK7QDsaw0ZvoXV0C7S00l+JOGL/xmRRElxehDhIH/9pSsOlUoyr6CdWixva3dNYs7tIe1pnHP1EHZ20QxGo5DHf0oEt3GhKphHIjtkmRqxRbdP6Ux1V5oQqlpHwjL5RDJEtUX5oxEUip5ra13heo= - 6833zypfacinqTzoxeoS0VRw+4Huv6G/mKlYHqSeBbZu+HNzFwSxDk1pw/z4LFlqMNPHhTmUVCrEcFvXnD/H0CE46aueE5GNeBTsrO3oTVRDx5hdMJKkTkoCLzKi/qX+swlE+6yNrV9ZI5moGFfoTNWkWdZ2dwRA2kvXb3XxqUc3kYWBRef0H6nwEtGRZ85cbF8h+6+qYJAlO9MzbejkZdgFPWL25j0UUHYp2sFivlzLFSMmYi+LZqN/eeOLyGtc73KVQ1pDvNVohcYnLC4n1CTQ03EecHUrJSErqBYEQncxK8B6Mh8b/EcFBDBsQLbP7puohorj/StkEPYFQYrkxgK16hZTA1RYJ0y+vsm3ufGkjCkSJTcYeBp8LsvuzROplZ+yhEtBry+mTKsUrJX0ZIvxsSjGzIElP2qkS5O85BbEzC06eal8cy0VVzn++sMD8NkF/nOeGU4v7YRB+/q9Sfoz6U4nFRaLw0SLnZSn6/iIyWbWN511IyyGHgAg/nAi38BeRC5mf2ZsE6zb0THsNwR0XuXB7coXs9f0d2FsoY4FDHhwMydZdcaF9R/wq7Uoeq2Jg03GORJD6Afz7Ij4hw==
non_test
jogl xmrrelxehdhih mklyhqsebbzu qx swle lzqn yhetbry
0
460,324
13,208,189,110
IssuesEvent
2020-08-15 02:59:25
TeamSTEP/Catch.ioProjectBoard
https://api.github.com/repos/TeamSTEP/Catch.ioProjectBoard
opened
create Village map area
Mid Priority add feature
## Feature Subtasks - [ ] create a village base on a new game scene - [ ] add village house walls and doors - [ ] add roofs on a different layer (so you can disable the roof layer to see the house interior) - [ ] design interiors (it should be big enough for the player character to move inside it) - [ ] ## Description As described here https://hoonsubin.gitbook.io/catch-io-design-doc/game-design-and-balancing#map-layout-and-areas, the main game stage will be made out of focused areas. This task is concerned with adding the village area. Every house should be big enough for the player to go inside and villages should also have a lot of light sources. ![7b2a1c72919e7768f272f88ea557bafd](https://user-images.githubusercontent.com/40356749/90304036-b5afc700-deee-11ea-86d8-033353d0db47.jpg) ![7z744701ji721](https://user-images.githubusercontent.com/40356749/90304039-b8122100-deee-11ea-8d70-f471b1d09171.png) ![333121](https://user-images.githubusercontent.com/40356749/90304042-ba747b00-deee-11ea-891d-b73377ecda37.jpg) ![dFslcZ](https://user-images.githubusercontent.com/40356749/90304044-be080200-deee-11ea-8d2e-9038a0ee501a.png) ## Difficulty 3/10 ## Estimated Implementation Time - Optimistic - 4 days - Normal - 1 week - Pessimistic - 2 weeks ## Work Start Date August 16th.
1.0
create Village map area - ## Feature Subtasks - [ ] create a village base on a new game scene - [ ] add village house walls and doors - [ ] add roofs on a different layer (so you can disable the roof layer to see the house interior) - [ ] design interiors (it should be big enough for the player character to move inside it) - [ ] ## Description As described here https://hoonsubin.gitbook.io/catch-io-design-doc/game-design-and-balancing#map-layout-and-areas, the main game stage will be made out of focused areas. This task is concerned with adding the village area. Every house should be big enough for the player to go inside and villages should also have a lot of light sources. ![7b2a1c72919e7768f272f88ea557bafd](https://user-images.githubusercontent.com/40356749/90304036-b5afc700-deee-11ea-86d8-033353d0db47.jpg) ![7z744701ji721](https://user-images.githubusercontent.com/40356749/90304039-b8122100-deee-11ea-8d70-f471b1d09171.png) ![333121](https://user-images.githubusercontent.com/40356749/90304042-ba747b00-deee-11ea-891d-b73377ecda37.jpg) ![dFslcZ](https://user-images.githubusercontent.com/40356749/90304044-be080200-deee-11ea-8d2e-9038a0ee501a.png) ## Difficulty 3/10 ## Estimated Implementation Time - Optimistic - 4 days - Normal - 1 week - Pessimistic - 2 weeks ## Work Start Date August 16th.
non_test
create village map area feature subtasks create a village base on a new game scene add village house walls and doors add roofs on a different layer so you can disable the roof layer to see the house interior design interiors it should be big enough for the player character to move inside it description as described here the main game stage will be made out of focused areas this task is concerned with adding the village area every house should be big enough for the player to go inside and villages should also have a lot of light sources difficulty estimated implementation time optimistic days normal week pessimistic weeks work start date august
0
5,776
8,221,346,542
IssuesEvent
2018-09-06 01:25:59
Railcraft/Railcraft
https://api.github.com/repos/Railcraft/Railcraft
closed
Feed Station & Modded animals compatibility
mod compatibility not railcraft
I have encountered a problem with breeding modified animals from the [ImprovingMinecraft](https://minecraft.curseforge.com/projects/improving-minecraft) modification (with enabled `animals_genetic_evolution` option): all newborn entities lose their information about the generation and go wild (aggressive and unprofitable). This is an important point of the mechanics of modification and this behavior creates difficulties. Having little experience in programming, I decided to understand what was happening and to examine the Feed station code, after which I came to the conclusion that all processes are only simulated and no vanilla reproduction actually takes place. It spawns a completely new entity with standard values, just from the same instance. Consequently, the issue is not on the IMC side, therefore I am writing here. IMC 1.12.1 Railcraft 10.2.0 Manually: ![2018-03-14_02 42 55](https://user-images.githubusercontent.com/8511068/37368802-762beef0-2731-11e8-86a1-6d7df262d296.png) From feed station: ![2018-03-14_02 42 49](https://user-images.githubusercontent.com/8511068/37368801-75fe1aca-2731-11e8-9c43-ea08f175224e.png)
True
Feed Station & Modded animals compatibility - I have encountered a problem with breeding modified animals from the [ImprovingMinecraft](https://minecraft.curseforge.com/projects/improving-minecraft) modification (with enabled `animals_genetic_evolution` option): all newborn entities lose their information about the generation and go wild (aggressive and unprofitable). This is an important point of the mechanics of modification and this behavior creates difficulties. Having little experience in programming, I decided to understand what was happening and to examine the Feed station code, after which I came to the conclusion that all processes are only simulated and no vanilla reproduction actually takes place. It spawns a completely new entity with standard values, just from the same instance. Consequently, the issue is not on the IMC side, therefore I am writing here. IMC 1.12.1 Railcraft 10.2.0 Manually: ![2018-03-14_02 42 55](https://user-images.githubusercontent.com/8511068/37368802-762beef0-2731-11e8-86a1-6d7df262d296.png) From feed station: ![2018-03-14_02 42 49](https://user-images.githubusercontent.com/8511068/37368801-75fe1aca-2731-11e8-9c43-ea08f175224e.png)
non_test
feed station modded animals compatibility i have encountered a problem with breeding modified animals from the modification with enabled animals genetic evolution option all newborn entities lose their information about the generation and go wild aggressive and unprofitable this is an important point of the mechanics of modification and this behavior creates difficulties having little experience in programming i decided to understand what was happening and to examine the feed station code after which i came to the conclusion that all processes are only simulated and no vanilla reproduction actually takes place it spawns a completely new entity with standard values just from the same instance consequently the issue is not on the imc side therefore i am writing here imc railcraft manually from feed station
0
2,261
2,712,622,305
IssuesEvent
2015-04-09 14:48:24
interfasys/galleryplus
https://api.github.com/repos/interfasys/galleryplus
opened
Don't reset the public gallery position when finished watching the slideshow
bug coder wanted
As a logged in user, when you quit the slideshow, you're back to where you were in the gallery when you clicked on an image. Unfortunately, this is not the case for public galleries. Since they use different templates, there must be something missing from the public template.
1.0
Don't reset the public gallery position when finished watching the slideshow - As a logged in user, when you quit the slideshow, you're back to where you were in the gallery when you clicked on an image. Unfortunately, this is not the case for public galleries. Since they use different templates, there must be something missing from the public template.
non_test
don t reset the public gallery position when finished watching the slideshow as a logged in user when you quit the slideshow you re back to where you were in the gallery when you clicked on an image unfortunately this is not the case for public galleries since they use different templates there must be something missing from the public template
0
447,786
12,893,252,809
IssuesEvent
2020-07-13 21:13:50
bengibaykal/swe574group1
https://api.github.com/repos/bengibaykal/swe574group1
opened
Azure subscription disabled and server stopped working - should be restarted
Backend Priority : High
1. Azure server stopped working. The issue has been reported. ![image](https://user-images.githubusercontent.com/3228918/87353973-75af9a00-c566-11ea-8de8-5d5e9710ebc9.png) ![image](https://user-images.githubusercontent.com/3228918/87354129-b8717200-c566-11ea-951b-be78c8be1721.png) ![image](https://user-images.githubusercontent.com/3228918/87354029-8e1fb480-c566-11ea-9842-bd56c2299fe9.png) Subscription disabled: ![image](https://user-images.githubusercontent.com/3228918/87354073-9d066700-c566-11ea-8f43-00d151f72690.png) A ticket has been opened ![image](https://user-images.githubusercontent.com/3228918/87354102-ab548300-c566-11ea-9ca2-9bf28fc8943d.png)
1.0
Azure subscription disabled and server stopped working - should be restarted - 1. Azure server stopped working. The issue has been reported. ![image](https://user-images.githubusercontent.com/3228918/87353973-75af9a00-c566-11ea-8de8-5d5e9710ebc9.png) ![image](https://user-images.githubusercontent.com/3228918/87354129-b8717200-c566-11ea-951b-be78c8be1721.png) ![image](https://user-images.githubusercontent.com/3228918/87354029-8e1fb480-c566-11ea-9842-bd56c2299fe9.png) Subscription disabled: ![image](https://user-images.githubusercontent.com/3228918/87354073-9d066700-c566-11ea-8f43-00d151f72690.png) A ticket has been opened ![image](https://user-images.githubusercontent.com/3228918/87354102-ab548300-c566-11ea-9ca2-9bf28fc8943d.png)
non_test
azure subscription disabled and server stopped working should be restarted azure server stopped working the issue has been reported subscription disabled a ticket has been opened
0
228,613
18,244,721,631
IssuesEvent
2021-10-01 16:49:18
ValveSoftware/Proton
https://api.github.com/repos/ValveSoftware/Proton
closed
Doom (2016) has no audio after id Software logo (379720)
Need Retest XAudio2 Whitelist Update Request
# Compatibility Report - Name of the game with compatibility issues: Doom (2016) - Steam AppID of the game: 379720 ## System Information - GPU: GTX 1060 6GB - Driver/LLVM version: nvidia 440.44-3 - Kernel version: 5.4.4.arch1-1 - [Link to full system information report as Gist](https://gist.github.com/PopeRigby/52e226ed161b375a7ce2881ae6762854) - Proton version: 4.11-9 <!-- Please add `PROTON_LOG=1 %command% ` to the game's launch options and drag and drop the generated `$HOME/steam-$APPID.log` into this issue report --> ## Symptoms <!-- What's the problem? --> After starting Doom (2016), the Bethesda logo and its accompanying sound play; after that the id Software logo with its accompanying sound plays about half-way through, before the audio cuts out. Then the audio never comes back, even in-game. <!-- 1. You can find the Steam AppID in the URL of the shop page of the game. e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`. 2. You can find your driver and Linux version, as well as your graphics processor's name in the system information report of Steam. 3. You can retrieve a full system information report by clicking `Help` > `System Information` in the Steam client on your machine. 4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`. Then paste it in a [Gist](https://gist.github.com/) and post the link in this issue. 5. Please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above. --> [Proton log](https://github.com/PopeRigby/protonlog)
1.0
Doom (2016) has no audio after id Software logo (379720) - # Compatibility Report - Name of the game with compatibility issues: Doom (2016) - Steam AppID of the game: 379720 ## System Information - GPU: GTX 1060 6GB - Driver/LLVM version: nvidia 440.44-3 - Kernel version: 5.4.4.arch1-1 - [Link to full system information report as Gist](https://gist.github.com/PopeRigby/52e226ed161b375a7ce2881ae6762854) - Proton version: 4.11-9 <!-- Please add `PROTON_LOG=1 %command% ` to the game's launch options and drag and drop the generated `$HOME/steam-$APPID.log` into this issue report --> ## Symptoms <!-- What's the problem? --> After starting Doom (2016), the Bethesda logo and its accompanying sound play; after that the id Software logo with its accompanying sound plays about half-way through, before the audio cuts out. Then the audio never comes back, even in-game. <!-- 1. You can find the Steam AppID in the URL of the shop page of the game. e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`. 2. You can find your driver and Linux version, as well as your graphics processor's name in the system information report of Steam. 3. You can retrieve a full system information report by clicking `Help` > `System Information` in the Steam client on your machine. 4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`. Then paste it in a [Gist](https://gist.github.com/) and post the link in this issue. 5. Please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above. --> [Proton log](https://github.com/PopeRigby/protonlog)
test
doom has no audio after id software logo compatibility report name of the game with compatibility issues doom steam appid of the game system information gpu gtx driver llvm version nvidia kernel version proton version please add proton log command to the game s launch options and drag and drop the generated home steam appid log into this issue report symptoms after starting doom the bethesda logo and its accompanying sound play after that the id software logo with its accompanying sound plays about half way through before the audio cuts out then the audio never comes back even in game you can find the steam appid in the url of the shop page of the game e g for the witcher wild hunt the appid is you can find your driver and linux version as well as your graphics processor s name in the system information report of steam you can retrieve a full system information report by clicking help system information in the steam client on your machine please copy it to your clipboard by pressing ctrl a and then ctrl c then paste it in a and post the link in this issue please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above
1
77,865
9,634,332,083
IssuesEvent
2019-05-15 20:57:32
MetaMask/metamask-extension
https://api.github.com/repos/MetaMask/metamask-extension
closed
Improve ENS Address Input
L03-UI L20-ENS N00-needsDesign
## Current behavior: If you enter an ENS name that is valid, it is instantly replaced with the actual address. ## Expected behavior: Correct names should be detected & resolved, but should not replace the displayed typed name, the same way the URL bar doesn't replace the domain with the IP address. ## Reproduction: - Enter a valid ENS name like `dinodan.eth`. - Notice that it is instantly replaced with the resolved address, and you can't easily see what name you had entered. - Try entering a name like `dan.eth.myspecialaddress.eth`. You can't, because it eager-resolves at the first `.eth`. ## Desired results: - Like in old-UI, correct resolution should indicate correct resolution, and should make the resolved address available to the user (either visually or via copy). - The user entered text should be left alone.
1.0
Improve ENS Address Input - ## Current behavior: If you enter an ENS name that is valid, it is instantly replaced with the actual address. ## Expected behavior: Correct names should be detected & resolved, but should not replace the displayed typed name, the same way the URL bar doesn't replace the domain with the IP address. ## Reproduction: - Enter a valid ENS name like `dinodan.eth`. - Notice that it is instantly replaced with the resolved address, and you can't easily see what name you had entered. - Try entering a name like `dan.eth.myspecialaddress.eth`. You can't, because it eager-resolves at the first `.eth`. ## Desired results: - Like in old-UI, correct resolution should indicate correct resolution, and should make the resolved address available to the user (either visually or via copy). - The user entered text should be left alone.
non_test
improve ens address input current behavior if you enter an ens name that is valid it is instantly replaced with the actual address expected behavior correct names should be detected resolved but should not replace the displayed typed name the same way the url bar doesn t replace the domain with the ip address reproduction enter a valid ens name like dinodan eth notice that it is instantly replaced with the resolved address and you can t easily see what name you had entered try entering a name like dan eth myspecialaddress eth you can t because it eager resolves at the first eth desired results like in old ui correct resolution should indicate correct resolution and should make the resolved address available to the user either visually or via copy the user entered text should be left alone
0
251,843
18,976,697,182
IssuesEvent
2021-11-20 04:54:28
shift-dominicana/WaltCommerce
https://api.github.com/repos/shift-dominicana/WaltCommerce
opened
Diagram Entity - Relation Extended
documentation
This is the same ER diagram but with the fields and attributes of the table.
1.0
Diagram Entity - Relation Extended - This is the same ER diagram but with the fields and attributes of the table.
non_test
diagram entity relation extended this is the same er diagram but with the fields and attributes of the table
0
38,127
2,839,324,433
IssuesEvent
2015-05-27 13:13:00
handsontable/handsontable
https://api.github.com/repos/handsontable/handsontable
closed
OnChange Event Issue
Assistance needed Priority: low
Hi Team, Firstly, thanks for the wonderful tool. I have a query regarding Handsontable. I have two dropdown/autocomplete controls in a table; on change of any item in one dropdown I would like to populate the other, and vice versa. Can you please help me with how to do that? I spent almost 2 days trying to figure it out but made no progress. Thanks in advance. Regards, Sriram
1.0
OnChange Event Issue - Hi Team, Firstly, thanks for the wonderful tool. I have a query regarding Handsontable. I have two dropdown/autocomplete controls in a table; on change of any item in one dropdown I would like to populate the other, and vice versa. Can you please help me with how to do that? I spent almost 2 days trying to figure it out but made no progress. Thanks in advance. Regards, Sriram
non_test
onchange event issue hi team firstly thanks for the wonderful tool i have a query regarding handsontable i have two dropdown autocomplete controls in a table on change of any item in one dropdown i would like to populate the other and vice versa can you please help me with how to do that i spent almost days trying to figure it out but made no progress thanks in advance regards sriram
0
23,527
7,341,732,438
IssuesEvent
2018-03-07 03:45:06
savoirfairelinux/opendht
https://api.github.com/repos/savoirfairelinux/opendht
closed
./configure: line 17172: syntax error near unexpected token `PKG_CHECK_MODULES'
build question
When I run "./configure", there is an error: ./configure: line 17172: syntax error near unexpected token `PKG_CHECK_MODULES' ./configure: line 17172: `PKG_CHECK_MODULES(Nettle, nettle >= 2.4)' And I have installed pkg-config. I am compiling it on Ubuntu 16.04.
1.0
./configure: line 17172: syntax error near unexpected token `PKG_CHECK_MODULES' - When I run "./configure", there is an error: ./configure: line 17172: syntax error near unexpected token `PKG_CHECK_MODULES' ./configure: line 17172: `PKG_CHECK_MODULES(Nettle, nettle >= 2.4)' And I have installed pkg-config. I am compiling it on Ubuntu 16.04.
non_test
configure line syntax error near unexpected token pkg check modules when i run configure there is an error configure line syntax error near unexpected token pkg check modules configure line pkg check modules nettle nettle and i have installed pkg config i am compiling it on ubuntu
0
62,463
6,797,427,972
IssuesEvent
2017-11-01 22:51:31
learn-co-curriculum/react-rendering
https://api.github.com/repos/learn-co-curriculum/react-rendering
closed
ReferenceError: Unknown plugin "transform-react-inline-elements"
Product Test
**The tests do not run when I write npm test.**
1.0
ReferenceError: Unknown plugin "transform-react-inline-elements" - **The tests do not run when I write npm test.**
test
referenceerror unknown plugin transform react inline elements the tests do not run when i write npm test
1
87,148
8,065,519,911
IssuesEvent
2018-08-04 02:37:30
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
go-bindata library issue
lifecycle/rotten sig/testing
It was noted on twitter (https://twitter.com/francesc/status/961249107020001280) that the author of go-bindata (https://github.com/jteeuwen/go-bindata) deleted their account, and someone new created an account under the same name to re-prop up the repo. We should probably review the status of this library (currently vendored in k/k) and decide whether we want to keep using it, fork it, etc. https://github.com/kubernetes/kubernetes/commits/master/vendor/github.com/jteeuwen/go-bindata cc: @sttts @thockin @mikedanese
1.0
go-bindata library issue - It was noted on twitter (https://twitter.com/francesc/status/961249107020001280) that the author of go-bindata (https://github.com/jteeuwen/go-bindata) deleted their account, and someone new created an account under the same name to re-prop up the repo. We should probably review the status of this library (currently vendored in k/k) and decide whether we want to keep using it, fork it, etc. https://github.com/kubernetes/kubernetes/commits/master/vendor/github.com/jteeuwen/go-bindata cc: @sttts @thockin @mikedanese
test
go bindata library issue it was noted on twitter that the author of go bindata deleted their account and someone new created an account under the same name to re prop up the repo we should probably review the status of this library currently vendored in k k and decide whether we want to keep using it fork it etc cc sttts thockin mikedanese
1
71,927
8,690,680,699
IssuesEvent
2018-12-03 22:18:39
LLK/scratch-www
https://api.github.com/repos/LLK/scratch-www
closed
UX— Comment actions for Regular Scratch users
design
## Overview These drawings show how **Regular Scratch users** use REPORT, DELETE on comments. Right now, I'm showing how behavior currently works in 2.0. These UX flows might change. **Related links:** [Comment states overview ](https://github.com/LLK/scratch-www/issues/2063)[Comment actions for Scratch admins](https://github.com/LLK/scratch-www/issues/2080) ## Questions **Things to design:** * An interstitial way to communicate that your comment has been reported, even if it looks normal on refresh? **Edge cases to discuss:** * Users reporting a comment and understanding that it
won’t immediately be hidden (unless they are the 3rd reporter) --- ## Designs (in progress) > **Note:** In these drawings, "Pizzacat" represents the regular Scratcher who is logged in. <img width="60" alt="screen shot 2018-09-13 at 1 40 01 pm" src="https://user-images.githubusercontent.com/8203939/45505406-8d474d00-b75a-11e8-9a0d-9586a9c81769.png"> > **Note:** In reality, actions should only appear on hover. (In mockups, they appear on every comment) <img width="200" alt="screen shot 2018-09-13 at 1 41 24 pm" src="https://user-images.githubusercontent.com/8203939/45505530-d0092500-b75a-11e8-93f1-ffe5edbe2f41.png"> --- ### 1. A regular user can report all comments in any space. (except for their own comment)
• does NOT change comment visibility
• creates ticket in moderation queue > Note: Reported comment stays when it's not your own profile or project. ![regular report 2](https://user-images.githubusercontent.com/8203939/45506166-743f9b80-b75c-11e8-91d2-fcb6f8af5186.png) ![regular report 3](https://user-images.githubusercontent.com/8203939/45506167-743f9b80-b75c-11e8-9639-551dfe9067ff.png) ![regular report 3-1](https://user-images.githubusercontent.com/8203939/45629330-267ca900-ba64-11e8-8064-9dc30a9f479b.png) ![regular report 9](https://user-images.githubusercontent.com/8203939/45629478-85422280-ba64-11e8-86b3-50b637d7295f.png) When the page is refreshed, the comment will still be visible to the community. Styling returns to normal. Not the most ideal behavior... Still, changing comment state after page refresh is not something we can address right now. ![regular report 5](https://user-images.githubusercontent.com/8203939/45506169-74d83200-b75c-11e8-99dc-2a518296826e.png) --- ### 2. A regular user can report **_+ remove_** all comments in their own space. (except for their own comment) • changes visibility: community can't see
• creates ticket in moderation queue > Note: Reported comment DISAPPEARS when it is your profile or project. ![regular report 7](https://user-images.githubusercontent.com/8203939/45626437-9176b180-ba5d-11e8-9e75-c6976be69115.png) ![regular report 8](https://user-images.githubusercontent.com/8203939/45506878-84587a80-b75e-11e8-9863-6660c2acf93f.png) Messaging is different here, because the comment is going to be automatically removed. Is this too much overhead/messaging? ![regular report 8-1](https://user-images.githubusercontent.com/8203939/45629486-88d5a980-ba64-11e8-934c-78ed96bb751e.png) I also wonder if we should just remove the comment, like we do for delete. That could be misleading? But keeping the comment around might also be confusing... ![regular report 9](https://user-images.githubusercontent.com/8203939/45629478-85422280-ba64-11e8-86b3-50b637d7295f.png) When the page is refreshed.... the comment will be hidden from the community. ![regular report 10](https://user-images.githubusercontent.com/8203939/45506880-84587a80-b75e-11e8-95f4-5f25c9fb0413.png) --- ### 3. A regular user can delete all comments in their own space. • changes visibility: community can't see ![regular delete 2](https://user-images.githubusercontent.com/8203939/45506651-e9f83700-b75d-11e8-92d7-10f43cf90fe3.png) ![regular delete 3](https://user-images.githubusercontent.com/8203939/45506652-e9f83700-b75d-11e8-9da0-73b079efb99a.png) ![regular delete 4](https://user-images.githubusercontent.com/8203939/45506653-e9f83700-b75d-11e8-948b-6fbe50471363.png)
1.0
UX— Comment actions for Regular Scratch users - ## Overview These drawings show how **Regular Scratch users** use REPORT, DELETE on comments. Right now, I'm showing how behavior currently works in 2.0. These UX flows might change. **Related links:** [Comment states overview ](https://github.com/LLK/scratch-www/issues/2063)[Comment actions for Scratch admins](https://github.com/LLK/scratch-www/issues/2080) ## Questions **Things to design:** * An interstitial way to communicate that your comment has been reported, even if it looks normal on refresh? **Edge cases to discuss:** * Users reporting a comment and understanding that it
won’t immediately be hidden (unless they are the 3rd reporter) --- ## Designs (in progress) > **Note:** In these drawings, "Pizzacat" represents the regular Scratcher who is logged in. <img width="60" alt="screen shot 2018-09-13 at 1 40 01 pm" src="https://user-images.githubusercontent.com/8203939/45505406-8d474d00-b75a-11e8-9a0d-9586a9c81769.png"> > **Note:** In reality, actions should only appear on hover. (In mockups, they appear on every comment) <img width="200" alt="screen shot 2018-09-13 at 1 41 24 pm" src="https://user-images.githubusercontent.com/8203939/45505530-d0092500-b75a-11e8-93f1-ffe5edbe2f41.png"> --- ### 1. A regular user can report all comments in any space. (except for their own comment)
• does NOT change comment visibility
• creates ticket in moderation queue > Note: Reported comment stays when it's not your own profile or project. ![regular report 2](https://user-images.githubusercontent.com/8203939/45506166-743f9b80-b75c-11e8-91d2-fcb6f8af5186.png) ![regular report 3](https://user-images.githubusercontent.com/8203939/45506167-743f9b80-b75c-11e8-9639-551dfe9067ff.png) ![regular report 3-1](https://user-images.githubusercontent.com/8203939/45629330-267ca900-ba64-11e8-8064-9dc30a9f479b.png) ![regular report 9](https://user-images.githubusercontent.com/8203939/45629478-85422280-ba64-11e8-86b3-50b637d7295f.png) When the page is refreshed, the comment will still be visible to the community. Styling returns to normal. Not the most ideal behavior... Still, changing comment state after page refresh is not something we can address right now. ![regular report 5](https://user-images.githubusercontent.com/8203939/45506169-74d83200-b75c-11e8-99dc-2a518296826e.png) --- ### 2. A regular user can report **_+ remove_** all comments in their own space. (except for their own comment) • changes visibility: community can't see
• creates ticket in moderation queue > Note: Reported comment DISAPPEARS when it is your profile or project. ![regular report 7](https://user-images.githubusercontent.com/8203939/45626437-9176b180-ba5d-11e8-9e75-c6976be69115.png) ![regular report 8](https://user-images.githubusercontent.com/8203939/45506878-84587a80-b75e-11e8-9863-6660c2acf93f.png) Messaging is different here, because the comment is going to be automatically removed. Is this too much overhead/messaging? ![regular report 8-1](https://user-images.githubusercontent.com/8203939/45629486-88d5a980-ba64-11e8-934c-78ed96bb751e.png) I also wonder if we should just remove the comment, like we do for delete. That could be misleading? But keeping the comment around might also be confusing... ![regular report 9](https://user-images.githubusercontent.com/8203939/45629478-85422280-ba64-11e8-86b3-50b637d7295f.png) When the page is refreshed.... the comment will be hidden from the community. ![regular report 10](https://user-images.githubusercontent.com/8203939/45506880-84587a80-b75e-11e8-95f4-5f25c9fb0413.png) --- ### 3. A regular user can delete all comments in their own space. • changes visibility: community can't see ![regular delete 2](https://user-images.githubusercontent.com/8203939/45506651-e9f83700-b75d-11e8-92d7-10f43cf90fe3.png) ![regular delete 3](https://user-images.githubusercontent.com/8203939/45506652-e9f83700-b75d-11e8-9da0-73b079efb99a.png) ![regular delete 4](https://user-images.githubusercontent.com/8203939/45506653-e9f83700-b75d-11e8-948b-6fbe50471363.png)
non_test
ux— comment actions for regular scratch users overview these drawings show how regular scratch users use report delete on comments right now i m showing how behavior currently works in these ux flows might change related links comment states overview questions things to design an interstitial way to communicate that your comment has been reported even if it looks normal on refresh edge cases to discuss users reporting a comment and understanding that it
won’t immediately be hidden unless they are the reporter designs in progress note in these drawings pizzacat represents the regular scratcher who is logged in img width alt screen shot at pm src note in reality actions should only appear on hover in mockups they appear on every comment img width alt screen shot at pm src a regular user can report all comments in any space except for their own comment
• does not change comment visibility
• creates ticket in moderation queue note reported comment stays when it s not your own profile or project when the page is refreshed the comment will still be visible to the community styling returns to normal not the most ideal behavior still changing comment state after page refresh is not something we can address right now a regular user can report remove all comments in their own space except for their own comment • changes visibility community can t see
• creates ticket in moderation queue note reported comment disappears when it is your profile or project messaging is different here because the comment is going to be automatically removed is this too much overhead messaging i also wonder if we should just remove the comment like we do for delete that could be misleading but keeping the comment around might also be confusing when the page is refreshed the comment will be hidden from the community a regular user can delete all comments in their own space • changes visibility community can t see
0
563,324
16,680,312,786
IssuesEvent
2021-06-07 22:20:34
bounswe/2021SpringGroup10
https://api.github.com/repos/bounswe/2021SpringGroup10
opened
Opening pull requests ASAP
Priority: High
Hi folks, please finish up your code and open a pull request so we can move on. I hope you have a nice day.
1.0
Opening pull requests ASAP - Hi folks, please finish up your code and open a pull request so we can move on. I hope you have a nice day.
non_test
opening pull requests asap hi folks please finish up your code and open a pull request so we can move on i hope you have a nice day
0
163,697
12,741,879,469
IssuesEvent
2020-06-26 07:15:37
Vachok/ftpplus
https://api.github.com/repos/Vachok/ftpplus
closed
testToString
TestQuality bug
Execute DataSynchronizerTest::testToString\*\*testToString\*\* \*DataSynchronizerTest\* \*expected [DataSynchronizer[ dbToSync = 'velkom.velkompc', columnName = 'idrec', dataConnectTo = {"class":"MySqlLocalSRVInetStat","hash":-474293888,"timestamp":1592563575075,"dbName":"velkom","tableName":"velkom"}, colNames = {}, columnsNum = 0 ]] but found [DataSynchronizer[ dbToSync = 'velkom.velkompc', columnName = 'idrec', dataConnectTo = {"class":"MySqlLocalSRVInetStat","hash":-474293888,"timestamp":1592732995753,"dbName":"velkom","tableName":"velkom"}, colNames = {}, columnsNum = 0 ]]\* \*java.lang.AssertionError\*
1.0
testToString - Execute DataSynchronizerTest::testToString\*\*testToString\*\* \*DataSynchronizerTest\* \*expected [DataSynchronizer[ dbToSync = 'velkom.velkompc', columnName = 'idrec', dataConnectTo = {"class":"MySqlLocalSRVInetStat","hash":-474293888,"timestamp":1592563575075,"dbName":"velkom","tableName":"velkom"}, colNames = {}, columnsNum = 0 ]] but found [DataSynchronizer[ dbToSync = 'velkom.velkompc', columnName = 'idrec', dataConnectTo = {"class":"MySqlLocalSRVInetStat","hash":-474293888,"timestamp":1592732995753,"dbName":"velkom","tableName":"velkom"}, colNames = {}, columnsNum = 0 ]]\* \*java.lang.AssertionError\*
test
testtostring execute datasynchronizertest testtostring testtostring datasynchronizertest expected but found java lang assertionerror
1
3,538
9,741,982,810
IssuesEvent
2019-06-02 13:34:54
Bledhard/GreenShop
https://api.github.com/repos/Bledhard/GreenShop
closed
Redesign WBS API Gateway using Ocelot
architecture
I've constructed the WBS in the wrong way, duplicating the code from Catalog API. That makes our gateway not fit the purpose for which it was created. After some investigation, I found out that we can arrange this Facade-ish logic using a lightweight API gateway middleware - Ocelot. Later on, when we need some specific logic created on the gateway side, we will be able to create an aggregating microservice that will contain all these cross-cutting algorithms.
1.0
Redesign WBS API Gateway using Ocelot - I've constructed the WBS in the wrong way, duplicating the code from Catalog API. That makes our gateway not fit the purpose for which it was created. After some investigation, I found out that we can arrange this Facade-ish logic using a lightweight API gateway middleware - Ocelot. Later on, when we need some specific logic created on the gateway side, we will be able to create an aggregating microservice that will contain all these cross-cutting algorithms.
non_test
redesign wbs api gateway using ocelot i ve constructed the wbs in the wrong way duplicating the code from catalog api that makes our gateway not fit the purpose for which it was created after some investigation i found out that we can arrange this facade ish logic using a lightweight api gateway middleware ocelot later on when we need some specific logic created on the gateway side we will be able to create an aggregating microservice that will contain all these cross cutting algorithms
0
190,810
6,822,733,463
IssuesEvent
2017-11-07 21:07:52
Microsoft/AdaptiveCards
https://api.github.com/repos/Microsoft/AdaptiveCards
closed
UWP Renderer considerations for fixed height containers like Tiles, Timeline
Area-Renderers Platform-UWP Priority-Later
Dropping elements that don't fit, etc.
1.0
UWP Renderer considerations for fixed height containers like Tiles, Timeline - Dropping elements that don't fit, etc.
non_test
uwp renderer considerations for fixed height containers like tiles timeline dropping elements that don t fit etc
0
73,209
31,988,069,367
IssuesEvent
2023-09-21 02:06:06
vmware/singleton
https://api.github.com/repos/vmware/singleton
closed
[Security][Service] Please upgrade dependency libraries to latest version (guava)
area/java-service kind/security priority/high
In the Singleton Service build, the dependency library below is out of date; please upgrade it to the latest version: - guava(30.1.1-jre)
1.0
[Security][Service] Please upgrade dependency libraries to latest version (guava) - In the Singleton Service build, the dependency library below is out of date; please upgrade it to the latest version: - guava(30.1.1-jre)
non_test
please upgrade dependency libraries to latest version guava in the singleton service build the dependency library below is out of date please upgrade it to the latest version guava jre
0
70,381
7,186,920,494
IssuesEvent
2018-02-02 01:51:23
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Test: System.Linq.Parallel.Tests.FirstFirstOrDefaultTests/First_NoMatch_Longrunning failed with "System.AggregateException"
area-System.Linq.Parallel test bug test-run-core
Opened on behalf of @Jiayili1 The test `System.Linq.Parallel.Tests.FirstFirstOrDefaultTests/First_NoMatch_Longrunning(labeled: Enumerable.Range-Ordered, count: 65536, position: 0)` has failed. Assert.Throws() Failure Expected: typeof(System.InvalidOperationException) Actual: typeof(System.AggregateException): One or more errors occurred. (One or more errors occurred. (Object reference not set to an instance of an object.)) (One or more errors occurred. (Object reference not set to an instance of an object.)) (One or more errors occurred. (Object reference not set to an instance of an object.)) (One or more errors occurred. (Object reference not set to an instance of an object.)) (Object reference not set to an instance of an object.) (Object reference not set to an instance of an object.) (Object reference not set to an instance of an object.) (Object reference not set to an instance of an object.) Stack Trace: at System.Linq.Parallel.QueryTaskGroupState.QueryEnd(Boolean userInitiatedDispose) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/Scheduling/QueryTaskGroupState.cs:line 132 at System.Linq.Parallel.OrderPreservingSpoolingTask`2.Spool(QueryTaskGroupState groupState, PartitionedStream`2 partitions, Shared`1 results, TaskScheduler taskScheduler) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/Scheduling/OrderPreservingSpoolingTask.cs:line 123 at System.Linq.Parallel.MergeExecutor`1.Execute[TKey](PartitionedStream`2 partitions, Boolean ignoreOutput, ParallelMergeOptions options, TaskScheduler taskScheduler, Boolean isOrdered, CancellationState cancellationState, Int32 queryId) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/Merging/MergeExecutor.cs:line 93 at System.Linq.Parallel.PartitionedStreamMerger`1.Receive[TKey](PartitionedStream`2 partitionedStream) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/PartitionedStreamMerger.cs:line 61 at System.Linq.Parallel.FirstQueryOperator`1.WrapHelper[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, QuerySettings settings) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/Unary/FirstQueryOperator.cs:line 92 at System.Linq.Parallel.FirstQueryOperator`1.WrapPartitionedStream[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, Boolean preferStriping, QuerySettings settings) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/Unary/FirstQueryOperator.cs:line 70 at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.ChildResultsRecipient.Receive[TKey](PartitionedStream`2 inputStream) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/UnaryQueryOperator.cs:line 162 at System.Linq.Parallel.ScanQueryOperator`1.ScanEnumerableQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/ScanQueryOperator.cs:line 138 at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient) in 
/Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/UnaryQueryOperator.cs:line 132 at System.Linq.Parallel.QueryOperator`1.GetOpenedEnumerator(Nullable`1 mergeOptions, Boolean suppressOrder, Boolean forEffect, QuerySettings querySettings) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/QueryOperator.cs:line 166 at System.Linq.Parallel.QueryOpeningEnumerator`1.OpenQuery() in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/QueryOpeningEnumerator.cs:line 164 at System.Linq.Parallel.QueryOpeningEnumerator`1.MoveNext() in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/QueryOpeningEnumerator.cs:line 111 at System.Linq.ParallelEnumerable.GetOneWithPossibleDefault[TSource](QueryOperator`1 queryOp, Boolean throwIfTwo, Boolean defaultIfEmpty) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/ParallelEnumerable.cs:line 5349 at System.Linq.ParallelEnumerable.First[TSource](ParallelQuery`1 source, Func`2 predicate) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/ParallelEnumerable.cs:line 5467 at System.Linq.Parallel.Tests.FirstFirstOrDefaultTests.<>c__DisplayClass9_0.<First_NoMatch>b__0() in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/tests/QueryOperators/FirstFirstOrDefaultTests.cs:line 100 Build : Master - 20170821.01 (Core Tests) Failing configurations: - OSX.1012.Amd64-x64 - Release Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170821.01/workItem/System.Linq.Parallel.Tests/analysis/xunit/System.Linq.Parallel.Tests.FirstFirstOrDefaultTests~2FFirst_NoMatch_Longrunning(labeled:%20Enumerable.Range-Ordered,%20count:%2065536,%20position:%200)
2.0
Test: System.Linq.Parallel.Tests.FirstFirstOrDefaultTests/First_NoMatch_Longrunning failed with "System.AggregateException" - Opened on behalf of @Jiayili1 The test `System.Linq.Parallel.Tests.FirstFirstOrDefaultTests/First_NoMatch_Longrunning(labeled: Enumerable.Range-Ordered, count: 65536, position: 0)` has failed. Assert.Throws() Failure Expected: typeof(System.InvalidOperationException) Actual: typeof(System.AggregateException): One or more errors occurred. (One or more errors occurred. (Object reference not set to an instance of an object.)) (One or more errors occurred. (Object reference not set to an instance of an object.)) (One or more errors occurred. (Object reference not set to an instance of an object.)) (One or more errors occurred. (Object reference not set to an instance of an object.)) (Object reference not set to an instance of an object.) (Object reference not set to an instance of an object.) (Object reference not set to an instance of an object.) (Object reference not set to an instance of an object.) Stack Trace: at System.Linq.Parallel.QueryTaskGroupState.QueryEnd(Boolean userInitiatedDispose) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/Scheduling/QueryTaskGroupState.cs:line 132 at System.Linq.Parallel.OrderPreservingSpoolingTask`2.Spool(QueryTaskGroupState groupState, PartitionedStream`2 partitions, Shared`1 results, TaskScheduler taskScheduler) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/Scheduling/OrderPreservingSpoolingTask.cs:line 123 at System.Linq.Parallel.MergeExecutor`1.Execute[TKey](PartitionedStream`2 partitions, Boolean ignoreOutput, ParallelMergeOptions options, TaskScheduler taskScheduler, Boolean isOrdered, CancellationState cancellationState, Int32 queryId) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/Merging/MergeExecutor.cs:line 93 at System.Linq.Parallel.PartitionedStreamMerger`1.Receive[TKey](PartitionedStream`2 partitionedStream) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/PartitionedStreamMerger.cs:line 61 at System.Linq.Parallel.FirstQueryOperator`1.WrapHelper[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, QuerySettings settings) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/Unary/FirstQueryOperator.cs:line 92 at System.Linq.Parallel.FirstQueryOperator`1.WrapPartitionedStream[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, Boolean preferStriping, QuerySettings settings) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/Unary/FirstQueryOperator.cs:line 70 at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.ChildResultsRecipient.Receive[TKey](PartitionedStream`2 inputStream) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/UnaryQueryOperator.cs:line 162 at System.Linq.Parallel.ScanQueryOperator`1.ScanEnumerableQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/ScanQueryOperator.cs:line 138 at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient) in 
/Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/UnaryQueryOperator.cs:line 132 at System.Linq.Parallel.QueryOperator`1.GetOpenedEnumerator(Nullable`1 mergeOptions, Boolean suppressOrder, Boolean forEffect, QuerySettings querySettings) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/QueryOperator.cs:line 166 at System.Linq.Parallel.QueryOpeningEnumerator`1.OpenQuery() in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/QueryOpeningEnumerator.cs:line 164 at System.Linq.Parallel.QueryOpeningEnumerator`1.MoveNext() in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/Parallel/QueryOperators/QueryOpeningEnumerator.cs:line 111 at System.Linq.ParallelEnumerable.GetOneWithPossibleDefault[TSource](QueryOperator`1 queryOp, Boolean throwIfTwo, Boolean defaultIfEmpty) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/ParallelEnumerable.cs:line 5349 at System.Linq.ParallelEnumerable.First[TSource](ParallelQuery`1 source, Func`2 predicate) in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/src/System/Linq/ParallelEnumerable.cs:line 5467 at System.Linq.Parallel.Tests.FirstFirstOrDefaultTests.<>c__DisplayClass9_0.<First_NoMatch>b__0() in /Users/buildagent/agent/_work/30/s/corefx/src/System.Linq.Parallel/tests/QueryOperators/FirstFirstOrDefaultTests.cs:line 100 Build : Master - 20170821.01 (Core Tests) Failing configurations: - OSX.1012.Amd64-x64 - Release Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170821.01/workItem/System.Linq.Parallel.Tests/analysis/xunit/System.Linq.Parallel.Tests.FirstFirstOrDefaultTests~2FFirst_NoMatch_Longrunning(labeled:%20Enumerable.Range-Ordered,%20count:%2065536,%20position:%200)
test
test system linq parallel tests firstfirstordefaulttests first nomatch longrunning failed with system aggregateexception opened on behalf of the test system linq parallel tests firstfirstordefaulttests first nomatch longrunning labeled enumerable range ordered count position has failed assert throws failure expected typeof system invalidoperationexception actual typeof system aggregateexception one or more errors occurred one or more errors occurred object reference not set to an instance of an object one or more errors occurred object reference not set to an instance of an object one or more errors occurred object reference not set to an instance of an object one or more errors occurred object reference not set to an instance of an object object reference not set to an instance of an object object reference not set to an instance of an object object reference not set to an instance of an object object reference not set to an instance of an object stack trace at system linq parallel querytaskgroupstate queryend boolean userinitiateddispose in users buildagent agent work s corefx src system linq parallel src system linq parallel scheduling querytaskgroupstate cs line at system linq parallel orderpreservingspoolingtask spool querytaskgroupstate groupstate partitionedstream partitions shared results taskscheduler taskscheduler in users buildagent agent work s corefx src system linq parallel src system linq parallel scheduling orderpreservingspoolingtask cs line at system linq parallel mergeexecutor execute partitionedstream partitions boolean ignoreoutput parallelmergeoptions options taskscheduler taskscheduler boolean isordered cancellationstate cancellationstate queryid in users buildagent agent work s corefx src system linq parallel src system linq parallel merging mergeexecutor cs line at system linq parallel partitionedstreammerger receive partitionedstream partitionedstream in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators partitionedstreammerger cs line at system linq parallel firstqueryoperator wraphelper partitionedstream inputstream ipartitionedstreamrecipient recipient querysettings settings in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators unary firstqueryoperator cs line at system linq parallel firstqueryoperator wrappartitionedstream partitionedstream inputstream ipartitionedstreamrecipient recipient boolean preferstriping querysettings settings in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators unary firstqueryoperator cs line at system linq parallel unaryqueryoperator unaryqueryoperatorresults childresultsrecipient receive partitionedstream inputstream in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators unaryqueryoperator cs line at system linq parallel scanqueryoperator scanenumerablequeryoperatorresults givepartitionedstream ipartitionedstreamrecipient recipient in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators scanqueryoperator cs line at system linq parallel unaryqueryoperator unaryqueryoperatorresults givepartitionedstream ipartitionedstreamrecipient recipient in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators unaryqueryoperator cs line at system linq parallel queryoperator getopenedenumerator nullable mergeoptions boolean suppressorder boolean 
foreffect querysettings querysettings in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators queryoperator cs line at system linq parallel queryopeningenumerator openquery in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators queryopeningenumerator cs line at system linq parallel queryopeningenumerator movenext in users buildagent agent work s corefx src system linq parallel src system linq parallel queryoperators queryopeningenumerator cs line at system linq parallelenumerable getonewithpossibledefault queryoperator queryop boolean throwiftwo boolean defaultifempty in users buildagent agent work s corefx src system linq parallel src system linq parallelenumerable cs line at system linq parallelenumerable first parallelquery source func predicate in users buildagent agent work s corefx src system linq parallel src system linq parallelenumerable cs line at system linq parallel tests firstfirstordefaulttests c b in users buildagent agent work s corefx src system linq parallel tests queryoperators firstfirstordefaulttests cs line build master core tests failing configurations osx release detail
1
75,443
14,448,558,860
IssuesEvent
2020-12-08 06:29:54
numbersprotocol/capture-lite
https://api.github.com/repos/numbersprotocol/capture-lite
closed
Refactor: Numbers DIA Backend
code
Decouple the dependencies for DIA backend. ``` PushNotification, HttpClient ^ DiaBackend ^ Local Database Tables ^ Repositories ``` - [x] Extract `/auth` endpoints to a standalone service. - [x] Extract `/api/v2/assets` endpoints to a standalone service with `Asset` repository. - [x] Extract `/api/v2/transactions` endpoints to a standalone service with `Inbox` and `Transaction` repository. - [x] Remove `Publisher` interface and `PublisherAlert` service. - [x] Reimplement `IgnoredTransactionRepository`.
1.0
Refactor: Numbers DIA Backend - Decouple the dependencies for DIA backend. ``` PushNotification, HttpClient ^ DiaBackend ^ Local Database Tables ^ Repositories ``` - [x] Extract `/auth` endpoints to a standalone service. - [x] Extract `/api/v2/assets` endpoints to a standalone service with `Asset` repository. - [x] Extract `/api/v2/transactions` endpoints to a standalone service with `Inbox` and `Transaction` repository. - [x] Remove `Publisher` interface and `PublisherAlert` service. - [x] Reimplement `IgnoredTransactionRepository`.
non_test
refactor numbers dia backend decouple the dependencies for dia backend pushnotification httpclient diabackend local database tables repositories extract auth endpoints to a standalone service extract api assets endpoints to a standalone service with asset repository extract api transactions endpoints to a standalone service with inbox and transaction repository remove publisher interface and publisheralert service reimplement ignoredtransactionrepository
0
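The refactor record above describes a strict layering (push/HTTP at the top, the DiaBackend endpoint services in the middle, repositories over local database tables at the bottom), with each endpoint group extracted into a standalone service. As an illustration only, here is a minimal sketch of that shape. capture-lite itself is a TypeScript project; Python is used here simply to keep all examples in this document in one language, and every class, field, and endpoint-handling detail below is hypothetical.

```python
# Minimal sketch of the layering in the record above; all names hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Asset:
    asset_id: str
    proof_hash: str


class AssetRepository:
    """Bottom layer: a thin wrapper over a local database table."""

    def __init__(self) -> None:
        self._rows: Dict[str, Asset] = {}

    def upsert(self, asset: Asset) -> None:
        self._rows[asset.asset_id] = asset

    def all(self) -> List[Asset]:
        return list(self._rows.values())


class DiaBackendAssetService:
    """Middle layer: a standalone service for the /api/v2/assets endpoints.

    It depends only on an injected HTTP fetch callable (top layer) and on
    its own repository (bottom layer), matching the dependency diagram.
    """

    def __init__(self, fetch: Callable[[str], List[dict]], repo: AssetRepository) -> None:
        self._fetch = fetch
        self._repo = repo

    def refresh(self) -> None:
        for raw in self._fetch("/api/v2/assets/"):
            self._repo.upsert(Asset(raw["id"], raw["proof_hash"]))


# Usage with a stubbed HTTP layer:
repo = AssetRepository()
service = DiaBackendAssetService(lambda path: [{"id": "a1", "proof_hash": "h1"}], repo)
service.refresh()
print(repo.all())
```

The point of the split is that the transaction and auth services can be built the same way without importing each other, which is what the record's checklist items decouple.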
183,711
14,247,322,940
IssuesEvent
2020-11-19 11:17:12
filecoin-project/rust-fil-proofs
https://api.github.com/repos/filecoin-project/rust-fil-proofs
closed
Improve CI speed
tests
We can improve the CI speed by splitting up the tests per crate and as such have more jobs executed in parallel. E.g. ``` job1: cargo test -p storage-proofs-core job2: cargo test -p storage-proofs-porep job3: cargo test -p storage-proofs-post ```
1.0
Improve CI speed - We can improve the CI speed by splitting up the tests per crate and as such have more jobs executed in parallel. E.g. ``` job1: cargo test -p storage-proofs-core job2: cargo test -p storage-proofs-porep job3: cargo test -p storage-proofs-post ```
test
improve ci speed we can improve the ci speed by splitting up the tests per crate and as such have more jobs executed in parallel e g cargo test p storage proofs core cargo test p storage proofs porep cargo test p storage proofs post
1
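To make the parallel-jobs idea in the record above concrete, here is a rough sketch that runs one `cargo test -p <crate>` invocation per crate concurrently. In a real CI these would be separate pipeline jobs rather than local threads; only the three crate names come from the issue text, everything else is an assumption.

```python
# Sketch: run the per-crate test jobs from the record above concurrently.
import subprocess
from concurrent.futures import ThreadPoolExecutor

CRATES = [
    "storage-proofs-core",
    "storage-proofs-porep",
    "storage-proofs-post",
]


def run_crate_tests(crate: str) -> int:
    # Each invocation builds and tests only the named crate.
    return subprocess.run(["cargo", "test", "-p", crate]).returncode


with ThreadPoolExecutor(max_workers=len(CRATES)) as pool:
    codes = list(pool.map(run_crate_tests, CRATES))

# Mirror CI behaviour: the overall run fails if any per-crate job failed.
raise SystemExit(max(codes))
```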
96,122
16,113,230,678
IssuesEvent
2021-04-28 01:52:34
jgeraigery/cloud-native-starter
https://api.github.com/repos/jgeraigery/cloud-native-starter
closed
CVE-2020-11112 (High) detected in multiple libraries - autoclosed
security vulnerability
## CVE-2020-11112 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.10.2.jar</b>, <b>jackson-databind-2.9.10.1.jar</b>, <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.9.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.10.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/reactive/articles-reactive/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.2/jackson-databind-2.9.10.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.2/jackson-databind-2.9.10.2.jar</p> <p> Dependency Hierarchy: - quarkus-smallrye-openapi-1.1.1.Final.jar (Root Library) - smallrye-open-api-1.1.20.jar - :x: **jackson-databind-2.9.10.2.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.10.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/reactive/web-api-reactive/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.1/jackson-databind-2.9.10.1.jar</p> <p> Dependency Hierarchy: - quarkus-smallrye-reactive-messaging-kafka-1.0.1.Final.jar (Root Library) - quarkus-jackson-1.0.1.Final.jar - :x: **jackson-databind-2.9.10.1.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/authors-java-spring-boot/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,cloud-native-starter/authors-java-spring-boot/target/liberty/wlp/usr/shared/resources/lib.index.cache/23/51c3eba73a545db9079f5d6d768347ad72666537362c8220fe3e950a55a864/jackson-databind-2.9.8.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.8.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/articles-java-spring-boot/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.1.6.RELEASE.jar (Root Library) - spring-boot-actuator-autoconfigure-2.1.6.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/cloud-native-starter/commit/9c841ea96590f71f0a576c3f6e007612cc9dea4e">9c841ea96590f71f0a576c3f6e007612cc9dea4e</a></p> 
<p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy). <p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112>CVE-2020-11112</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10.2","packageFilePaths":["/reactive/articles-reactive/pom.xml","/reactive/articles-synch/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.quarkus:quarkus-smallrye-openapi:1.1.1.Final;io.smallrye:smallrye-open-api:1.1.20;com.fasterxml.jackson.core:jackson-databind:2.9.10.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10.1","packageFilePaths":["/reactive/web-api-reactive/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.quarkus:quarkus-smallrye-reactive-messaging-kafka:1.0.1.Final;io.quarkus:quarkus-jackson:1.0.1.Final;com.fasterxml.jackson.core:jackson-databind:2.9.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/authors-java-spring-boot/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9","packageFilePaths":["/articles-java-spring-boot/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-actuator:2.1.6.RELEASE;org.springframework.boot:spring-boot-actuator-autoconfigure:2.1.6.RELEASE;com.fasterxml.jackson.core:jackson-
databind:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11112","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-11112 (High) detected in multiple libraries - autoclosed - ## CVE-2020-11112 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.10.2.jar</b>, <b>jackson-databind-2.9.10.1.jar</b>, <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.9.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.10.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/reactive/articles-reactive/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.2/jackson-databind-2.9.10.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.2/jackson-databind-2.9.10.2.jar</p> <p> Dependency Hierarchy: - quarkus-smallrye-openapi-1.1.1.Final.jar (Root Library) - smallrye-open-api-1.1.20.jar - :x: **jackson-databind-2.9.10.2.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.10.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/reactive/web-api-reactive/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.1/jackson-databind-2.9.10.1.jar</p> <p> Dependency Hierarchy: - quarkus-smallrye-reactive-messaging-kafka-1.0.1.Final.jar (Root Library) - quarkus-jackson-1.0.1.Final.jar - :x: **jackson-databind-2.9.10.1.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/authors-java-spring-boot/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,cloud-native-starter/authors-java-spring-boot/target/liberty/wlp/usr/shared/resources/lib.index.cache/23/51c3eba73a545db9079f5d6d768347ad72666537362c8220fe3e950a55a864/jackson-databind-2.9.8.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.8.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: cloud-native-starter/articles-java-spring-boot/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.1.6.RELEASE.jar (Root Library) - spring-boot-actuator-autoconfigure-2.1.6.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a 
href="https://github.com/jgeraigery/cloud-native-starter/commit/9c841ea96590f71f0a576c3f6e007612cc9dea4e">9c841ea96590f71f0a576c3f6e007612cc9dea4e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy). <p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112>CVE-2020-11112</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10.2","packageFilePaths":["/reactive/articles-reactive/pom.xml","/reactive/articles-synch/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.quarkus:quarkus-smallrye-openapi:1.1.1.Final;io.smallrye:smallrye-open-api:1.1.20;com.fasterxml.jackson.core:jackson-databind:2.9.10.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10.1","packageFilePaths":["/reactive/web-api-reactive/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.quarkus:quarkus-smallrye-reactive-messaging-kafka:1.0.1.Final;io.quarkus:quarkus-jackson:1.0.1.Final;com.fasterxml.jackson.core:jackson-databind:2.9.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/authors-java-spring-boot/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9","packageFilePaths":["/articles-java-spring-boot/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-actuator:2.1.6.RELEASE;org.springframework.boot:spring-boot-actuator-autoconfigure:2.1.6.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11112","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_test
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file cloud native starter reactive articles reactive pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy quarkus smallrye openapi final jar root library smallrye open api jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file cloud native starter reactive web api reactive pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy quarkus smallrye reactive messaging kafka final jar root library quarkus jackson final jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file cloud native starter authors java spring boot pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar cloud native starter authors java spring boot target liberty wlp usr shared resources lib index cache jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file cloud native starter articles java spring boot pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter actuator release jar root library spring boot actuator autoconfigure release jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons proxy provider remoting rmiprovider aka apache commons proxy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree io quarkus quarkus smallrye openapi final io smallrye smallrye open api com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree io quarkus quarkus smallrye reactive messaging kafka final io quarkus quarkus jackson final com fasterxml jackson core jackson databind 
isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter actuator release org springframework boot spring boot actuator autoconfigure release com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons proxy provider remoting rmiprovider aka apache commons proxy vulnerabilityurl
0
268,381
23,364,981,630
IssuesEvent
2022-08-10 14:41:07
backend-br/vagas
https://api.github.com/repos/backend-br/vagas
closed
[REMOTE] Back-end Developer Java @Invillia
CLT Pleno Java Remoto AWS NoSQL Spring Testes Unitários SQL Git
### Our company Come closer. Invillia has reinvented the way creative, digital companies take part in the creative world. It has innovated how people who are passionate about technology can collaborate from anywhere and keep evolving, wherever on the planet they are. For Invillia, it doesn't matter where you are. Whether it's a big country or a small town. What matters is your drive. Your ideas. Your potential. The size of your talent_ ### Responsibilities and Duties This professional will be responsible for providing technical solutions for new features and giving the necessary support to existing features; after all, it's not all roses. We also expect this person to help the other team members with technical questions, never forgetting to provide the best solution for the business. Something we value highly is quality, which includes clean, readable code (clean code). An intrapreneurial profile is also desirable, with personal goals aligned with the company's goals; after all, we are very proud of what we do here! ### Requirements and Qualifications Experience in Java development; Architecture definition, acting as a Technical Reference; Experience developing with Spring (Boot, Data, Cache, etc.); Knowledge of Java 8 (minimum); Knowledge of Kafka; Knowledge of AWS (SNS, SQS, S3); Knowledge of Git and Git-Flow; Experience with SQL and NoSQL databases; Development with a focus on quality: unit tests and Sonar (metrics); Experience with microservices and concurrent systems; Continuous delivery (Jenkins). ### Type of contract CLT, Remote ### How to apply Please send an email to [pamela.moreira@invillia.com] with your CV attached and your salary expectations - subject line: Vaga JAVA. ### Location Remote ### Regime CLT ### Level Mid-level Senior
1.0
[REMOTE] Back-end Developer Java @Invillia - ### Our company Come closer. Invillia has reinvented the way creative, digital companies take part in the creative world. It has innovated how people who are passionate about technology can collaborate from anywhere and keep evolving, wherever on the planet they are. For Invillia, it doesn't matter where you are. Whether it's a big country or a small town. What matters is your drive. Your ideas. Your potential. The size of your talent_ ### Responsibilities and Duties This professional will be responsible for providing technical solutions for new features and giving the necessary support to existing features; after all, it's not all roses. We also expect this person to help the other team members with technical questions, never forgetting to provide the best solution for the business. Something we value highly is quality, which includes clean, readable code (clean code). An intrapreneurial profile is also desirable, with personal goals aligned with the company's goals; after all, we are very proud of what we do here! ### Requirements and Qualifications Experience in Java development; Architecture definition, acting as a Technical Reference; Experience developing with Spring (Boot, Data, Cache, etc.); Knowledge of Java 8 (minimum); Knowledge of Kafka; Knowledge of AWS (SNS, SQS, S3); Knowledge of Git and Git-Flow; Experience with SQL and NoSQL databases; Development with a focus on quality: unit tests and Sonar (metrics); Experience with microservices and concurrent systems; Continuous delivery (Jenkins). ### Type of contract CLT, Remote ### How to apply Please send an email to [pamela.moreira@invillia.com] with your CV attached and your salary expectations - subject line: Vaga JAVA. ### Location Remote ### Regime CLT ### Level Mid-level Senior
test
back end developer java invillia our company come closer invillia has reinvented the way creative digital companies take part in the creative world it has innovated how people who are passionate about technology can collaborate from anywhere and keep evolving wherever on the planet they are for invillia it doesn t matter where you are whether it s a big country or a small town what matters is your drive your ideas your potential the size of your talent responsibilities and duties this professional will be responsible for providing technical solutions for new features and giving the necessary support to existing features after all it s not all roses we also expect this person to help the other team members with technical questions never forgetting to provide the best solution for the business something we value highly is quality which includes clean readable code clean code an intrapreneurial profile is also desirable with personal goals aligned with the company s goals after all we are very proud of what we do here requirements and qualifications experience in java development architecture definition acting as a technical reference experience developing with spring boot data cache etc knowledge of java minimum knowledge of kafka knowledge of aws sns sqs knowledge of git and git flow experience with sql and nosql databases development with a focus on quality unit tests and sonar metrics experience with microservices and concurrent systems continuous delivery jenkins type of contract clt remote how to apply please send an email to with your cv attached and your salary expectations subject line vaga java location remote regime clt level mid level senior
1
173,839
21,182,105,620
IssuesEvent
2022-04-08 08:58:27
MikeSPtr/Android-Coroutines-CleanArchitecture-MVI
https://api.github.com/repos/MikeSPtr/Android-Coroutines-CleanArchitecture-MVI
reopened
hilt-android-compiler-2.38.1.jar: 1 vulnerabilities (highest severity is: 3.3)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hilt-android-compiler-2.38.1.jar</b></p></summary> <p></p> <p>Path to dependency file: /app/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/MikeSPtr/Android-Coroutines-CleanArchitecture-MVI/commit/65c7cc095d28830758375c274c0cad72d56b95eb">65c7cc095d28830758375c274c0cad72d56b95eb</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2020-8908](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.3 | guava-27.1-jre.jar | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2020-8908</summary> ### Vulnerable Library - <b>guava-27.1-jre.jar</b></p> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more.</p> <p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p> <p>Path to dependency file: /common-android/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar</p> <p> Dependency Hierarchy: - hilt-android-compiler-2.38.1.jar (Root Library) - :x: **guava-27.1-jre.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MikeSPtr/Android-Coroutines-CleanArchitecture-MVI/commit/65c7cc095d28830758375c274c0cad72d56b95eb">65c7cc095d28830758375c274c0cad72d56b95eb</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured. 
<p>Publish Date: 2020-12-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908>CVE-2020-8908</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>3.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908</a></p> <p>Release Date: 2020-12-10</p> <p>Fix Resolution: v30.0</p> </p> <p></p> Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"27.1-jre","packageFilePaths":["/common-android/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.google.dagger:hilt-android-compiler:2.38.1;com.google.guava:guava:27.1-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v30.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-8908","vulnerabilityDetails":"A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime\u0027s java.io.tmpdir system property to point to a location whose permissions are appropriately configured.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908","cvss3Severity":"low","cvss3Score":"3.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> -->
True
hilt-android-compiler-2.38.1.jar: 1 vulnerabilities (highest severity is: 3.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hilt-android-compiler-2.38.1.jar</b></p></summary> <p></p> <p>Path to dependency file: /app/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/MikeSPtr/Android-Coroutines-CleanArchitecture-MVI/commit/65c7cc095d28830758375c274c0cad72d56b95eb">65c7cc095d28830758375c274c0cad72d56b95eb</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2020-8908](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.3 | guava-27.1-jre.jar | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2020-8908</summary> ### Vulnerable Library - <b>guava-27.1-jre.jar</b></p> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more.</p> <p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p> <p>Path to dependency file: /common-android/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar</p> <p> Dependency Hierarchy: - hilt-android-compiler-2.38.1.jar (Root Library) - :x: **guava-27.1-jre.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MikeSPtr/Android-Coroutines-CleanArchitecture-MVI/commit/65c7cc095d28830758375c274c0cad72d56b95eb">65c7cc095d28830758375c274c0cad72d56b95eb</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured. 
<p>Publish Date: 2020-12-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908>CVE-2020-8908</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>3.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908</a></p> <p>Release Date: 2020-12-10</p> <p>Fix Resolution: v30.0</p> </p> <p></p> Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"27.1-jre","packageFilePaths":["/common-android/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.google.dagger:hilt-android-compiler:2.38.1;com.google.guava:guava:27.1-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v30.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-8908","vulnerabilityDetails":"A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime\u0027s java.io.tmpdir system property to point to a location whose permissions are appropriately configured.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908","cvss3Severity":"low","cvss3Score":"3.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> -->
non_test
hilt android compiler jar vulnerabilities highest severity is vulnerable library hilt android compiler jar path to dependency file app build gradle path to vulnerable library home wss scanner gradle caches modules files com google guava guava jre guava jre jar home wss scanner gradle caches modules files com google guava guava jre guava jre jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available low guava jre jar transitive n a details cve vulnerable library guava jre jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more library home page a href path to dependency file common android build gradle path to vulnerable library home wss scanner gradle caches modules files com google guava guava jre guava jre jar home wss scanner gradle caches modules files com google guava guava jre guava jre jar dependency hierarchy hilt android compiler jar root library x guava jre jar vulnerable library found in head commit a href found in base branch main vulnerability details a temp directory creation vulnerability exists in all versions of guava allowing an attacker with access to the machine to potentially access data in a temporary directory created by the guava api com google common io files createtempdir by default on unix like systems the created directory is world readable readable by an attacker with access to the system the method in question has been marked deprecated in versions and later and should not be used for android developers we recommend choosing a temporary directory api provided by android such as context getcachedir for other java developers we recommend migrating to the java api java nio file files createtempdirectory which explicitly configures permissions of or configuring the java runtime s java io tmpdir system property to point to a location whose permissions are appropriately configured publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource istransitivedependency true dependencytree com google dagger hilt android compiler com google guava guava jre isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails a temp directory creation vulnerability exists in all versions of guava allowing an attacker with access to the machine to potentially access data in a temporary directory created by the guava api com google common io files createtempdir by default on unix like systems the created directory is world readable readable by an attacker with access to the system the method in question has been marked deprecated in versions and later and should not be used for android developers we recommend choosing a temporary directory api provided by android such as context getcachedir for other java developers we recommend migrating to the java api java nio file files createtempdirectory which explicitly configures permissions of or configuring the java runtime java io tmpdir system property to point to a location whose permissions are appropriately configured vulnerabilityurl
0
25,485
12,248,562,829
IssuesEvent
2020-05-05 17:43:05
terraform-providers/terraform-provider-aws
https://api.github.com/repos/terraform-providers/terraform-provider-aws
closed
aws_cloudwatch_metric_alarm to support StatusCheckFailed without operator and statistic
enhancement service/cloudwatch stale
_This issue was originally opened by @rmldsky as hashicorp/terraform#6088. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._ <hr> Not sure, but I think something changed on the AWS CloudWatch console. Previously, when setting up alarms for `StatusCheckFailed`, I am pretty sure there were operators to pick from for this metric. Now it is predefined for the user, hence no operator to choose from. <img width="449" alt="screenshot 2016-04-08 14 05 57" src="https://cloud.githubusercontent.com/assets/153871/14383116/39be5ddc-fd93-11e5-8d51-ba9c6d456d90.png"> Not sure how to map this using the Terraform resource, since both `comparison_operator` and `statistic` are required. Is this something Terraform should be aware of, or does the user need to "guess" those required attributes?
1.0
aws_cloudwatch_metric_alarm to support StatusCheckFailed without operator and statistic - _This issue was originally opened by @rmldsky as hashicorp/terraform#6088. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._ <hr> Not sure, but I think something changed on the AWS CloudWatch console. Previously, when setting up alarms for `StatusCheckFailed`, I am pretty sure there were operators to pick from for this metric. Now it is predefined for the user, hence no operator to choose from. <img width="449" alt="screenshot 2016-04-08 14 05 57" src="https://cloud.githubusercontent.com/assets/153871/14383116/39be5ddc-fd93-11e5-8d51-ba9c6d456d90.png"> Not sure how to map this using the Terraform resource, since both `comparison_operator` and `statistic` are required. Is this something Terraform should be aware of, or does the user need to "guess" those required attributes?
non_test
aws cloudwatch metric alarm to support statuscheckfailed without operator and statistic this issue was originally opened by rmldsky as hashicorp terraform it was migrated here as part of the the original body of the issue is below not sure but i think something changed on the aws cloudwatch console previously when setting up alarms for statuscheckfailed i am pretty sure there were operators to pick from for this metric now it is predefined for the user hence no operator to choose from img width alt screenshot src not sure how to map this using the terraform resource since both comparison operator and statistic are required is this something terraform should be aware of or does the user need to guess those required attributes
0
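For illustration, the following is what the console's predefined `StatusCheckFailed` alarm plausibly corresponds to once the hidden choices are spelled out, expressed with boto3 since all examples in this document use Python. The `Maximum` statistic and the `>= 1` threshold are assumptions inferred from the screenshot, not documented behaviour; in Terraform the same fields map onto `comparison_operator` and `statistic` of `aws_cloudwatch_metric_alarm`.

```python
# Hedged sketch: an explicit StatusCheckFailed alarm. Statistic/threshold
# values are assumptions about what the console preconfigures.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="status-check-failed-example",  # hypothetical name
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",                      # assumed console default
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    Threshold=1,
    Period=60,
    EvaluationPeriods=2,
)
```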
265,997
23,215,203,279
IssuesEvent
2022-08-02 13:35:14
eclipse-iceoryx/iceoryx
https://api.github.com/repos/eclipse-iceoryx/iceoryx
closed
Automate check for test cases to have UUIDs
enhancement test
## Brief feature description It's quite error prone to manually check if new test cases have unique UUIDs. The commit hooks and CI already check for unique UUIDs, and similarly it could be checked whether the count of `TEST`, `TYPED_TEST` and `TIMING_TEST` strings matches the count of `::testing::Test::RecordProperty` strings. ## Detailed information It seems like the following simple grep commands can be used for a sanity check ```console grep -rn --include="*.cpp" -e "^\(TEST\|TYPED_TEST\|TIMING_TEST\)" iceoryx_* | wc -l grep -rn --include="*.cpp" TYPED_TEST_SUITE iceoryx_* | wc -l grep -rn --include="*.cpp" "::testing::Test::RecordProperty" iceoryx_* | wc -l ``` The count from the first grep call minus the count from the second (the `TYPED_TEST_SUITE` declarations also match the first pattern but are not test cases) matches the count from the third. ## Tasks - [ ] add this check to the CI - [ ] add this check to the git hooks - [ ] remove the task to check for unique UUIDs from the PR template
1.0
Automate check for test cases to have UUIDs - ## Brief feature description It's quite error prone to manually check if new test cases have unique UUIDs. The commit hooks and CI already check for unique UUIDs, and similarly it could be checked whether the count of `TEST`, `TYPED_TEST` and `TIMING_TEST` strings matches the count of `::testing::Test::RecordProperty` strings. ## Detailed information It seems like the following simple grep commands can be used for a sanity check ```console grep -rn --include="*.cpp" -e "^\(TEST\|TYPED_TEST\|TIMING_TEST\)" iceoryx_* | wc -l grep -rn --include="*.cpp" TYPED_TEST_SUITE iceoryx_* | wc -l grep -rn --include="*.cpp" "::testing::Test::RecordProperty" iceoryx_* | wc -l ``` The count from the first grep call minus the count from the second (the `TYPED_TEST_SUITE` declarations also match the first pattern but are not test cases) matches the count from the third. ## Tasks - [ ] add this check to the CI - [ ] add this check to the git hooks - [ ] remove the task to check for unique UUIDs from the PR template
test
automate check for test cases to have uuids brief feature description it s quite error prone to manually check if new test cases have unique uuids the commit hooks and ci already check for unique uuids and the similarly it could be checked whether the count of test typed test and timing test strings matches the count of testing test recordproperty strings detailed information it seems like the following simple grep commands can be used for a sanity check console grep rn include cpp e test typed test timing test iceoryx wc l grep rn include cpp typed test suite iceoryx wc l grep rn include cpp testing test recordproperty iceoryx wc l the number of the first grep call minus the one from the second does match the number of the third one tasks add this check to the ci add this check to the git hooks remove the task to check for unique uuids from the pr template
1
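A minimal Python sketch of the same sanity check the issue performs with grep, assuming it runs from the repository root; the directory glob is an assumption:

```python
# Rough Python equivalent of the grep-based sanity check from the issue:
# count TEST/TYPED_TEST/TIMING_TEST definitions (minus TYPED_TEST_SUITE
# declarations, which share the TYPED_TEST prefix) and compare against
# RecordProperty calls.
import pathlib
import re

TEST_RE = re.compile(r"^(TEST|TYPED_TEST|TIMING_TEST)", re.MULTILINE)
SUITE_RE = re.compile(r"TYPED_TEST_SUITE")
UUID_RE = re.compile(r"::testing::Test::RecordProperty")

tests = suites = uuids = 0
for cpp in pathlib.Path(".").glob("iceoryx_*/**/*.cpp"):
    text = cpp.read_text(errors="ignore")
    tests += len(TEST_RE.findall(text))
    suites += len(SUITE_RE.findall(text))
    uuids += len(UUID_RE.findall(text))

if tests - suites != uuids:
    raise SystemExit(f"{tests - suites} test cases but {uuids} UUID properties")
```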
58,498
7,156,833,017
IssuesEvent
2018-01-26 17:38:03
Munish7986/Laplap-Customer
https://api.github.com/repos/Munish7986/Laplap-Customer
closed
Sign up with google>fb>>> Firstname, lastname, email should be prefetched
As Designed
Sign up with google>fb>>> Firstname, lastname, email should be prefetched. currently it is not ![screenshot_2018-01-07-20-13-03-387_com lapalap 1](https://user-images.githubusercontent.com/19631677/34650430-45bd473c-f3e7-11e7-83fa-135841225344.png)
1.0
Sign up with google>fb>>> Firstname, lastname, email should be prefetched - Sign up with google>fb>>> Firstname, lastname, email should be prefetched. currently it is not ![screenshot_2018-01-07-20-13-03-387_com lapalap 1](https://user-images.githubusercontent.com/19631677/34650430-45bd473c-f3e7-11e7-83fa-135841225344.png)
non_test
sign up with google fb firstname lastname email should be prefetched sign up with google fb firstname lastname email should be prefetched currently it is not
0
402,051
27,349,384,527
IssuesEvent
2023-02-27 08:25:29
dan-koller/Spring-Anti-Fraud-System
https://api.github.com/repos/dan-koller/Spring-Anti-Fraud-System
closed
Separate api usage examples from main README
documentation
Improve readability by putting the api usage examples in an extra document in a `docs` folder. So everyone that is interested can look these up. The main readme then becomes cleaner and much shorter.
1.0
Separate api usage examples from main README - Improve readability by putting the api usage examples in an extra document in a `docs` folder. So everyone that is interested can look these up. The main readme then becomes cleaner and much shorter.
non_test
separate api usage examples from main readme improve readability by putting the api usage examples in an extra document in a docs folder so everyone that is interested can look these up the main readme then becomes cleaner and much shorter
0
3,085
3,329,418,403
IssuesEvent
2015-11-11 01:53:51
zerotier/ZeroTierOne
https://api.github.com/repos/zerotier/ZeroTierOne
closed
Diagnostic improvements / diagnostic system
usability
An automated diagnostic system will eventually be needed. It should be able to test basic connectivity and examine OS configuration to determine if there are any obvious problems like: (1) Problem with IP configuration (2) Conflicts in routing table (3) Firewall rules preventing communication (4) Network interface error or installation problem Note that this shouldn't be like Microsoft's diagnostics wizards in that it should actually work or at least tell the user something useful.
True
Diagnostic improvements / diagnostic system - An automated diagnostic system will eventually be needed. It should be able to test basic connectivity and examine OS configuration to determine if there are any obvious problems like: (1) Problem with IP configuration (2) Conflicts in routing table (3) Firewall rules preventing communication (4) Network interface error or installation problem Note that this shouldn't be like Microsoft's diagnostics wizards in that it should actually work or at least tell the user something useful.
non_test
diagnostic improvements diagnostic system an automated diagnostic system will eventually be needed it should be able to test basic connectivity and examine os configuration to determine if there are any obvious problems like problem with ip configuration conflicts in routing table firewall rules preventing communication network interface error or installation problem note that this shouldn t be like microsoft s diagnostics wizards in that it should actually work or at least tell the user something useful
0
743,418
25,897,834,526
IssuesEvent
2022-12-15 00:58:09
nabu-catalog/nabu
https://api.github.com/repos/nabu-catalog/nabu
closed
adding more fields to search function
waiting-for-feedback Priority search
We should be able to search on any field to allow us (in Advanced search) to locate all audio files or video files, etc
1.0
adding more fields to search function - We should be able to search on any field to allow us (in Advanced search) to locate all audio files or video files, etc
non_test
adding more fields to search function we should be able to search on any field to allow us in advanced search to locate all audio files or video files etc
0
31,799
4,725,714,854
IssuesEvent
2016-10-18 07:48:41
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
Vagrant Test Failure: qa:vagrant:vagrantFedora22#up
test
Build: (https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+packaging-tests/344/console) Failure: :qa:vagrant:vagrantFedora22#up (Thread[main,5,main]) started. :qa:vagrant:vagrantFedora22#up Executing task ':qa:vagrant:vagrantFedora22#up' (up-to-date check took 0.0 secs) due to: Task has not declared any outputs. Starting process 'command 'vagrant''. Working directory: /var/lib/jenkins/workspace/elastic+elasticsearch+master+packaging-tests/qa/vagrant Command: vagrant up fedora-22 --provision --provider virtualbox Successfully started process 'command 'vagrant'' The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong. :qa:vagrant:vagrantFedora22#up FAILED :qa:vagrant:vagrantFedora22#up (Thread[main,5,main]) completed. Took 50.772 secs. FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':qa:vagrant:vagrantFedora22#up'. > Process 'command 'vagrant'' finished with non-zero exit value 1 BUILD FAILED Total time: 21 mins 5.763 secs Stopped 0 compiler daemon(s). * Try: Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. <<<<<<<<<<<< SCRIPT EXECUTION END <<<<<<<<<<<< DURATION: 1274811ms STDOUT: 180704 bytes STDERR: 1069 bytes WRAPPED PROCESS: FAILURE (1) BUILD: https://5086a1f436ee16623a447bdf25881bbc.us-east-1.aws.found.io:9243/build-1453259317148/t/20160503184620-1C840FDD NOTIFYING SLACK MAILING: dev+build-elasticsearch@e***.co Build step 'Execute shell' marked build as failure Sending e-mails to: infra-root+build@elastic.co Finished: FAILURE
1.0
Vagrant Test Failure: qa:vagrant:vagrantFedora22#up - Build: (https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+packaging-tests/344/console) Failure: :qa:vagrant:vagrantFedora22#up (Thread[main,5,main]) started. :qa:vagrant:vagrantFedora22#up Executing task ':qa:vagrant:vagrantFedora22#up' (up-to-date check took 0.0 secs) due to: Task has not declared any outputs. Starting process 'command 'vagrant''. Working directory: /var/lib/jenkins/workspace/elastic+elasticsearch+master+packaging-tests/qa/vagrant Command: vagrant up fedora-22 --provision --provider virtualbox Successfully started process 'command 'vagrant'' The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong. :qa:vagrant:vagrantFedora22#up FAILED :qa:vagrant:vagrantFedora22#up (Thread[main,5,main]) completed. Took 50.772 secs. FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':qa:vagrant:vagrantFedora22#up'. > Process 'command 'vagrant'' finished with non-zero exit value 1 BUILD FAILED Total time: 21 mins 5.763 secs Stopped 0 compiler daemon(s). * Try: Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. <<<<<<<<<<<< SCRIPT EXECUTION END <<<<<<<<<<<< DURATION: 1274811ms STDOUT: 180704 bytes STDERR: 1069 bytes WRAPPED PROCESS: FAILURE (1) BUILD: https://5086a1f436ee16623a447bdf25881bbc.us-east-1.aws.found.io:9243/build-1453259317148/t/20160503184620-1C840FDD NOTIFYING SLACK MAILING: dev+build-elasticsearch@e***.co Build step 'Execute shell' marked build as failure Sending e-mails to: infra-root+build@elastic.co Finished: FAILURE
test
vagrant test failure qa vagrant up build failure qa vagrant up thread started qa vagrant up executing task qa vagrant up up to date check took secs due to task has not declared any outputs starting process command vagrant working directory var lib jenkins workspace elastic elasticsearch master packaging tests qa vagrant command vagrant up fedora provision provider virtualbox successfully started process command vagrant the ssh command responded with a non zero exit status vagrant assumes that this means the command failed the output for this command should be in the log above please read the output to determine what went wrong qa vagrant up failed qa vagrant up thread completed took secs failure build failed with an exception what went wrong execution failed for task qa vagrant up process command vagrant finished with non zero exit value build failed total time mins secs stopped compiler daemon s try run with stacktrace option to get the stack trace run with debug option to get more log output script execution end duration stdout bytes stderr bytes wrapped process failure build notifying slack mailing dev build elasticsearch e co build step execute shell marked build as failure sending e mails to infra root build elastic co finished failure
1
5,617
20,241,248,315
IssuesEvent
2022-02-14 09:29:50
keptn/keptn
https://api.github.com/repos/keptn/keptn
closed
Use docker registry manifests instead of full images for re-tagging
type:chore automation next-sprint estimate: 2 performance area:devops
Currently, unchanged docker images from 'build-everything' builds are just pulled, retagged and pushed. This could be simplified by directly interfacing with the Docker Registry API. For retagging, just the respective docker image's manifest would need to be downloaded and then uploaded again with a new tag. Manifests are only kilobytes and therefore much smaller than full docker images. This would speed up the pipeline for all the unchanged images in every 'build-everything' build. Depends on #4684 References: https://github.com/keptn/keptn/blob/master/gh-actions-scripts/cleanup_docker_images.sh#L44 https://dille.name/blog/2018/09/20/how-to-tag-docker-images-without-pulling-them/ https://github.com/estesp/manifest-tool
1.0
Use docker registry manifests instead of full images for re-tagging - Currently, unchanged docker images from 'build-everything' builds are just pulled, retagged and pushed. This could be simplified by directly interfacing with the Docker Registry API. For retagging, just the respective docker image's manifest would need to be downloaded and then uploaded again with a new tag. Manifests are only kilobytes and therefore much smaller than full docker images. This would speed up the pipeline for all the unchanged images in every 'build-everything' build. Depends on #4684 References: https://github.com/keptn/keptn/blob/master/gh-actions-scripts/cleanup_docker_images.sh#L44 https://dille.name/blog/2018/09/20/how-to-tag-docker-images-without-pulling-them/ https://github.com/estesp/manifest-tool
non_test
use docker registry manifests instead of full images for re tagging currently unchanged docker images from build everything builds are just pulled retagged and pushed this could be simplified by directly interfacing with the docker registry api for retagging just the respective docker image s manifest would need to be downloaded and then uploaded again with a new tag manifests are only kilobytes and therefore much smaller that full docker images this would speed up the pipeline for all the unchanged images in every build everything build depends on references
0
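For illustration, a minimal sketch of the manifest-based retag the record proposes, against the Docker Registry HTTP API v2 (`GET`/`PUT /v2/<name>/manifests/<reference>` are the documented endpoints; the registry URL, repository, tags, and the already-authenticated session are assumptions):

```python
# Minimal sketch of tag-by-manifest against the Docker Registry HTTP API v2.
import requests

REGISTRY = "https://registry.example.com"   # hypothetical registry
REPO = "keptn/some-service"                 # hypothetical repository
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

# Download only the manifest (a few KiB), not the image layers...
resp = requests.get(
    f"{REGISTRY}/v2/{REPO}/manifests/0.8.0",
    headers={"Accept": MANIFEST_V2},
)
resp.raise_for_status()

# ...and re-upload it under the new tag; the layers are already present.
requests.put(
    f"{REGISTRY}/v2/{REPO}/manifests/0.8.1",
    headers={"Content-Type": MANIFEST_V2},
    data=resp.content,
).raise_for_status()
```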
24,801
17,787,454,706
IssuesEvent
2021-08-31 12:48:47
deckhouse/deckhouse
https://api.github.com/repos/deckhouse/deckhouse
opened
"remove_csi_taints" did not execute even though it ought to
type/bug area/cluster-and-infrastructure
[Hook](https://github.com/deckhouse/deckhouse/blob/main/modules/040-node-manager/hooks/remove_csi_taints.go) did not execute. After deleting the deckhouse Pod, the problem went away. It could be a problem with filtering or with ExecuteHookOnEvents/Execution parameters.
1.0
"remove_csi_taints" did not execute even though it ought to - [Hook](https://github.com/deckhouse/deckhouse/blob/main/modules/040-node-manager/hooks/remove_csi_taints.go) did not execute. After deleting the deckhouse Pod, the problem went away. It could be a problem with filtering or with ExecuteHookOnEvents/Execution parameters.
non_test
remove csi taints did not execute even though it ought to did not execute after deleting the deckhouse pod the problem went away it could be a problem with filtering or with executehookonevents execution parameters
0
233,976
19,088,900,005
IssuesEvent
2021-11-29 09:52:18
openshift/odo
https://api.github.com/repos/openshift/odo
closed
Refactor test-cmd-devfile-watch
area/testing
/kind tests ## Acceptance Criteria - [ ] test-cmd-devfile-watch should use new test approach and run successfully.
1.0
Refactor test-cmd-devfile-watch - /kind tests ## Acceptance Criteria - [ ] test-cmd-devfile-watch should use new test approach and run successfully.
test
refactor test cmd devfile watch kind tests acceptance criteria test cmd devfile watch should use new test approach and run successfully
1
76,292
7,524,060,802
IssuesEvent
2018-04-13 05:01:02
EyeSeeTea/CNMApp
https://api.github.com/repos/EyeSeeTea/CNMApp
closed
Add Tested or RDT Stock out to first VMW screen
complexity - med (1-5hr) testing type - feature
## User Report ## ScreenShot ![file](https://raw.githubusercontent.com/EyeSeeTeaBotTest/snapshots/master/android_screenshot267883270.jpg) ## Device Info ``` Time Stamp: 2018-03-30T11:33:04 UTC App Version: 0.3.0 (52) Install Source: Package Installer Android Version: 7.0 (24) Device Manufacturer: SAMSUNG Device Model: SM-G930F Display Resolution: 1920x1080 Display Density (Actual): 480dpi Display Density (Bucket) xxhdpi --------------------- ```
1.0
Add Tested or RDT Stock out to first VMW screen - ## User Report ## ScreenShot ![file](https://raw.githubusercontent.com/EyeSeeTeaBotTest/snapshots/master/android_screenshot267883270.jpg) ## Device Info ``` Time Stamp: 2018-03-30T11:33:04 UTC App Version: 0.3.0 (52) Install Source: Package Installer Android Version: 7.0 (24) Device Manufacturer: SAMSUNG Device Model: SM-G930F Display Resolution: 1920x1080 Display Density (Actual): 480dpi Display Density (Bucket) xxhdpi --------------------- ```
test
add tested or rdt stock out to first vmw screen user report screenshot device info time stamp utc app version install source package installer android version device manufacturer samsung device model sm display resolution display density actual display density bucket xxhdpi
1
254,068
27,343,209,046
IssuesEvent
2023-02-27 01:03:59
MidnightBSD/src
https://api.github.com/repos/MidnightBSD/src
reopened
CVE-2019-17594 (Medium) detected in buffalo-gplncurses-5.9
security vulnerability
## CVE-2019-17594 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>buffalo-gplncurses-5.9</b></p></summary> <p> <p>ARM framework - IDE</p> <p>Library home page: <a href=https://sourceforge.net/projects/buffalo-gpl/>https://sourceforge.net/projects/buffalo-gpl/</a></p> <p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p> <p>Found in base branches: <b>stable/2.1, stable/2.2</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/contrib/ncurses/ncurses/tinfo/comp_hash.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/contrib/ncurses/ncurses/tinfo/comp_hash.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/contrib/ncurses/ncurses/tinfo/comp_hash.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> There is a heap-based buffer over-read in the _nc_find_entry function in tinfo/comp_hash.c in the terminfo library in ncurses before 6.1-20191012. <p>Publish Date: 2019-10-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17594>CVE-2019-17594</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-17594">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-17594</a></p> <p>Release Date: 2019-10-14</p> <p>Fix Resolution: 6.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-17594 (Medium) detected in buffalo-gplncurses-5.9 - ## CVE-2019-17594 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>buffalo-gplncurses-5.9</b></p></summary> <p> <p>ARM framework - IDE</p> <p>Library home page: <a href=https://sourceforge.net/projects/buffalo-gpl/>https://sourceforge.net/projects/buffalo-gpl/</a></p> <p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p> <p>Found in base branches: <b>stable/2.1, stable/2.2</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/contrib/ncurses/ncurses/tinfo/comp_hash.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/contrib/ncurses/ncurses/tinfo/comp_hash.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/contrib/ncurses/ncurses/tinfo/comp_hash.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> There is a heap-based buffer over-read in the _nc_find_entry function in tinfo/comp_hash.c in the terminfo library in ncurses before 6.1-20191012. <p>Publish Date: 2019-10-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17594>CVE-2019-17594</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-17594">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-17594</a></p> <p>Release Date: 2019-10-14</p> <p>Fix Resolution: 6.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in buffalo gplncurses cve medium severity vulnerability vulnerable library buffalo gplncurses arm framework ide library home page a href found in head commit a href found in base branches stable stable vulnerable source files contrib ncurses ncurses tinfo comp hash c contrib ncurses ncurses tinfo comp hash c contrib ncurses ncurses tinfo comp hash c vulnerability details there is a heap based buffer over read in the nc find entry function in tinfo comp hash c in the terminfo library in ncurses before publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
220,259
24,564,791,306
IssuesEvent
2022-10-13 01:13:06
snowdensb/nifi
https://api.github.com/repos/snowdensb/nifi
opened
CVE-2022-37599 (Medium) detected in loader-utils-1.2.3.tgz
security vulnerability
## CVE-2022-37599 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-1.2.3.tgz</b></p></summary> <p>utils for webpack loaders</p> <p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz</a></p> <p> Dependency Hierarchy: - babel-loader-8.0.5.tgz (Root Library) - :x: **loader-utils-1.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/nifi/commit/d9bab7423d2f0a27e478e0a225fccf352baa0cf2">d9bab7423d2f0a27e478e0a225fccf352baa0cf2</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js. <p>Publish Date: 2022-10-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37599>CVE-2022-37599</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
True
CVE-2022-37599 (Medium) detected in loader-utils-1.2.3.tgz - ## CVE-2022-37599 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-1.2.3.tgz</b></p></summary> <p>utils for webpack loaders</p> <p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz</a></p> <p> Dependency Hierarchy: - babel-loader-8.0.5.tgz (Root Library) - :x: **loader-utils-1.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/nifi/commit/d9bab7423d2f0a27e478e0a225fccf352baa0cf2">d9bab7423d2f0a27e478e0a225fccf352baa0cf2</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js. <p>Publish Date: 2022-10-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37599>CVE-2022-37599</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
non_test
cve medium detected in loader utils tgz cve medium severity vulnerability vulnerable library loader utils tgz utils for webpack loaders library home page a href dependency hierarchy babel loader tgz root library x loader utils tgz vulnerable library found in head commit a href found in base branch main vulnerability details a regular expression denial of service redos flaw was found in function interpolatename in interpolatename js in webpack loader utils via the resourcepath variable in interpolatename js publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
0
632,350
20,193,237,659
IssuesEvent
2022-02-11 08:13:41
markusa/ietf-multipath-dccp
https://api.github.com/repos/markusa/ietf-multipath-dccp
closed
Define scheduling and reordering not being part of the spec
Priority -03
... but there are mechanisms, like sequencing schemes, which facilitate these
1.0
Define scheduling and reordering not being part of the spec - ... but there are mechanisms, like sequencing schemes, which facilitate these
non_test
define scheduling and reordering not being part of the spec but there are mechanisms like sequencing schemes which facilitates these
0
14,949
3,437,305,964
IssuesEvent
2015-12-13 03:24:13
scikit-beam/scikit-beam
https://api.github.com/repos/scikit-beam/scikit-beam
closed
Direct test for skxray.constants.xrs.Reflection
Needs Test Coverage New Contributors
skxray.constants.xrs.Reflection needs a direct test. Right now it has only an indirect test through the calibration_standards dictionary
1.0
Direct test for skxray.constants.xrs.Reflection - skxray.constants.xrs.Reflection needs a direct test. Right now it has only an indirect test through the calibration_standards dictionary
test
direct test for skxray constants xrs reflection skxray constants xrs reflection needs a direct test right now it has only an indirect test through the calibration standards dictionary
1
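A hypothetical sketch of what such a direct test could look like; the constructor arguments `d`, `hkl`, and `q` are assumptions about Reflection's fields, inferred from its indirect use via the calibration_standards dictionary rather than read from the skxray source:

```python
# Hypothetical direct test; every field name here is an assumption.
from skxray.constants.xrs import Reflection

def test_reflection_direct():
    refl = Reflection(d=3.1355, hkl=(1, 1, 1), q=2.0039)
    assert refl.d == 3.1355
    assert refl.hkl == (1, 1, 1)
    assert refl.q == 2.0039
```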
249,815
21,192,965,284
IssuesEvent
2022-04-08 19:45:46
Slimefun/Slimefun4
https://api.github.com/repos/Slimefun/Slimefun4
opened
Flask of Knowledge can fill up water
🐞 Bug Report 🎯 Needs testing
### ❗ Checklist - [X] I am using the official english version of Slimefun and did not modify the jar. - [X] I am using an up to date "DEV" (not "RC") version of Slimefun. - [X] I am aware that issues related to Slimefun addons need to be reported on their bug trackers and not here. - [X] I searched for similar open issues and could not find an existing bug report on this. ### 📍 Description If you don't have any exp levels, you can use the Flask of Knowledge like a normal bottle and fill it up with water. ### 📑 Reproduction Steps 1. Have no exp levels on you. 2. Take the Flask of Knowledge and right-click on water. 3. See it fill with water. ### 💡 Expected Behavior It should not fill up with water. ### 📷 Screenshots / Videos https://user-images.githubusercontent.com/26039249/162515971-6b177d52-9fae-4bc9-a7da-23d5d0279c24.mp4 ### 📜 Server Log _No response_ ### 📂 `/error-reports/` folder _No response_ ### 💻 Server Software Paper ### 🎮 Minecraft Version 1.18.x ### ⭐ Slimefun version ![image](https://user-images.githubusercontent.com/26039249/162513799-779ea737-c6a7-4231-9d0a-549911da409b.png) ### 🧭 Other plugins _No response_
1.0
Flask of Knowledge can fill up water - ### ❗ Checklist - [X] I am using the official english version of Slimefun and did not modify the jar. - [X] I am using an up to date "DEV" (not "RC") version of Slimefun. - [X] I am aware that issues related to Slimefun addons need to be reported on their bug trackers and not here. - [X] I searched for similar open issues and could not find an existing bug report on this. ### 📍 Description If you don't have any exp levels, you can use the Flask of Knowledge like a normal bottle and fill it up with water. ### 📑 Reproduction Steps 1. Have no exp levels on you. 2. Take the Flask of Knowledge and right-click on water. 3. See it fill with water. ### 💡 Expected Behavior It should not fill up with water. ### 📷 Screenshots / Videos https://user-images.githubusercontent.com/26039249/162515971-6b177d52-9fae-4bc9-a7da-23d5d0279c24.mp4 ### 📜 Server Log _No response_ ### 📂 `/error-reports/` folder _No response_ ### 💻 Server Software Paper ### 🎮 Minecraft Version 1.18.x ### ⭐ Slimefun version ![image](https://user-images.githubusercontent.com/26039249/162513799-779ea737-c6a7-4231-9d0a-549911da409b.png) ### 🧭 Other plugins _No response_
test
flask of knowledge can fill up water ❗ checklist i am using the official english version of slimefun and did not modify the jar i am using an up to date dev not rc version of slimefun i am aware that issues related to slimefun addons need to be reported on their bug trackers and not here i searched for similar open issues and could not find an existing bug report on this 📍 description if you don t have any exp level you can use flask of knowledge like normal bottle can do fill up water 📑 reproduction steps don t have any exp level on you take the flask of knowledge right click on water see it fill with water 💡 expected behavior don t fill up water 📷 screenshots videos 📜 server log no response 📂 error reports folder no response 💻 server software paper 🎮 minecraft version x ⭐ slimefun version 🧭 other plugins no response
1
40,871
5,320,413,399
IssuesEvent
2017-02-14 10:25:14
khartec/waltz
https://api.github.com/repos/khartec/waltz
closed
Survey: Questions and responses DDL
DDL Change fixed (test & close)
* Table to store questions - linked to survey template * Responses linked to survey instance
1.0
Survey: Questions and responses DDL - * Table to store questions - linked to survey template * Responses linked to survey instance
test
survey questions and responses ddl table to store questions linked to survey template responses linked to survey instance
1
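A hedged sketch of the two tables this DDL change describes, using sqlite3 purely to show the relationships; every table and column name here is an assumption, not taken from the waltz schema:

```python
# Illustrative only: questions link to a survey template, responses link to
# a survey instance (and to the question they answer).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE survey_question (
    id INTEGER PRIMARY KEY,
    survey_template_id INTEGER NOT NULL,   -- question -> template
    question_text TEXT NOT NULL
);
CREATE TABLE survey_response (
    id INTEGER PRIMARY KEY,
    survey_instance_id INTEGER NOT NULL,   -- response -> instance
    question_id INTEGER NOT NULL REFERENCES survey_question(id),
    response_text TEXT
);
""")
```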
299,345
9,205,386,194
IssuesEvent
2019-03-08 10:25:31
qissue-bot/QGIS
https://api.github.com/repos/qissue-bot/QGIS
closed
unexpected line creation behaviour
Category: Digitising Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
--- Author Name: **Redmine Admin** (Redmine Admin) Original Redmine Issue: 742, https://issues.qgis.org/issues/742 Original Assignee: nobody - --- In versions prior to 0.8.1, when creating a line a right-click was used to complete the line. This would terminate the line at the last point the user left-clicked. In the current svn however, the line is terminated wherever the mouse is when the right-click occurs. If snapping is on, and after a left-click a right-click occurs without the mouse moving much then the line may have two vertices in the same location.
1.0
unexpected line creation behaviour - --- Author Name: **Redmine Admin** (Redmine Admin) Original Redmine Issue: 742, https://issues.qgis.org/issues/742 Original Assignee: nobody - --- In versions prior to 0.8.1, when creating a line a right-click was used to complete the line. This would terminate the line at the last point the user left-clicked. In the current svn however, the line is terminated wherever the mouse is when the right-click occurs. If snapping is on, and after a left-click a right-click occurs without the mouse moving much then the line may have two vertices in the same location.
non_test
unexpected line creation behaviour author name redmine admin redmine admin original redmine issue original assignee nobody in versions prior to when creating a line a right click was used to complete the line this would terminate the line at the last point the user left clicked in the current svn however the line is terminated wherever the mouse is when the right click occurs if snapping is on and after a left click a right click occurs without the mouse moving much then the line may have two vertices in the same location
0
22,721
11,714,687,832
IssuesEvent
2020-03-09 12:51:32
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
self-contained binary executables generated by dart2native are much slower.
area-vm type-performance
### Dart version: ```bash dart --version ``` ** Dart VM version: 2.7.1 (Thu Jan 23 13:02:26 2020 +0100) on "linux_x64"** ```bash uname -a ``` **Linux PC 5.5.8-arch1-1 #1 SMP PREEMPT Fri, 06 Mar 2020 00:57:33 +0000 x86_64 GNU/Linux** ### Test example: ```dart // filename: fib.dart int fib(int n) { if (n <= 1) return 1; return fib(n - 1) + fib(n - 2); } main() { print(fib(46)); } ``` ### Run directly: ```bash time dart fib.dart ``` **dart fib.dart 8.58s user 0.04s system 101% cpu 8.517 total** ### Run executable: ```bash dart2native fib.dart -o ./fib time ./fib ``` **./fib 17.22s user 0.01s system 99% cpu 17.238 total**
True
self-contained binary executables generated by dart2native are much slower. - ### Dart version: ```bash dart --version ``` ** Dart VM version: 2.7.1 (Thu Jan 23 13:02:26 2020 +0100) on "linux_x64"** ```bash uname -a ``` **Linux PC 5.5.8-arch1-1 #1 SMP PREEMPT Fri, 06 Mar 2020 00:57:33 +0000 x86_64 GNU/Linux** ### Test example: ```dart // filename: fib.dart int fib(int n) { if (n <= 1) return 1; return fib(n - 1) + fib(n - 2); } main() { print(fib(46)); } ``` ### Run directly: ```bash time dart fib.dart ``` **dart fib.dart 8.58s user 0.04s system 101% cpu 8.517 total** ### Run executable: ```bash dart2native fib.dart -o ./fib time ./fib ``` **./fib 17.22s user 0.01s system 99% cpu 17.238 total**
non_test
self contained binary executables generated by is much slower dart vertion bash dart version dart vm version thu jan on linux bash uname a linux pc smp preempt fri mar gnu linux test example dart filename fib dart int fib int n if n return return fib n fib n main print fib run directly bash time dart fib dart dart fib dart user system cpu total run executable bash fib dart o fib time fib fib user system cpu total
0
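A small Python harness (filenames assumed to match the report) that reproduces the JIT-vs-AOT comparison in one run instead of two manual `time` invocations:

```python
# Time both execution modes the issue compares: the JIT-run script and the
# dart2native-built executable. Assumes fib.dart and ./fib exist as above.
import subprocess
import time

def timed(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

jit = timed(["dart", "fib.dart"])   # JIT: the Dart VM runs the script
aot = timed(["./fib"])              # AOT: the compiled self-contained binary
print(f"dart fib.dart: {jit:.2f}s   ./fib: {aot:.2f}s   ratio: {aot / jit:.1f}x")
```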
48,235
5,950,088,796
IssuesEvent
2017-05-26 15:48:47
MarcioJales/Sperf
https://api.github.com/repos/MarcioJales/Sperf
opened
Send results only at the end
test
create a function at the end of the program that sends all the data
1.0
Send results only at the end - create a function at the end of the program that sends all the data
test
send results only at the end create a function at the end of the program that sends all the data
1
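Sperf itself is not written in Python, so the sketch below only illustrates the requested pattern: buffer every measurement and flush everything in one send when the program exits.

```python
# Illustrative pattern only (not Sperf's actual code): collect results in
# memory and send them all once, at program exit.
import atexit

_results = []

def record(sample):
    _results.append(sample)        # no network traffic per sample

def _send_all():
    payload = "\n".join(map(str, _results))
    print(f"sending {len(_results)} samples in one shot:\n{payload}")

atexit.register(_send_all)         # runs once, at the end of the program
```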
347,922
24,903,950,608
IssuesEvent
2022-10-29 02:41:22
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
opened
boot.plymouth.theme available options missing
0.kind: bug 3.skill: good-first-bug 6.topic: documentation
### Describe the bug Here you see all official themes: https://gitlab.freedesktop.org/plymouth/plymouth/-/tree/main/themes ### Steps To Reproduce Steps to reproduce the behavior: 1. ... 2. ... 3. ... ### Expected behavior A clear and concise description of what you expected to happen. ### Screenshots ![Screenshot from 2022-10-29 04-39-14](https://user-images.githubusercontent.com/91113/198782622-6fe1cf2f-ff48-4c81-8a99-2ed03b290d45.png) ### Additional context Add any other context about the problem here. ### Notify maintainers <!-- Please @ people who are in the `meta.maintainers` list of the offending package or module. If in doubt, check `git blame` for whoever last touched something. --> ### Metadata
1.0
boot.plymouth.theme available options missing - ### Describe the bug Here you see all official themes: https://gitlab.freedesktop.org/plymouth/plymouth/-/tree/main/themes ### Steps To Reproduce Steps to reproduce the behavior: 1. ... 2. ... 3. ... ### Expected behavior A clear and concise description of what you expected to happen. ### Screenshots ![Screenshot from 2022-10-29 04-39-14](https://user-images.githubusercontent.com/91113/198782622-6fe1cf2f-ff48-4c81-8a99-2ed03b290d45.png) ### Additional context Add any other context about the problem here. ### Notify maintainers <!-- Please @ people who are in the `meta.maintainers` list of the offending package or module. If in doubt, check `git blame` for whoever last touched something. --> ### Metadata
non_test
boot plymouth theme available options missing describe the bug here you see all official themes steps to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen screenshots additional context add any other context about the problem here notify maintainers please people who are in the meta maintainers list of the offending package or module if in doubt check git blame for whoever last touched something metadata
0
96,802
8,632,896,253
IssuesEvent
2018-11-22 12:13:09
SME-Issues/issues
https://api.github.com/repos/SME-Issues/issues
closed
Intent Errors (5004) - 22/11/2018
NLP Api pulse_tests
|Expression|Result| |---|---| | _// BALANCES_ |expected intent to be `query_payment` but found `confirmation`| | _// PAYMENTS_ |expected intent to be `query_payment` but found `confirmation`| | _How are we doing for cash_ |expected intent to be `query_balance` but found `query_payment`| | _How much money do we have?_ |expected intent to be `query_balance` but found `query_payment`|
1.0
Intent Errors (5004) - 22/11/2018 - |Expression|Result| |---|---| | _// BALANCES_ |expected intent to be `query_payment` but found `confirmation`| | _// PAYMENTS_ |expected intent to be `query_payment` but found `confirmation`| | _How are we doing for cash_ |expected intent to be `query_balance` but found `query_payment`| | _How much money do we have?_ |expected intent to be `query_balance` but found `query_payment`|
test
intent errors expression result balances expected intent to be query payment but found confirmation payments expected intent to be query payment but found confirmation how are we doing for cash expected intent to be query balance but found query payment how much money do we have expected intent to be query balance but found query payment
1
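The failing expressions above translate naturally into a parametrized pulse test; `classify` below is a hypothetical stand-in for the real NLP API call, left as a placeholder:

```python
# Sketch of a pulse test over the failing expressions from the report.
import pytest

def classify(expression):
    """Hypothetical placeholder: call the real NLP API and return the intent."""
    raise NotImplementedError

CASES = [
    ("// BALANCES", "query_payment"),
    ("// PAYMENTS", "query_payment"),
    ("How are we doing for cash", "query_balance"),
    ("How much money do we have?", "query_balance"),
]

@pytest.mark.parametrize("expression,expected", CASES)
def test_intent(expression, expected):
    found = classify(expression)
    assert found == expected, f"expected `{expected}` but found `{found}`"
```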
77,586
7,581,772,288
IssuesEvent
2018-04-25 00:01:35
rook/rook
https://api.github.com/repos/rook/rook
closed
CI pipeline failures have become more frequent recently
test
We've seen more failures recently in our CI pipelines. This may potentially be related to resetting the Jenkins agents recently, but that's not certain. It appears that the tests are not cleaning up some state between runs. The most common failure seen is: ``` 2018-02-27 00:29:27.792713 E | testutil: Failed to execute kubectl [create -f -] -- Failed to run stdin: kubectl [create -f -] : Error from server (AlreadyExists): error when creating "STDIN": thirdpartyresourcedatas.extensions "default" already exists --- FAIL: TestObjectStoreOnRookInstalledViaHelm (0.13s) ``` We should figure out what the root cause for the failures is and make the tests more reliable.
1.0
CI pipeline failures have become more frequent recently - We've seen more failures recently in our CI pipelines. This may potentially be related to resetting the Jenkins agents recently, but that's not certain. It appears that the tests are not cleaning up some state between runs. The most common failure seen is: ``` 2018-02-27 00:29:27.792713 E | testutil: Failed to execute kubectl [create -f -] -- Failed to run stdin: kubectl [create -f -] : Error from server (AlreadyExists): error when creating "STDIN": thirdpartyresourcedatas.extensions "default" already exists --- FAIL: TestObjectStoreOnRookInstalledViaHelm (0.13s) ``` We should figure out what the root cause for the failures is and make the tests more reliable.
test
ci pipeline failures have become more frequent recently we ve seen more failures recently in our ci pipelines this may potentially be related to resetting the jenkins agents recently but that s not certain it appears that the tests are not cleaning up some state between runs the most common failure seen is e testutil failed to execute kubectl failed to run stdin kubectl error from server alreadyexists error when creating stdin thirdpartyresourcedatas extensions default already exists fail testobjectstoreonrookinstalledviahelm we should figure out what the root cause for the failures is and make the tests more reliable
1
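One hedged mitigation for the "already exists" failure quoted above is a pre-test cleanup step; the resource name `default` comes straight from the log, everything else here is an assumption about how the test harness would invoke it:

```python
# Delete the leftover third-party resource before a test run so stale state
# from a previous run on the same Jenkins agent cannot collide.
import subprocess

def cleanup_stale_tpr():
    # --ignore-not-found makes this a no-op on a clean agent.
    subprocess.run(
        ["kubectl", "delete", "thirdpartyresourcedatas.extensions", "default",
         "--ignore-not-found"],
        check=True,
    )
```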
143,590
19,187,091,699
IssuesEvent
2021-12-05 11:42:39
World2000/Home-assignment
https://api.github.com/repos/World2000/Home-assignment
opened
CVE-2020-14062 (High) detected in jackson-databind-2.8.7.jar
security vulnerability
## CVE-2020-14062 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: Home-assignment/pom.xml</p> <p>Path to vulnerable library: itory/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.7.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/World2000/Home-assignment/commit/1c2029dc2cd8fc5d43406f3c94fdf5244270326f">1c2029dc2cd8fc5d43406f3c94fdf5244270326f</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14062>CVE-2020-14062</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-14062 (High) detected in jackson-databind-2.8.7.jar - ## CVE-2020-14062 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: Home-assignment/pom.xml</p> <p>Path to vulnerable library: itory/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.7.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/World2000/Home-assignment/commit/1c2029dc2cd8fc5d43406f3c94fdf5244270326f">1c2029dc2cd8fc5d43406f3c94fdf5244270326f</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14062>CVE-2020-14062</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file home assignment pom xml path to vulnerable library itory com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com sun org apache xalan internal lib sql jndiconnectionpool aka publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
214,667
16,603,723,000
IssuesEvent
2021-06-01 23:41:54
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
opened
[docdb] Several tests are flaky in our gcp infra
kind/failing-test
https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&class=TestUser&name=TestNonEmpty https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&class=RpcStubTest&name=TestDefaultCredentialsPropagated These seem to trace back to some `GetLoggedInUser` function failing to find the local user. I'll add some debug logging to get at least the UID info.
1.0
[docdb] Several tests are flaky in our gcp infra - https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&class=TestUser&name=TestNonEmpty https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&class=RpcStubTest&name=TestDefaultCredentialsPropagated These seem to trace back to some `GetLoggedInUser` function failing to find the local user. I'll add some debug logging to get at least the UID info.
test
several tests are flaky in our gcp infra seem to trace back to some getloggedinuser function failing to find the local user i ll add some debug logging to get at least the uid info
1
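A small illustration of why a logged-in-user lookup can fail on CI agents: if the process UID has no passwd entry, the lookup raises instead of returning a name (Python equivalent shown; the actual `GetLoggedInUser` is not Python):

```python
# Resolve the current UID to a passwd entry; a UID with no /etc/passwd entry
# raises KeyError, which is the flaky case the record describes.
import os
import pwd

uid = os.getuid()
try:
    user = pwd.getpwuid(uid).pw_name
except KeyError:
    user = f"<no passwd entry for uid {uid}>"
print(uid, user)
```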
53,001
6,289,254,390
IssuesEvent
2017-07-19 18:47:54
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
teamcity: failed tests on master: testrace/TestBackupRestoreSystemJobsProgress
Robot test-failure
The following tests appear to have failed: [#300397](https://teamcity.cockroachdb.com/viewLog.html?buildId=300397): ``` --- FAIL: testrace/TestBackupRestoreSystemJobsProgress (0.000s) Race detected! ------- Stdout: ------- W170719 18:40:40.838101 50313 server/server.go:299 [n?] all stores are configured as in-memory stores, so not setting up a temporary store. Queries with working set larger than memory will fail W170719 18:40:40.839384 50313 server/status/runtime.go:111 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I170719 18:40:40.868472 50313 server/config.go:534 [n?] 1 storage engine initialized I170719 18:40:40.868705 50313 server/config.go:536 [n?] RocksDB cache size: 512 MiB I170719 18:40:40.868755 50313 server/config.go:536 [n?] store 0: in-memory, size 100 MiB I170719 18:40:40.870446 50313 server/node.go:434 [n?] store [n0,s0] not bootstrapped I170719 18:40:40.897017 50313 server/node.go:369 [n?] **** cluster a185a4c7-e0ed-40fa-a7cd-3d3d21284208 has been created I170719 18:40:40.897146 50313 server/node.go:370 [n?] **** add additional nodes by specifying --join=127.0.0.1:60206 I170719 18:40:40.917239 50313 storage/store.go:1260 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I170719 18:40:40.940628 50313 server/node.go:447 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:1 WritesPerSecond:47.95243003854344} I170719 18:40:40.940942 50313 server/node.go:331 [n1] node ID 1 initialized I170719 18:40:40.941190 50313 gossip/gossip.go:297 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:60206" > attrs:<> locality:<> I170719 18:40:40.942073 50313 storage/stores.go:295 [n1] read 0 node addresses from persistent storage I170719 18:40:40.942377 50313 server/node.go:588 [n1] connecting to gossip network to verify cluster ID... 
I170719 18:40:40.942524 50313 server/node.go:613 [n1] node connected via gossip and verified as part of cluster "a185a4c7-e0ed-40fa-a7cd-3d3d21284208" I170719 18:40:40.947400 50313 server/node.go:385 [n1] node=1: started with [=] engine(s) and attributes [] I170719 18:40:40.980329 50411 storage/replica_command.go:2673 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2] E170719 18:40:41.016572 50412 storage/queue.go:658 [replicate,n1,s1,r1/1:/{Min-System/}] range requires a replication change, but lacks a quorum of live replicas (0/1) I170719 18:40:41.020409 50411 storage/replica_command.go:2673 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/tsd [r3] I170719 18:40:41.022860 50313 sql/executor.go:364 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:60206} I170719 18:40:41.070753 50313 server/server.go:815 [n1] starting https server at 127.0.0.1:34285 I170719 18:40:41.070878 50313 server/server.go:816 [n1] starting grpc/postgres server at 127.0.0.1:60206 I170719 18:40:41.070923 50313 server/server.go:817 [n1] advertising CockroachDB node at 127.0.0.1:60206 E170719 18:40:41.102462 50412 storage/queue.go:658 [replicate,n1,s1,r2/1:/System/{-tsd}] range requires a replication change, but lacks a quorum of live replicas (0/1) I170719 18:40:41.121193 50411 storage/replica_command.go:2673 [split,n1,s1,r3/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r4] E170719 18:40:41.219859 50543 storage/replica_proposal.go:522 [n1,s1,r3/1:/{System/tsd-Max}] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan I170719 18:40:41.229615 50411 storage/replica_command.go:2673 [split,n1,s1,r4/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r5] I170719 18:40:41.290782 50313 sql/event_log.go:101 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I170719 18:40:41.349143 50411 storage/replica_command.go:2673 [split,n1,s1,r5/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r6] I170719 18:40:41.417176 50313 sql/lease.go:367 [n1] publish: descID=12 (eventlog) version=2 mtime=2017-07-19 18:40:41.417022165 +0000 UTC I170719 18:40:41.464897 50411 storage/replica_command.go:2673 [split,n1,s1,r6/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r7] I170719 18:40:41.570660 50411 storage/replica_command.go:2673 [split,n1,s1,r7/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r8] I170719 18:40:41.641366 50313 server/server.go:951 [n1] done ensuring all necessary migrations have run I170719 18:40:41.642584 50313 server/server.go:953 [n1] serving sql connections I170719 18:40:41.683176 50411 storage/replica_command.go:2673 [split,n1,s1,r8/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r9] I170719 18:40:41.705747 50828 sql/event_log.go:101 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:60206} Attrs: Locality:} ClusterID:a185a4c7-e0ed-40fa-a7cd-3d3d21284208 StartedAt:1500489640942566356 LastUp:1500489640942566356} I170719 18:40:41.818014 50411 storage/replica_command.go:2673 [split,n1,s1,r9/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r10] W170719 18:40:42.013245 50313 server/server.go:299 [n?]
all stores are configured as in-memory stores, so not setting up a temporary store. Queries with working set larger than memory will fail W170719 18:40:42.026415 50313 server/status/runtime.go:111 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" W170719 18:40:42.061493 50313 gossip/gossip.go:1196 [n?] no incoming or outgoing connections I170719 18:40:42.088867 50313 server/config.go:534 [n?] 1 storage engine initialized I170719 18:40:42.090230 50313 server/config.go:536 [n?] RocksDB cache size: 512 MiB I170719 18:40:42.091016 50313 server/config.go:536 [n?] store 0: in-memory, size 100 MiB I170719 18:40:42.093164 50313 server/node.go:434 [n?] store [n0,s0] not bootstrapped I170719 18:40:42.093284 50313 storage/stores.go:295 [n?] read 0 node addresses from persistent storage I170719 18:40:42.093533 50313 server/node.go:588 [n?] connecting to gossip network to verify cluster ID... I170719 18:40:42.167213 50913 gossip/client.go:131 [n?] started gossip client to 127.0.0.1:60206 I170719 18:40:42.169102 50974 gossip/server.go:234 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:38533} I170719 18:40:42.172362 51000 storage/stores.go:314 [n?] wrote 1 node addresses to persistent storage I170719 18:40:42.182534 50313 server/node.go:613 [n?] node connected via gossip and verified as part of cluster "a185a4c7-e0ed-40fa-a7cd-3d3d21284208" I170719 18:40:42.199955 50313 kv/dist_sender.go:370 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I170719 18:40:42.210559 50313 server/node.go:324 [n?] new node allocated ID 2 I170719 18:40:42.210920 50313 gossip/gossip.go:297 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38533" > attrs:<> locality:<> I170719 18:40:42.211876 50313 server/node.go:385 [n2] node=2: started with [=] engine(s) and attributes [] I170719 18:40:42.227580 51013 storage/stores.go:314 [n1] wrote 1 node addresses to persistent storage I170719 18:40:42.240254 50313 sql/executor.go:364 [n2] creating distSQLPlanner with address {tcp 127.0.0.1:38533} I170719 18:40:42.266098 50313 server/server.go:815 [n2] starting https server at 127.0.0.1:52863 I170719 18:40:42.266240 50313 server/server.go:816 [n2] starting grpc/postgres server at 127.0.0.1:38533 I170719 18:40:42.266396 50313 server/server.go:817 [n2] advertising CockroachDB node at 127.0.0.1:38533 I170719 18:40:42.282476 50313 server/server.go:951 [n2] done ensuring all necessary migrations have run I170719 18:40:42.282663 50313 server/server.go:953 [n2] serving sql connections I170719 18:40:42.291133 51008 server/node.go:569 [n2] bootstrapped store [n2,s2] I170719 18:40:42.410001 51160 sql/event_log.go:101 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:38533} Attrs: Locality:} ClusterID:a185a4c7-e0ed-40fa-a7cd-3d3d21284208 StartedAt:1500489642211586320 LastUp:1500489642211586320} W170719 18:40:42.430964 50313 server/server.go:299 [n?] all stores are configured as in-memory stores, so not setting up a temporary store. Queries with working set larger than memory will fail W170719 18:40:42.442067 50313 server/status/runtime.go:111 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" W170719 18:40:42.453081 50313 gossip/gossip.go:1196 [n?] no incoming or outgoing connections I170719 18:40:42.530473 50313 server/config.go:534 [n?] 
1 storage engine initialized I170719 18:40:42.530584 50313 server/config.go:536 [n?] RocksDB cache size: 512 MiB I170719 18:40:42.530619 50313 server/config.go:536 [n?] store 0: in-memory, size 100 MiB I170719 18:40:42.532003 50313 server/node.go:434 [n?] store [n0,s0] not bootstrapped I170719 18:40:42.532096 50313 storage/stores.go:295 [n?] read 0 node addresses from persistent storage I170719 18:40:42.532213 50313 server/node.go:588 [n?] connecting to gossip network to verify cluster ID... I170719 18:40:42.724378 51175 gossip/client.go:131 [n?] started gossip client to 127.0.0.1:60206 I170719 18:40:42.726603 51252 gossip/server.go:234 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:58350} I170719 18:40:42.740877 51238 storage/stores.go:314 [n?] wrote 1 node addresses to persistent storage I170719 18:40:42.741205 50313 server/node.go:613 [n?] node connected via gossip and verified as part of cluster "a185a4c7-e0ed-40fa-a7cd-3d3d21284208" I170719 18:40:42.747986 51239 storage/stores.go:314 [n?] wrote 2 node addresses to persistent storage I170719 18:40:42.754604 50313 kv/dist_sender.go:370 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I170719 18:40:42.760330 50313 server/node.go:324 [n?] new node allocated ID 3 I170719 18:40:42.760679 50313 gossip/gossip.go:297 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:58350" > attrs:<> locality:<> I170719 18:40:42.761468 50313 server/node.go:385 [n3] node=3: started with [=] engine(s) and attributes [] I170719 18:40:42.763950 50313 sql/executor.go:364 [n3] creating distSQLPlanner with address {tcp 127.0.0.1:58350} I170719 18:40:42.803862 51168 storage/stores.go:314 [n1] wrote 2 node addresses to persistent storage I170719 18:40:42.806366 51210 storage/stores.go:314 [n2] wrote 2 node addresses to persistent storage I170719 18:40:42.807662 50313 server/server.go:815 [n3] starting https server at 127.0.0.1:47309 I170719 18:40:42.807788 50313 server/server.go:816 [n3] starting grpc/postgres server at 127.0.0.1:58350 I170719 18:40:42.807836 50313 server/server.go:817 [n3] advertising CockroachDB node at 127.0.0.1:58350 I170719 18:40:42.814891 50313 server/server.go:951 [n3] done ensuring all necessary migrations have run I170719 18:40:42.815060 50313 server/server.go:953 [n3] serving sql connections I170719 18:40:42.898843 51211 sql/event_log.go:101 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:58350} Attrs: Locality:} ClusterID:a185a4c7-e0ed-40fa-a7cd-3d3d21284208 StartedAt:1500489642761147272 LastUp:1500489642761147272} I170719 18:40:42.915698 51249 server/node.go:569 [n3] bootstrapped store [n3,s3] I170719 18:40:42.924260 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r6/1:/Table/1{1-2}] generated preemptive snapshot 81e55aca at index 16 I170719 18:40:43.053543 50669 storage/store.go:3479 [replicate,n1,s1,r6/1:/Table/1{1-2}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 6, rate-limit: 8.0 MiB/sec, 5ms I170719 18:40:43.054856 51431 storage/replica_raftstorage.go:705 [n2,s2,r6/?:{-}] applying preemptive snapshot at index 16 (id=81e55aca, encoded size=5443, 1 rocksdb batches, 6 log entries) I170719 18:40:43.056503 51431 storage/replica_raftstorage.go:713 [n2,s2,r6/?:/Table/1{1-2}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:43.063141 50669 storage/replica_command.go:3606 
[replicate,n1,s1,r6/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r6:/Table/1{1-2} [(n1,s1):1, next=2] I170719 18:40:43.090168 51197 storage/replica.go:2947 [n1,s1,r6/1:/Table/1{1-2}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:43.103760 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r8/1:/Table/1{3-4}] generated preemptive snapshot a29c2bdc at index 26 I170719 18:40:43.141603 51434 storage/raft_transport.go:453 [n2] raft transport stream to node 1 established I170719 18:40:43.268943 50669 storage/store.go:3479 [replicate,n1,s1,r8/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 69, log entries: 16, rate-limit: 8.0 MiB/sec, 9ms I170719 18:40:43.271393 51272 storage/replica_raftstorage.go:705 [n3,s3,r8/?:{-}] applying preemptive snapshot at index 26 (id=a29c2bdc, encoded size=21465, 1 rocksdb batches, 16 log entries) I170719 18:40:43.276257 51272 storage/replica_raftstorage.go:713 [n3,s3,r8/?:/Table/1{3-4}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:43.292527 50669 storage/replica_command.go:3606 [replicate,n1,s1,r8/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{3-4} [(n1,s1):1, next=2] I170719 18:40:43.390219 51514 storage/replica.go:2947 [n1,s1,r8/1:/Table/1{3-4}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:43.401263 51517 storage/raft_transport.go:453 [n3] raft transport stream to node 1 established I170719 18:40:43.403979 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 30b10ab4 at index 46 I170719 18:40:43.419086 50669 storage/store.go:3479 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 34, log entries: 36, rate-limit: 8.0 MiB/sec, 13ms I170719 18:40:43.449252 51275 storage/replica_raftstorage.go:705 [n3,s3,r1/?:{-}] applying preemptive snapshot at index 46 (id=30b10ab4, encoded size=21450, 1 rocksdb batches, 36 log entries) I170719 18:40:43.453086 51275 storage/replica_raftstorage.go:713 [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms] I170719 18:40:43.462641 50669 storage/replica_command.go:3606 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2] I170719 18:40:43.502298 51540 storage/replica.go:2947 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:43.506769 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r3/1:/System/ts{d-e}] generated preemptive snapshot 91832340 at index 24 I170719 18:40:43.550884 50669 storage/store.go:3479 [replicate,n1,s1,r3/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 911, log entries: 3, rate-limit: 8.0 MiB/sec, 43ms I170719 18:40:43.552716 51542 storage/replica_raftstorage.go:705 [n2,s2,r3/?:{-}] applying preemptive snapshot at index 24 (id=91832340, encoded size=150432, 1 rocksdb batches, 3 log entries) I170719 18:40:43.555772 51542 storage/replica_raftstorage.go:713 [n2,s2,r3/?:/System/ts{d-e}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:43.567043 50669 storage/replica_command.go:3606 [replicate,n1,s1,r3/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r3:/System/ts{d-e} [(n1,s1):1, next=2] I170719 18:40:43.624562 51537 storage/replica.go:2947 
[n1,s1,r3/1:/System/ts{d-e}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:43.628535 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] generated preemptive snapshot 3675bf7f at index 25 I170719 18:40:43.638838 50669 storage/store.go:3479 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 15, rate-limit: 8.0 MiB/sec, 9ms I170719 18:40:43.639680 51549 storage/replica_raftstorage.go:705 [n3,s3,r4/?:{-}] applying preemptive snapshot at index 25 (id=3675bf7f, encoded size=11686, 1 rocksdb batches, 15 log entries) I170719 18:40:43.641407 51549 storage/replica_raftstorage.go:713 [n3,s3,r4/?:/{System/tse-Table/System…}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:43.646391 50669 storage/replica_command.go:3606 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r4:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2] I170719 18:40:43.756963 51621 storage/replica.go:2947 [n1,s1,r4/1:/{System/tse-Table/System…}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:43.764345 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] generated preemptive snapshot 6eeaa04a at index 27 I170719 18:40:43.787622 50669 storage/store.go:3479 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 40, log entries: 17, rate-limit: 8.0 MiB/sec, 21ms I170719 18:40:43.789740 51551 storage/replica_raftstorage.go:705 [n2,s2,r5/?:{-}] applying preemptive snapshot at index 27 (id=6eeaa04a, encoded size=19426, 1 rocksdb batches, 17 log entries) I170719 18:40:43.791715 51551 storage/replica_raftstorage.go:713 [n2,s2,r5/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:43.797140 50669 storage/replica_command.go:3606 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r5:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2] I170719 18:40:43.866899 51595 storage/replica.go:2947 [n1,s1,r5/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:43.888076 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r9/1:/Table/1{4-5}] generated preemptive snapshot 938f0c8c at index 19 I170719 18:40:43.891766 50669 storage/store.go:3479 [replicate,n1,s1,r9/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 3ms I170719 18:40:43.893571 51666 storage/replica_raftstorage.go:705 [n3,s3,r9/?:{-}] applying preemptive snapshot at index 19 (id=938f0c8c, encoded size=5870, 1 rocksdb batches, 9 log entries) I170719 18:40:43.895901 51666 storage/replica_raftstorage.go:713 [n3,s3,r9/?:/Table/1{4-5}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=1ms] I170719 18:40:43.899156 50669 storage/replica_command.go:3606 [replicate,n1,s1,r9/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r9:/Table/1{4-5} [(n1,s1):1, next=2] I170719 18:40:44.010797 51701 storage/replica.go:2947 [n1,s1,r9/1:/Table/1{4-5}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:44.017768 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r7/1:/Table/1{2-3}] generated preemptive snapshot 69186943 at index 25 I170719 18:40:44.093159
50669 storage/store.go:3479 [replicate,n1,s1,r7/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 31, log entries: 15, rate-limit: 8.0 MiB/sec, 48ms I170719 18:40:44.097616 51706 storage/replica_raftstorage.go:705 [n2,s2,r7/?:{-}] applying preemptive snapshot at index 25 (id=69186943, encoded size=16662, 1 rocksdb batches, 15 log entries) I170719 18:40:44.099452 51706 storage/replica_raftstorage.go:713 [n2,s2,r7/?:/Table/1{2-3}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:44.105901 50669 storage/replica_command.go:3606 [replicate,n1,s1,r7/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/1{2-3} [(n1,s1):1, next=2] I170719 18:40:44.160416 51612 storage/replica.go:2947 [n1,s1,r7/1:/Table/1{2-3}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:44.175010 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r10/1:/{Table/15-Max}] generated preemptive snapshot 5ea961de at index 11 I170719 18:40:44.178578 50669 storage/store.go:3479 [replicate,n1,s1,r10/1:/{Table/15-Max}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 1, rate-limit: 8.0 MiB/sec, 3ms I170719 18:40:44.180008 51600 storage/replica_raftstorage.go:705 [n3,s3,r10/?:{-}] applying preemptive snapshot at index 11 (id=5ea961de, encoded size=548, 1 rocksdb batches, 1 log entries) I170719 18:40:44.181387 51600 storage/replica_raftstorage.go:713 [n3,s3,r10/?:/{Table/15-Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:44.192858 50669 storage/replica_command.go:3606 [replicate,n1,s1,r10/1:/{Table/15-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/{Table/15-Max} [(n1,s1):1, next=2] I170719 18:40:44.272391 51763 storage/replica.go:2947 [n1,s1,r10/1:/{Table/15-Max}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:44.276880 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r2/1:/System/{-tsd}] generated preemptive snapshot 10b563e8 at index 40 I170719 18:40:44.292607 50669 storage/store.go:3479 [replicate,n1,s1,r2/1:/System/{-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 32, log entries: 3, rate-limit: 8.0 MiB/sec, 12ms I170719 18:40:44.294495 51665 storage/replica_raftstorage.go:705 [n2,s2,r2/?:{-}] applying preemptive snapshot at index 40 (id=10b563e8, encoded size=74635, 1 rocksdb batches, 3 log entries) I170719 18:40:44.296820 51665 storage/replica_raftstorage.go:713 [n2,s2,r2/?:/System/{-tsd}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=1ms] I170719 18:40:44.300526 50669 storage/replica_command.go:3606 [replicate,n1,s1,r2/1:/System/{-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-tsd} [(n1,s1):1, next=2] I170719 18:40:44.368251 51737 storage/replica.go:2947 [n1,s1,r2/1:/System/{-tsd}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:44.377803 50669 storage/queue.go:725 [n1,replicate] purgatory is now empty I170719 18:40:44.382176 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r6/1:/Table/1{1-2}] generated preemptive snapshot 4bea8438 at index 21 I170719 18:40:44.392734 50412 storage/store.go:3479 [replicate,n1,s1,r6/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 11, rate-limit: 8.0 MiB/sec, 8ms I170719 18:40:44.401913 51768 storage/replica_raftstorage.go:705 [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=4bea8438, encoded size=8431, 1 rocksdb batches, 11 
log entries) I170719 18:40:44.405612 51768 storage/replica_raftstorage.go:713 [n3,s3,r6/?:/Table/1{1-2}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:44.426634 50412 storage/replica_command.go:3606 [replicate,n1,s1,r6/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r6:/Table/1{1-2} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:44.522097 51756 storage/replica.go:2947 [n1,s1,r6/1:/Table/1{1-2}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:44.538790 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r3/1:/System/ts{d-e}] generated preemptive snapshot dedc3551 at index 27 I170719 18:40:44.560014 50412 storage/store.go:3479 [replicate,n1,s1,r3/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 912, log entries: 6, rate-limit: 8.0 MiB/sec, 20ms I170719 18:40:44.583255 51678 storage/replica_raftstorage.go:705 [n3,s3,r3/?:{-}] applying preemptive snapshot at index 27 (id=dedc3551, encoded size=152532, 1 rocksdb batches, 6 log entries) I170719 18:40:44.602153 51678 storage/replica_raftstorage.go:713 [n3,s3,r3/?:/System/ts{d-e}] applied preemptive snapshot in 19ms [clear=0ms batch=0ms entries=16ms commit=1ms] I170719 18:40:44.610959 50412 storage/replica_command.go:3606 [replicate,n1,s1,r3/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r3:/System/ts{d-e} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:44.683592 51831 storage/replica.go:2947 [n1,s1,r3/1:/System/ts{d-e}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:44.692645 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r8/1:/Table/1{3-4}] generated preemptive snapshot 1e9608b1 at index 51 I170719 18:40:44.699159 50412 storage/store.go:3479 [replicate,n1,s1,r8/1:/Table/1{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 131, log entries: 41, rate-limit: 8.0 MiB/sec, 6ms I170719 18:40:44.702296 51772 storage/replica_raftstorage.go:705 [n2,s2,r8/?:{-}] applying preemptive snapshot at index 51 (id=1e9608b1, encoded size=51779, 1 rocksdb batches, 41 log entries) I170719 18:40:44.704948 51772 storage/replica_raftstorage.go:713 [n2,s2,r8/?:/Table/1{3-4}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:44.713856 50412 storage/replica_command.go:3606 [replicate,n1,s1,r8/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r8:/Table/1{3-4} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:44.775630 51803 storage/replica.go:2947 [n1,s1,r8/1:/Table/1{3-4}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719 18:40:44.808240 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r7/1:/Table/1{2-3}] generated preemptive snapshot 5b1246fb at index 28 I170719 18:40:44.814480 50412 storage/store.go:3479 [replicate,n1,s1,r7/1:/Table/1{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 32, log entries: 18, rate-limit: 8.0 MiB/sec, 5ms I170719 18:40:44.816750 51878 storage/replica_raftstorage.go:705 [n3,s3,r7/?:{-}] applying preemptive snapshot at index 28 (id=5b1246fb, encoded size=18693, 1 rocksdb batches, 18 log entries) I170719 18:40:44.820136 51878 storage/replica_raftstorage.go:713 [n3,s3,r7/?:/Table/1{2-3}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms] I170719 18:40:44.831633 50412 storage/replica_command.go:3606 [replicate,n1,s1,r7/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/1{2-3} 
[(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:44.909251 51909 storage/replica.go:2947 [n1,s1,r7/1:/Table/1{2-3}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:44.920047 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] generated preemptive snapshot b72fcd3d at index 30 I170719 18:40:44.933548 50412 storage/store.go:3479 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 41, log entries: 20, rate-limit: 8.0 MiB/sec, 11ms I170719 18:40:44.935452 51924 storage/replica_raftstorage.go:705 [n3,s3,r5/?:{-}] applying preemptive snapshot at index 30 (id=b72fcd3d, encoded size=21457, 1 rocksdb batches, 20 log entries) I170719 18:40:44.941615 51924 storage/replica_raftstorage.go:713 [n3,s3,r5/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:44.955497 50412 storage/replica_command.go:3606 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r5:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:45.029256 51918 storage/replica.go:2947 [n1,s1,r5/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:45.040591 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r2/1:/System/{-tsd}] generated preemptive snapshot a3676c17 at index 43 I170719 18:40:45.060519 50412 storage/store.go:3479 [replicate,n1,s1,r2/1:/System/{-tsd}] streamed snapshot to (n3,s3):?: kv pairs: 33, log entries: 6, rate-limit: 8.0 MiB/sec, 15ms I170719 18:40:45.083606 51863 storage/replica_raftstorage.go:705 [n3,s3,r2/?:{-}] applying preemptive snapshot at index 43 (id=a3676c17, encoded size=76678, 1 rocksdb batches, 6 log entries) I170719 18:40:45.144547 51863 storage/replica_raftstorage.go:713 [n3,s3,r2/?:/System/{-tsd}] applied preemptive snapshot in 61ms [clear=0ms batch=29ms entries=27ms commit=3ms] I170719 18:40:45.156995 50412 storage/replica_command.go:3606 [replicate,n1,s1,r2/1:/System/{-tsd}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:/System/{-tsd} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:45.228832 51884 storage/replica.go:2947 [n1,s1,r2/1:/System/{-tsd}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:45.240561 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] generated preemptive snapshot b2348277 at index 28 I170719 18:40:45.255761 50412 storage/store.go:3479 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 13, log entries: 18, rate-limit: 8.0 MiB/sec, 11ms I170719 18:40:45.257762 51988 storage/replica_raftstorage.go:705 [n2,s2,r4/?:{-}] applying preemptive snapshot at index 28 (id=b2348277, encoded size=13774, 1 rocksdb batches, 18 log entries) I170719 18:40:45.260475 51988 storage/replica_raftstorage.go:713 [n2,s2,r4/?:/{System/tse-Table/System…}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:45.266517 50412 storage/replica_command.go:3606 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r4:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.336241 51959 storage/replica.go:2947 [n1,s1,r4/1:/{System/tse-Table/System…}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719
18:40:45.357539 51386 storage/replica_proposal.go:449 [n3,s3,r6/3:/Table/1{1-2}] new range lease repl=(n3,s3):3 start=1500489645.346104402,0 epo=1 pro=1500489645.346113302,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:45.363770 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot ecbcb4c9 at index 78 I170719 18:40:45.369321 50412 storage/store.go:3479 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 68, rate-limit: 8.0 MiB/sec, 5ms I170719 18:40:45.372201 52006 storage/replica_raftstorage.go:705 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 78 (id=ecbcb4c9, encoded size=37979, 1 rocksdb batches, 68 log entries) I170719 18:40:45.377124 52006 storage/replica_raftstorage.go:713 [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms] I170719 18:40:45.399349 50412 storage/replica_command.go:3606 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.481218 51871 storage/replica.go:2947 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719 18:40:45.507428 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r10/1:/{Table/15-Max}] generated preemptive snapshot 146490fd at index 16 I170719 18:40:45.511341 50412 storage/store.go:3479 [replicate,n1,s1,r10/1:/{Table/15-Max}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms I170719 18:40:45.513151 51993 storage/replica_raftstorage.go:705 [n2,s2,r10/?:{-}] applying preemptive snapshot at index 16 (id=146490fd, encoded size=3542, 1 rocksdb batches, 6 log entries) I170719 18:40:45.516354 51993 storage/replica_raftstorage.go:713 [n2,s2,r10/?:/{Table/15-Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=0ms commit=2ms] I170719 18:40:45.539440 50412 storage/replica_command.go:3606 [replicate,n1,s1,r10/1:/{Table/15-Max}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r10:/{Table/15-Max} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.594948 52021 storage/replica.go:2947 [n1,s1,r10/1:/{Table/15-Max}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719 18:40:45.626592 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r9/1:/Table/1{4-5}] generated preemptive snapshot 2eeed046 at index 24 I170719 18:40:45.675668 52046 storage/replica_raftstorage.go:705 [n2,s2,r9/?:{-}] applying preemptive snapshot at index 24 (id=2eeed046, encoded size=8858, 1 rocksdb batches, 14 log entries) I170719 18:40:45.677265 52046 storage/replica_raftstorage.go:713 [n2,s2,r9/?:/Table/1{4-5}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:45.678460 50412 storage/store.go:3479 [replicate,n1,s1,r9/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 14, rate-limit: 8.0 MiB/sec, 45ms I170719 18:40:45.682760 52056 storage/raft_transport.go:453 [n3] raft transport stream to node 2 established I170719 18:40:45.697634 50412 storage/replica_command.go:3606 [replicate,n1,s1,r9/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r9:/Table/1{4-5} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.781228 52125 storage/replica.go:2947 [n1,s1,r9/1:/Table/1{4-5}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 
(n3,s3):2 (n2,s2):3] I170719 18:40:45.810123 52065 storage/raft_transport.go:453 [n2] raft transport stream to node 3 established I170719 18:40:46.145881 50411 storage/replica_command.go:2673 [split,n1,s1,r10/1:/{Table/15-Max}] initiating a split of this range at key /Table/50 [r11] I170719 18:40:46.154443 52104 sql/event_log.go:101 [client=127.0.0.1:42049,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:data Statement:CREATE DATABASE IF NOT EXISTS data User:root} I170719 18:40:46.335190 52104 sql/event_log.go:101 [client=127.0.0.1:42049,user=root,n1] Event: "create_table", target: 51, info: {TableName:data.bank Statement:CREATE TABLE data.bank (id INT PRIMARY KEY, balance INT, payload STRING, FAMILY (id, balance, payload)) User:root} I170719 18:40:46.388780 50411 storage/replica_command.go:2673 [split,n1,s1,r11/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r12] I170719 18:40:46.721514 51036 storage/replica_proposal.go:449 [n2,s2,r11/3:/Table/5{0-1}] new range lease repl=(n2,s2):3 start=1500489646.656808267,0 epo=1 pro=1500489646.672018359,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:47.583193 52344 storage/replica_command.go:2673 [n1,s1,r12/1:/{Table/51-Max}] initiating a split of this range at key /Table/51/1/0 [r13] I170719 18:40:47.764403 52388 storage/replica_command.go:2673 [n1,s1,r13/1:/{Table/51/1/0-Max}] initiating a split of this range at key /Table/51/1/1 [r14] I170719 18:40:48.024040 52437 storage/replica_command.go:2673 [n1,s1,r14/1:/{Table/51/1/1-Max}] initiating a split of this range at key /Table/51/1/2 [r15] I170719 18:40:48.206567 52456 storage/replica_command.go:2673 [n1,s1,r15/1:/{Table/51/1/2-Max}] initiating a split of this range at key /Table/51/1/3 [r16] I170719 18:40:48.399966 52522 storage/replica_command.go:2673 [n1,s1,r16/1:/{Table/51/1/3-Max}] initiating a split of this range at key /Table/51/1/4 [r17] I170719 18:40:48.689980 52567 storage/replica_command.go:2673 [n1,s1,r17/1:/{Table/51/1/4-Max}] initiating a split of this range at key /Table/51/1/5 [r18] I170719 18:40:48.753295 51089 storage/replica_proposal.go:449 [n2,s2,r10/3:/Table/{15-50}] new range lease repl=(n2,s2):3 start=1500489648.736211591,0 epo=1 pro=1500489648.736220891,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:48.899607 52602 storage/replica_command.go:2673 [n1,s1,r18/1:/{Table/51/1/5-Max}] initiating a split of this range at key /Table/51/1/6 [r19] I170719 18:40:49.137039 52666 storage/replica_command.go:2673 [n1,s1,r19/1:/{Table/51/1/6-Max}] initiating a split of this range at key /Table/51/1/7 [r20] I170719 18:40:49.318726 52703 storage/replica_command.go:2673 [n1,s1,r20/1:/{Table/51/1/7-Max}] initiating a split of this range at key /Table/51/1/8 [r21] I170719 18:40:49.463074 50551 storage/replica_proposal.go:449 [n1,s1,r20/1:/{Table/51/1/7-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.456884527,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.512239 50565 storage/replica_proposal.go:449 [n1,s1,r20/1:/{Table/51/1/7-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.456884527,0 following repl=(n0,s0):? 
start=0.000000000,0 exp=0.000000000,0 I170719 18:40:49.539785 50540 storage/replica_proposal.go:449 [n1,s1,r8/1:/Table/1{3-4}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.530197735,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.642566 50554 storage/replica_proposal.go:449 [n1,s1,r5/1:/Table/{SystemCon…-11}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.631295676,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.710226 50528 storage/replica_proposal.go:449 [n1,s1,r7/1:/Table/1{2-3}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.700723509,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.761233 50411 storage/replica_command.go:2673 [split,n1,s1,r21/1:/{Table/51/1/8-Max}] initiating a split of this range at key /Table/52 [r22] I170719 18:40:49.771369 52104 sql/event_log.go:101 [client=127.0.0.1:42049,user=root,n1] Event: "create_database", target: 52, info: {DatabaseName:restoredb Statement:CREATE DATABASE restoredb User:root} I170719 18:40:50.014066 51368 storage/replica_proposal.go:449 [replica consistency checker,n3,s3,r17/2:/Table/51/1/{4-5}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489649.996621290,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.072598 50525 storage/replica_proposal.go:449 [n1,s1,r21/1:/{Table/51/1/8-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.456884527,0 following repl=(n0,s0):? start=0.000000000,0 exp=0.000000000,0 I170719 18:40:50.486173 50514 storage/replica_proposal.go:449 [replica consistency checker,n1,s1,r18/1:/Table/51/1/{5-6}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489650.480133073,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.540946 53139 ccl/storageccl/export.go:129 [n1,s1,r5/1:/Table/{SystemCon…-11}] export [/Table/4/1,/Table/4/2) I170719 18:40:50.546221 53124 ccl/storageccl/export.go:129 [n3,s3,r17/2:/Table/51/1/{4-5}] export [/Table/51/1/4,/Table/51/1/5) I170719 18:40:50.550387 53138 ccl/storageccl/export.go:129 [n1,s1,r5/1:/Table/{SystemCon…-11}] export [/Table/3/1,/Table/3/2) I170719 18:40:50.565638 53125 ccl/storageccl/export.go:129 [n1,s1,r21/1:/Table/5{1/1/8-2}] export [/Table/51/1/8,/Table/51/2) I170719 18:40:50.570950 53143 ccl/storageccl/export.go:129 [n1,s1,r20/1:/Table/51/1/{7-8}] export [/Table/51/1/7,/Table/51/1/8) I170719 18:40:50.578009 53031 ccl/storageccl/export.go:129 [n1,s1,r18/1:/Table/51/1/{5-6}] export [/Table/51/1/5,/Table/51/1/6) I170719 18:40:50.579386 51342 storage/replica_proposal.go:449 [n3,s3,r14/2:/Table/51/1/{1-2}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.545436627,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.584167 51405 storage/replica_proposal.go:449 [n3,s3,r15/2:/Table/51/1/{2-3}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.546828454,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.616689 51375 storage/replica_proposal.go:449 [n3,s3,r12/2:/Table/51{-/1/0}] new range lease repl=(n3,s3):2
start=1500489649.918328190,1 epo=1 pro=1500489650.556122132,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.618563 51396 storage/replica_proposal.go:449 [n3,s3,r19/2:/Table/51/1/{6-7}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.569434188,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.620491 51382 storage/replica_proposal.go:449 [n3,s3,r13/2:/Table/51/1/{0-1}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.561660439,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.624096 51380 storage/replica_proposal.go:449 [n3,s3,r16/2:/Table/51/1/{3-4}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.573265961,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:51.055861 53186 ccl/storageccl/export.go:129 [n3,s3,r12/2:/Table/51{-/1/0}] export [/Table/51/1,/Table/51/1/0) I170719 18:40:51.056260 53187 ccl/storageccl/export.go:129 [n3,s3,r13/2:/Table/51/1/{0-1}] export [/Table/51/1/0,/Table/51/1/1) I170719 18:40:51.056375 53190 ccl/storageccl/export.go:129 [n3,s3,r19/2:/Table/51/1/{6-7}] export [/Table/51/1/6,/Table/51/1/7) I170719 18:40:51.057016 53191 ccl/storageccl/export.go:129 [n3,s3,r16/2:/Table/51/1/{3-4}] export [/Table/51/1/3,/Table/51/1/4) I170719 18:40:51.058199 53028 ccl/storageccl/export.go:129 [n3,s3,r15/2:/Table/51/1/{2-3}] export [/Table/51/1/2,/Table/51/1/3) ================== Write at 0x00c420b79c90 by goroutine 1159: github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup.func3() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:463 +0x651 github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x68 Previous read at 0x00c420b79c90 by goroutine 1059: github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup.func3() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:487 +0xa10 github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x68 Goroutine 1159 (running) created at: github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:66 +0x73 github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:498 +0x16bb github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.backupPlanHook.func1() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:606 +0x72b github.com/cockroachdb/cockroach/pkg/sql.(*hookFnNode).Start() /go/src/github.com/cockroachdb/cockroach/pkg/sql/planhook.go:75 +0x65 github.com/cockroachdb/cockroach/pkg/sql.(*planner).startPlan() /go/src/github.com/cockroachdb/cockroach/pkg/sql/plan.go:209 +0xb5 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execClassic() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1572 +0x20a github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1715 +0x8b4 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtInOpenTxn() 
/go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1436 +0xc60 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtsInCurrentTxn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1059 +0xb8e github.com/cockroachdb/cockroach/pkg/sql.runTxnAttempt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:968 +0xbf github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed.func1() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:773 +0x2bd github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Exec() /go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:675 +0xe2 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:795 +0x6cc github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execPrepared() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:593 +0xbb github.com/cockroachdb/cockroach/pkg/sql.(*Executor).ExecutePreparedStatement() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:575 +0x233 github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).handleExecute() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:841 +0x32b github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).serve() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:463 +0xacd github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*Server).ServeConn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/server.go:421 +0xe7c github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func8.1() /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:687 +0x17e github.com/cockroachdb/cockroach/pkg/util/netutil.(*Server).ServeWith.func1() /go/src/github.com/cockroachdb/cockroach/pkg/util/netutil/net.go:142 +0xdc Goroutine 1059 (finished) created at: github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:66 +0x73 github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:498 +0x16bb github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.backupPlanHook.func1() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:606 +0x72b github.com/cockroachdb/cockroach/pkg/sql.(*hookFnNode).Start() /go/src/github.com/cockroachdb/cockroach/pkg/sql/planhook.go:75 +0x65 github.com/cockroachdb/cockroach/pkg/sql.(*planner).startPlan() /go/src/github.com/cockroachdb/cockroach/pkg/sql/plan.go:209 +0xb5 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execClassic() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1572 +0x20a github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1715 +0x8b4 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtInOpenTxn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1436 +0xc60 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtsInCurrentTxn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1059 +0xb8e github.com/cockroachdb/cockroach/pkg/sql.runTxnAttempt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:968 +0xbf github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed.func1() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:773 +0x2bd github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Exec() /go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:675 +0xe2 
github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:795 +0x6cc github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execPrepared() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:593 +0xbb github.com/cockroachdb/cockroach/pkg/sql.(*Executor).ExecutePreparedStatement() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:575 +0x233 github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).handleExecute() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:841 +0x32b github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).serve() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:463 +0xacd github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*Server).ServeConn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/server.go:421 +0xe7c github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func8.1() /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:687 +0x17e github.com/cockroachdb/cockroach/pkg/util/netutil.(*Server).ServeWith.func1() /go/src/github.com/cockroachdb/cockroach/pkg/util/netutil/net.go:142 +0xdc ================== I170719 18:38:12.036696 1 rand.go:76 Random seed: 4606219213896658469 ``` Please assign, take a look and update the issue accordingly.
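For anyone triaging this failure: the race report pins the conflict to two goroutines spawned via `errgroup.(*Group).Go` that both execute the `sqlccl.Backup.func3` closure, with goroutine 1159 writing the address (backup.go:463) that goroutine 1059 had previously read (backup.go:487) without synchronization. The sketch below reproduces that shape and one conventional fix; it is a minimal illustration assuming the shared state is a progress counter, and every name in it (`progressTracker`, `completed`) is hypothetical, not CockroachDB's actual backup code.

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

// progressTracker stands in for whatever per-backup state the exporter
// goroutines share. The type and field names are hypothetical.
type progressTracker struct {
	mu        sync.Mutex // guards completed; omitting it reproduces the race
	completed int
}

// add records one finished export under the lock, so a concurrent get
// never observes a torn or stale value.
func (p *progressTracker) add() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.completed++
}

// get reads the counter under the same lock.
func (p *progressTracker) get() int {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.completed
}

func main() {
	var g errgroup.Group
	var tracker progressTracker

	// Mirrors the errgroup.(*Group).Go calls in the stack traces above:
	// many goroutines run the same closure over shared state. With the
	// mutex removed, `go test -race` reports the same "Write at ... by
	// goroutine N / Previous read at ..." pair seen in this failure.
	for i := 0; i < 10; i++ {
		g.Go(func() error {
			tracker.add()
			_ = tracker.get() // concurrent read of the shared state
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("backup failed:", err)
	}
	fmt.Println("exports completed:", tracker.get())
}
```

When the shared state really is a single integer, a `sync/atomic` counter is an equally valid fix; the mutex version is shown because it generalizes to multi-field progress structs.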
1.0
teamcity: failed tests on master: testrace/TestBackupRestoreSystemJobsProgress - The following tests appear to have failed: [#300397](https://teamcity.cockroachdb.com/viewLog.html?buildId=300397): ``` --- FAIL: testrace/TestBackupRestoreSystemJobsProgress (0.000s) Race detected! ------- Stdout: ------- W170719 18:40:40.838101 50313 server/server.go:299 [n?] all stores are configured as in-memory stores, so not setting up a temporary store. Queries with working set larger than memory will fail W170719 18:40:40.839384 50313 server/status/runtime.go:111 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" I170719 18:40:40.868472 50313 server/config.go:534 [n?] 1 storage engine initialized I170719 18:40:40.868705 50313 server/config.go:536 [n?] RocksDB cache size: 512 MiB I170719 18:40:40.868755 50313 server/config.go:536 [n?] store 0: in-memory, size 100 MiB I170719 18:40:40.870446 50313 server/node.go:434 [n?] store [n0,s0] not bootstrapped I170719 18:40:40.897017 50313 server/node.go:369 [n?] **** cluster a185a4c7-e0ed-40fa-a7cd-3d3d21284208 has been created I170719 18:40:40.897146 50313 server/node.go:370 [n?] **** add additional nodes by specifying --join=127.0.0.1:60206 I170719 18:40:40.917239 50313 storage/store.go:1260 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available I170719 18:40:40.940628 50313 server/node.go:447 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:1 WritesPerSecond:47.95243003854344} I170719 18:40:40.940942 50313 server/node.go:331 [n1] node ID 1 initialized I170719 18:40:40.941190 50313 gossip/gossip.go:297 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:60206" > attrs:<> locality:<> I170719 18:40:40.942073 50313 storage/stores.go:295 [n1] read 0 node addresses from persistent storage I170719 18:40:40.942377 50313 server/node.go:588 [n1] connecting to gossip network to verify cluster ID... 
I170719 18:40:40.942524 50313 server/node.go:613 [n1] node connected via gossip and verified as part of cluster "a185a4c7-e0ed-40fa-a7cd-3d3d21284208" I170719 18:40:40.947400 50313 server/node.go:385 [n1] node=1: started with [=] engine(s) and attributes [] I170719 18:40:40.980329 50411 storage/replica_command.go:2673 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2] E170719 18:40:41.016572 50412 storage/queue.go:658 [replicate,n1,s1,r1/1:/{Min-System/}] range requires a replication change, but lacks a quorum of live replicas (0/1) I170719 18:40:41.020409 50411 storage/replica_command.go:2673 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/tsd [r3] I170719 18:40:41.022860 50313 sql/executor.go:364 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:60206} I170719 18:40:41.070753 50313 server/server.go:815 [n1] starting https server at 127.0.0.1:34285 I170719 18:40:41.070878 50313 server/server.go:816 [n1] starting grpc/postgres server at 127.0.0.1:60206 I170719 18:40:41.070923 50313 server/server.go:817 [n1] advertising CockroachDB node at 127.0.0.1:60206 E170719 18:40:41.102462 50412 storage/queue.go:658 [replicate,n1,s1,r2/1:/System/{-tsd}] range requires a replication change, but lacks a quorum of live replicas (0/1) I170719 18:40:41.121193 50411 storage/replica_command.go:2673 [split,n1,s1,r3/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r4] E170719 18:40:41.219859 50543 storage/replica_proposal.go:522 [n1,s1,r3/1:/{System/tsd-Max}] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan I170719 18:40:41.229615 50411 storage/replica_command.go:2673 [split,n1,s1,r4/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r5] I170719 18:40:41.290782 50313 sql/event_log.go:101 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]} I170719 18:40:41.349143 50411 storage/replica_command.go:2673 [split,n1,s1,r5/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r6] I170719 18:40:41.417176 50313 sql/lease.go:367 [n1] publish: descID=12 (eventlog) version=2 mtime=2017-07-19 18:40:41.417022165 +0000 UTC I170719 18:40:41.464897 50411 storage/replica_command.go:2673 [split,n1,s1,r6/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r7] I170719 18:40:41.570660 50411 storage/replica_command.go:2673 [split,n1,s1,r7/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r8] I170719 18:40:41.641366 50313 server/server.go:951 [n1] done ensuring all necessary migrations have run I170719 18:40:41.642584 50313 server/server.go:953 [n1] serving sql connections I170719 18:40:41.683176 50411 storage/replica_command.go:2673 [split,n1,s1,r8/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r9] I170719 18:40:41.705747 50828 sql/event_log.go:101 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:60206} Attrs: Locality:} ClusterID:a185a4c7-e0ed-40fa-a7cd-3d3d21284208 StartedAt:1500489640942566356 LastUp:1500489640942566356} I170719 18:40:41.818014 50411 storage/replica_command.go:2673 [split,n1,s1,r9/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r10] W170719 18:40:42.013245 50313 server/server.go:299 [n?]
all stores are configured as in-memory stores, so not setting up a temporary store. Queries with working set larger than memory will fail W170719 18:40:42.026415 50313 server/status/runtime.go:111 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" W170719 18:40:42.061493 50313 gossip/gossip.go:1196 [n?] no incoming or outgoing connections I170719 18:40:42.088867 50313 server/config.go:534 [n?] 1 storage engine initialized I170719 18:40:42.090230 50313 server/config.go:536 [n?] RocksDB cache size: 512 MiB I170719 18:40:42.091016 50313 server/config.go:536 [n?] store 0: in-memory, size 100 MiB I170719 18:40:42.093164 50313 server/node.go:434 [n?] store [n0,s0] not bootstrapped I170719 18:40:42.093284 50313 storage/stores.go:295 [n?] read 0 node addresses from persistent storage I170719 18:40:42.093533 50313 server/node.go:588 [n?] connecting to gossip network to verify cluster ID... I170719 18:40:42.167213 50913 gossip/client.go:131 [n?] started gossip client to 127.0.0.1:60206 I170719 18:40:42.169102 50974 gossip/server.go:234 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:38533} I170719 18:40:42.172362 51000 storage/stores.go:314 [n?] wrote 1 node addresses to persistent storage I170719 18:40:42.182534 50313 server/node.go:613 [n?] node connected via gossip and verified as part of cluster "a185a4c7-e0ed-40fa-a7cd-3d3d21284208" I170719 18:40:42.199955 50313 kv/dist_sender.go:370 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I170719 18:40:42.210559 50313 server/node.go:324 [n?] new node allocated ID 2 I170719 18:40:42.210920 50313 gossip/gossip.go:297 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38533" > attrs:<> locality:<> I170719 18:40:42.211876 50313 server/node.go:385 [n2] node=2: started with [=] engine(s) and attributes [] I170719 18:40:42.227580 51013 storage/stores.go:314 [n1] wrote 1 node addresses to persistent storage I170719 18:40:42.240254 50313 sql/executor.go:364 [n2] creating distSQLPlanner with address {tcp 127.0.0.1:38533} I170719 18:40:42.266098 50313 server/server.go:815 [n2] starting https server at 127.0.0.1:52863 I170719 18:40:42.266240 50313 server/server.go:816 [n2] starting grpc/postgres server at 127.0.0.1:38533 I170719 18:40:42.266396 50313 server/server.go:817 [n2] advertising CockroachDB node at 127.0.0.1:38533 I170719 18:40:42.282476 50313 server/server.go:951 [n2] done ensuring all necessary migrations have run I170719 18:40:42.282663 50313 server/server.go:953 [n2] serving sql connections I170719 18:40:42.291133 51008 server/node.go:569 [n2] bootstrapped store [n2,s2] I170719 18:40:42.410001 51160 sql/event_log.go:101 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:38533} Attrs: Locality:} ClusterID:a185a4c7-e0ed-40fa-a7cd-3d3d21284208 StartedAt:1500489642211586320 LastUp:1500489642211586320} W170719 18:40:42.430964 50313 server/server.go:299 [n?] all stores are configured as in-memory stores, so not setting up a temporary store. Queries with working set larger than memory will fail W170719 18:40:42.442067 50313 server/status/runtime.go:111 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006" W170719 18:40:42.453081 50313 gossip/gossip.go:1196 [n?] no incoming or outgoing connections I170719 18:40:42.530473 50313 server/config.go:534 [n?] 
1 storage engine initialized I170719 18:40:42.530584 50313 server/config.go:536 [n?] RocksDB cache size: 512 MiB I170719 18:40:42.530619 50313 server/config.go:536 [n?] store 0: in-memory, size 100 MiB I170719 18:40:42.532003 50313 server/node.go:434 [n?] store [n0,s0] not bootstrapped I170719 18:40:42.532096 50313 storage/stores.go:295 [n?] read 0 node addresses from persistent storage I170719 18:40:42.532213 50313 server/node.go:588 [n?] connecting to gossip network to verify cluster ID... I170719 18:40:42.724378 51175 gossip/client.go:131 [n?] started gossip client to 127.0.0.1:60206 I170719 18:40:42.726603 51252 gossip/server.go:234 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:58350} I170719 18:40:42.740877 51238 storage/stores.go:314 [n?] wrote 1 node addresses to persistent storage I170719 18:40:42.741205 50313 server/node.go:613 [n?] node connected via gossip and verified as part of cluster "a185a4c7-e0ed-40fa-a7cd-3d3d21284208" I170719 18:40:42.747986 51239 storage/stores.go:314 [n?] wrote 2 node addresses to persistent storage I170719 18:40:42.754604 50313 kv/dist_sender.go:370 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping I170719 18:40:42.760330 50313 server/node.go:324 [n?] new node allocated ID 3 I170719 18:40:42.760679 50313 gossip/gossip.go:297 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:58350" > attrs:<> locality:<> I170719 18:40:42.761468 50313 server/node.go:385 [n3] node=3: started with [=] engine(s) and attributes [] I170719 18:40:42.763950 50313 sql/executor.go:364 [n3] creating distSQLPlanner with address {tcp 127.0.0.1:58350} I170719 18:40:42.803862 51168 storage/stores.go:314 [n1] wrote 2 node addresses to persistent storage I170719 18:40:42.806366 51210 storage/stores.go:314 [n2] wrote 2 node addresses to persistent storage I170719 18:40:42.807662 50313 server/server.go:815 [n3] starting https server at 127.0.0.1:47309 I170719 18:40:42.807788 50313 server/server.go:816 [n3] starting grpc/postgres server at 127.0.0.1:58350 I170719 18:40:42.807836 50313 server/server.go:817 [n3] advertising CockroachDB node at 127.0.0.1:58350 I170719 18:40:42.814891 50313 server/server.go:951 [n3] done ensuring all necessary migrations have run I170719 18:40:42.815060 50313 server/server.go:953 [n3] serving sql connections I170719 18:40:42.898843 51211 sql/event_log.go:101 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:58350} Attrs: Locality:} ClusterID:a185a4c7-e0ed-40fa-a7cd-3d3d21284208 StartedAt:1500489642761147272 LastUp:1500489642761147272} I170719 18:40:42.915698 51249 server/node.go:569 [n3] bootstrapped store [n3,s3] I170719 18:40:42.924260 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r6/1:/Table/1{1-2}] generated preemptive snapshot 81e55aca at index 16 I170719 18:40:43.053543 50669 storage/store.go:3479 [replicate,n1,s1,r6/1:/Table/1{1-2}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 6, rate-limit: 8.0 MiB/sec, 5ms I170719 18:40:43.054856 51431 storage/replica_raftstorage.go:705 [n2,s2,r6/?:{-}] applying preemptive snapshot at index 16 (id=81e55aca, encoded size=5443, 1 rocksdb batches, 6 log entries) I170719 18:40:43.056503 51431 storage/replica_raftstorage.go:713 [n2,s2,r6/?:/Table/1{1-2}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:43.063141 50669 storage/replica_command.go:3606 
[replicate,n1,s1,r6/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r6:/Table/1{1-2} [(n1,s1):1, next=2] I170719 18:40:43.090168 51197 storage/replica.go:2947 [n1,s1,r6/1:/Table/1{1-2}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:43.103760 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r8/1:/Table/1{3-4}] generated preemptive snapshot a29c2bdc at index 26 I170719 18:40:43.141603 51434 storage/raft_transport.go:453 [n2] raft transport stream to node 1 established I170719 18:40:43.268943 50669 storage/store.go:3479 [replicate,n1,s1,r8/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 69, log entries: 16, rate-limit: 8.0 MiB/sec, 9ms I170719 18:40:43.271393 51272 storage/replica_raftstorage.go:705 [n3,s3,r8/?:{-}] applying preemptive snapshot at index 26 (id=a29c2bdc, encoded size=21465, 1 rocksdb batches, 16 log entries) I170719 18:40:43.276257 51272 storage/replica_raftstorage.go:713 [n3,s3,r8/?:/Table/1{3-4}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:43.292527 50669 storage/replica_command.go:3606 [replicate,n1,s1,r8/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{3-4} [(n1,s1):1, next=2] I170719 18:40:43.390219 51514 storage/replica.go:2947 [n1,s1,r8/1:/Table/1{3-4}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:43.401263 51517 storage/raft_transport.go:453 [n3] raft transport stream to node 1 established I170719 18:40:43.403979 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 30b10ab4 at index 46 I170719 18:40:43.419086 50669 storage/store.go:3479 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 34, log entries: 36, rate-limit: 8.0 MiB/sec, 13ms I170719 18:40:43.449252 51275 storage/replica_raftstorage.go:705 [n3,s3,r1/?:{-}] applying preemptive snapshot at index 46 (id=30b10ab4, encoded size=21450, 1 rocksdb batches, 36 log entries) I170719 18:40:43.453086 51275 storage/replica_raftstorage.go:713 [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms] I170719 18:40:43.462641 50669 storage/replica_command.go:3606 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2] I170719 18:40:43.502298 51540 storage/replica.go:2947 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:43.506769 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r3/1:/System/ts{d-e}] generated preemptive snapshot 91832340 at index 24 I170719 18:40:43.550884 50669 storage/store.go:3479 [replicate,n1,s1,r3/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 911, log entries: 3, rate-limit: 8.0 MiB/sec, 43ms I170719 18:40:43.552716 51542 storage/replica_raftstorage.go:705 [n2,s2,r3/?:{-}] applying preemptive snapshot at index 24 (id=91832340, encoded size=150432, 1 rocksdb batches, 3 log entries) I170719 18:40:43.555772 51542 storage/replica_raftstorage.go:713 [n2,s2,r3/?:/System/ts{d-e}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:43.567043 50669 storage/replica_command.go:3606 [replicate,n1,s1,r3/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r3:/System/ts{d-e} [(n1,s1):1, next=2] I170719 18:40:43.624562 51537 storage/replica.go:2947 
[n1,s1,r3/1:/System/ts{d-e}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:43.628535 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] generated preemptive snapshot 3675bf7f at index 25 I170719 18:40:43.638838 50669 storage/store.go:3479 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 15, rate-limit: 8.0 MiB/sec, 9ms I170719 18:40:43.639680 51549 storage/replica_raftstorage.go:705 [n3,s3,r4/?:{-}] applying preemptive snapshot at index 25 (id=3675bf7f, encoded size=11686, 1 rocksdb batches, 15 log entries) I170719 18:40:43.641407 51549 storage/replica_raftstorage.go:713 [n3,s3,r4/?:/{System/tse-Table/System…}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:43.646391 50669 storage/replica_command.go:3606 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r4:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2] I170719 18:40:43.756963 51621 storage/replica.go:2947 [n1,s1,r4/1:/{System/tse-Table/System…}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:43.764345 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] generated preemptive snapshot 6eeaa04a at index 27 I170719 18:40:43.787622 50669 storage/store.go:3479 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 40, log entries: 17, rate-limit: 8.0 MiB/sec, 21ms I170719 18:40:43.789740 51551 storage/replica_raftstorage.go:705 [n2,s2,r5/?:{-}] applying preemptive snapshot at index 27 (id=6eeaa04a, encoded size=19426, 1 rocksdb batches, 17 log entries) I170719 18:40:43.791715 51551 storage/replica_raftstorage.go:713 [n2,s2,r5/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:43.797140 50669 storage/replica_command.go:3606 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r5:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2] I170719 18:40:43.866899 51595 storage/replica.go:2947 [n1,s1,r5/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:43.888076 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r9/1:/Table/1{4-5}] generated preemptive snapshot 938f0c8c at index 19 I170719 18:40:43.891766 50669 storage/store.go:3479 [replicate,n1,s1,r9/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 3ms I170719 18:40:43.893571 51666 storage/replica_raftstorage.go:705 [n3,s3,r9/?:{-}] applying preemptive snapshot at index 19 (id=938f0c8c, encoded size=5870, 1 rocksdb batches, 9 log entries) I170719 18:40:43.895901 51666 storage/replica_raftstorage.go:713 [n3,s3,r9/?:/Table/1{4-5}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=1ms] I170719 18:40:43.899156 50669 storage/replica_command.go:3606 [replicate,n1,s1,r9/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r9:/Table/1{4-5} [(n1,s1):1, next=2] I170719 18:40:44.010797 51701 storage/replica.go:2947 [n1,s1,r9/1:/Table/1{4-5}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:44.017768 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r7/1:/Table/1{2-3}] generated preemptive snapshot 69186943 at index 25 I170719 18:40:44.093159
50669 storage/store.go:3479 [replicate,n1,s1,r7/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 31, log entries: 15, rate-limit: 8.0 MiB/sec, 48ms I170719 18:40:44.097616 51706 storage/replica_raftstorage.go:705 [n2,s2,r7/?:{-}] applying preemptive snapshot at index 25 (id=69186943, encoded size=16662, 1 rocksdb batches, 15 log entries) I170719 18:40:44.099452 51706 storage/replica_raftstorage.go:713 [n2,s2,r7/?:/Table/1{2-3}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:44.105901 50669 storage/replica_command.go:3606 [replicate,n1,s1,r7/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/1{2-3} [(n1,s1):1, next=2] I170719 18:40:44.160416 51612 storage/replica.go:2947 [n1,s1,r7/1:/Table/1{2-3}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:44.175010 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r10/1:/{Table/15-Max}] generated preemptive snapshot 5ea961de at index 11 I170719 18:40:44.178578 50669 storage/store.go:3479 [replicate,n1,s1,r10/1:/{Table/15-Max}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 1, rate-limit: 8.0 MiB/sec, 3ms I170719 18:40:44.180008 51600 storage/replica_raftstorage.go:705 [n3,s3,r10/?:{-}] applying preemptive snapshot at index 11 (id=5ea961de, encoded size=548, 1 rocksdb batches, 1 log entries) I170719 18:40:44.181387 51600 storage/replica_raftstorage.go:713 [n3,s3,r10/?:/{Table/15-Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:44.192858 50669 storage/replica_command.go:3606 [replicate,n1,s1,r10/1:/{Table/15-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/{Table/15-Max} [(n1,s1):1, next=2] I170719 18:40:44.272391 51763 storage/replica.go:2947 [n1,s1,r10/1:/{Table/15-Max}] proposing ADD_REPLICA (n3,s3):2: [(n1,s1):1 (n3,s3):2] I170719 18:40:44.276880 50669 storage/replica_raftstorage.go:496 [replicate,n1,s1,r2/1:/System/{-tsd}] generated preemptive snapshot 10b563e8 at index 40 I170719 18:40:44.292607 50669 storage/store.go:3479 [replicate,n1,s1,r2/1:/System/{-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 32, log entries: 3, rate-limit: 8.0 MiB/sec, 12ms I170719 18:40:44.294495 51665 storage/replica_raftstorage.go:705 [n2,s2,r2/?:{-}] applying preemptive snapshot at index 40 (id=10b563e8, encoded size=74635, 1 rocksdb batches, 3 log entries) I170719 18:40:44.296820 51665 storage/replica_raftstorage.go:713 [n2,s2,r2/?:/System/{-tsd}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=1ms] I170719 18:40:44.300526 50669 storage/replica_command.go:3606 [replicate,n1,s1,r2/1:/System/{-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-tsd} [(n1,s1):1, next=2] I170719 18:40:44.368251 51737 storage/replica.go:2947 [n1,s1,r2/1:/System/{-tsd}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2] I170719 18:40:44.377803 50669 storage/queue.go:725 [n1,replicate] purgatory is now empty I170719 18:40:44.382176 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r6/1:/Table/1{1-2}] generated preemptive snapshot 4bea8438 at index 21 I170719 18:40:44.392734 50412 storage/store.go:3479 [replicate,n1,s1,r6/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 11, rate-limit: 8.0 MiB/sec, 8ms I170719 18:40:44.401913 51768 storage/replica_raftstorage.go:705 [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=4bea8438, encoded size=8431, 1 rocksdb batches, 11 
log entries) I170719 18:40:44.405612 51768 storage/replica_raftstorage.go:713 [n3,s3,r6/?:/Table/1{1-2}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:44.426634 50412 storage/replica_command.go:3606 [replicate,n1,s1,r6/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r6:/Table/1{1-2} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:44.522097 51756 storage/replica.go:2947 [n1,s1,r6/1:/Table/1{1-2}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:44.538790 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r3/1:/System/ts{d-e}] generated preemptive snapshot dedc3551 at index 27 I170719 18:40:44.560014 50412 storage/store.go:3479 [replicate,n1,s1,r3/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 912, log entries: 6, rate-limit: 8.0 MiB/sec, 20ms I170719 18:40:44.583255 51678 storage/replica_raftstorage.go:705 [n3,s3,r3/?:{-}] applying preemptive snapshot at index 27 (id=dedc3551, encoded size=152532, 1 rocksdb batches, 6 log entries) I170719 18:40:44.602153 51678 storage/replica_raftstorage.go:713 [n3,s3,r3/?:/System/ts{d-e}] applied preemptive snapshot in 19ms [clear=0ms batch=0ms entries=16ms commit=1ms] I170719 18:40:44.610959 50412 storage/replica_command.go:3606 [replicate,n1,s1,r3/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r3:/System/ts{d-e} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:44.683592 51831 storage/replica.go:2947 [n1,s1,r3/1:/System/ts{d-e}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:44.692645 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r8/1:/Table/1{3-4}] generated preemptive snapshot 1e9608b1 at index 51 I170719 18:40:44.699159 50412 storage/store.go:3479 [replicate,n1,s1,r8/1:/Table/1{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 131, log entries: 41, rate-limit: 8.0 MiB/sec, 6ms I170719 18:40:44.702296 51772 storage/replica_raftstorage.go:705 [n2,s2,r8/?:{-}] applying preemptive snapshot at index 51 (id=1e9608b1, encoded size=51779, 1 rocksdb batches, 41 log entries) I170719 18:40:44.704948 51772 storage/replica_raftstorage.go:713 [n2,s2,r8/?:/Table/1{3-4}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms] I170719 18:40:44.713856 50412 storage/replica_command.go:3606 [replicate,n1,s1,r8/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r8:/Table/1{3-4} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:44.775630 51803 storage/replica.go:2947 [n1,s1,r8/1:/Table/1{3-4}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719 18:40:44.808240 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r7/1:/Table/1{2-3}] generated preemptive snapshot 5b1246fb at index 28 I170719 18:40:44.814480 50412 storage/store.go:3479 [replicate,n1,s1,r7/1:/Table/1{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 32, log entries: 18, rate-limit: 8.0 MiB/sec, 5ms I170719 18:40:44.816750 51878 storage/replica_raftstorage.go:705 [n3,s3,r7/?:{-}] applying preemptive snapshot at index 28 (id=5b1246fb, encoded size=18693, 1 rocksdb batches, 18 log entries) I170719 18:40:44.820136 51878 storage/replica_raftstorage.go:713 [n3,s3,r7/?:/Table/1{2-3}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms] I170719 18:40:44.831633 50412 storage/replica_command.go:3606 [replicate,n1,s1,r7/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/1{2-3} 
[(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:44.909251 51909 storage/replica.go:2947 [n1,s1,r7/1:/Table/1{2-3}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:44.920047 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] generated preemptive snapshot b72fcd3d at index 30 I170719 18:40:44.933548 50412 storage/store.go:3479 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 41, log entries: 20, rate-limit: 8.0 MiB/sec, 11ms I170719 18:40:44.935452 51924 storage/replica_raftstorage.go:705 [n3,s3,r5/?:{-}] applying preemptive snapshot at index 30 (id=b72fcd3d, encoded size=21457, 1 rocksdb batches, 20 log entries) I170719 18:40:44.941615 51924 storage/replica_raftstorage.go:713 [n3,s3,r5/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:44.955497 50412 storage/replica_command.go:3606 [replicate,n1,s1,r5/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r5:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:45.029256 51918 storage/replica.go:2947 [n1,s1,r5/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:45.040591 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r2/1:/System/{-tsd}] generated preemptive snapshot a3676c17 at index 43 I170719 18:40:45.060519 50412 storage/store.go:3479 [replicate,n1,s1,r2/1:/System/{-tsd}] streamed snapshot to (n3,s3):?: kv pairs: 33, log entries: 6, rate-limit: 8.0 MiB/sec, 15ms I170719 18:40:45.083606 51863 storage/replica_raftstorage.go:705 [n3,s3,r2/?:{-}] applying preemptive snapshot at index 43 (id=a3676c17, encoded size=76678, 1 rocksdb batches, 6 log entries) I170719 18:40:45.144547 51863 storage/replica_raftstorage.go:713 [n3,s3,r2/?:/System/{-tsd}] applied preemptive snapshot in 61ms [clear=0ms batch=29ms entries=27ms commit=3ms] I170719 18:40:45.156995 50412 storage/replica_command.go:3606 [replicate,n1,s1,r2/1:/System/{-tsd}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:/System/{-tsd} [(n1,s1):1, (n2,s2):2, next=3] I170719 18:40:45.228832 51884 storage/replica.go:2947 [n1,s1,r2/1:/System/{-tsd}] proposing ADD_REPLICA (n3,s3):3: [(n1,s1):1 (n2,s2):2 (n3,s3):3] I170719 18:40:45.240561 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] generated preemptive snapshot b2348277 at index 28 I170719 18:40:45.255761 50412 storage/store.go:3479 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 13, log entries: 18, rate-limit: 8.0 MiB/sec, 11ms I170719 18:40:45.257762 51988 storage/replica_raftstorage.go:705 [n2,s2,r4/?:{-}] applying preemptive snapshot at index 28 (id=b2348277, encoded size=13774, 1 rocksdb batches, 18 log entries) I170719 18:40:45.260475 51988 storage/replica_raftstorage.go:713 [n2,s2,r4/?:/{System/tse-Table/System…}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=1ms] I170719 18:40:45.266517 50412 storage/replica_command.go:3606 [replicate,n1,s1,r4/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r4:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.336241 51959 storage/replica.go:2947 [n1,s1,r4/1:/{System/tse-Table/System…}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719
18:40:45.357539 51386 storage/replica_proposal.go:449 [n3,s3,r6/3:/Table/1{1-2}] new range lease repl=(n3,s3):3 start=1500489645.346104402,0 epo=1 pro=1500489645.346113302,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:45.363770 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot ecbcb4c9 at index 78 I170719 18:40:45.369321 50412 storage/store.go:3479 [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 68, rate-limit: 8.0 MiB/sec, 5ms I170719 18:40:45.372201 52006 storage/replica_raftstorage.go:705 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 78 (id=ecbcb4c9, encoded size=37979, 1 rocksdb batches, 68 log entries) I170719 18:40:45.377124 52006 storage/replica_raftstorage.go:713 [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms] I170719 18:40:45.399349 50412 storage/replica_command.go:3606 [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.481218 51871 storage/replica.go:2947 [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719 18:40:45.507428 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r10/1:/{Table/15-Max}] generated preemptive snapshot 146490fd at index 16 I170719 18:40:45.511341 50412 storage/store.go:3479 [replicate,n1,s1,r10/1:/{Table/15-Max}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms I170719 18:40:45.513151 51993 storage/replica_raftstorage.go:705 [n2,s2,r10/?:{-}] applying preemptive snapshot at index 16 (id=146490fd, encoded size=3542, 1 rocksdb batches, 6 log entries) I170719 18:40:45.516354 51993 storage/replica_raftstorage.go:713 [n2,s2,r10/?:/{Table/15-Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=0ms commit=2ms] I170719 18:40:45.539440 50412 storage/replica_command.go:3606 [replicate,n1,s1,r10/1:/{Table/15-Max}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r10:/{Table/15-Max} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.594948 52021 storage/replica.go:2947 [n1,s1,r10/1:/{Table/15-Max}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 (n3,s3):2 (n2,s2):3] I170719 18:40:45.626592 50412 storage/replica_raftstorage.go:496 [replicate,n1,s1,r9/1:/Table/1{4-5}] generated preemptive snapshot 2eeed046 at index 24 I170719 18:40:45.675668 52046 storage/replica_raftstorage.go:705 [n2,s2,r9/?:{-}] applying preemptive snapshot at index 24 (id=2eeed046, encoded size=8858, 1 rocksdb batches, 14 log entries) I170719 18:40:45.677265 52046 storage/replica_raftstorage.go:713 [n2,s2,r9/?:/Table/1{4-5}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms] I170719 18:40:45.678460 50412 storage/store.go:3479 [replicate,n1,s1,r9/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 14, rate-limit: 8.0 MiB/sec, 45ms I170719 18:40:45.682760 52056 storage/raft_transport.go:453 [n3] raft transport stream to node 2 established I170719 18:40:45.697634 50412 storage/replica_command.go:3606 [replicate,n1,s1,r9/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r9:/Table/1{4-5} [(n1,s1):1, (n3,s3):2, next=3] I170719 18:40:45.781228 52125 storage/replica.go:2947 [n1,s1,r9/1:/Table/1{4-5}] proposing ADD_REPLICA (n2,s2):3: [(n1,s1):1 
(n3,s3):2 (n2,s2):3] I170719 18:40:45.810123 52065 storage/raft_transport.go:453 [n2] raft transport stream to node 3 established I170719 18:40:46.145881 50411 storage/replica_command.go:2673 [split,n1,s1,r10/1:/{Table/15-Max}] initiating a split of this range at key /Table/50 [r11] I170719 18:40:46.154443 52104 sql/event_log.go:101 [client=127.0.0.1:42049,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:data Statement:CREATE DATABASE IF NOT EXISTS data User:root} I170719 18:40:46.335190 52104 sql/event_log.go:101 [client=127.0.0.1:42049,user=root,n1] Event: "create_table", target: 51, info: {TableName:data.bank Statement:CREATE TABLE data.bank (id INT PRIMARY KEY, balance INT, payload STRING, FAMILY (id, balance, payload)) User:root} I170719 18:40:46.388780 50411 storage/replica_command.go:2673 [split,n1,s1,r11/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r12] I170719 18:40:46.721514 51036 storage/replica_proposal.go:449 [n2,s2,r11/3:/Table/5{0-1}] new range lease repl=(n2,s2):3 start=1500489646.656808267,0 epo=1 pro=1500489646.672018359,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:47.583193 52344 storage/replica_command.go:2673 [n1,s1,r12/1:/{Table/51-Max}] initiating a split of this range at key /Table/51/1/0 [r13] I170719 18:40:47.764403 52388 storage/replica_command.go:2673 [n1,s1,r13/1:/{Table/51/1/0-Max}] initiating a split of this range at key /Table/51/1/1 [r14] I170719 18:40:48.024040 52437 storage/replica_command.go:2673 [n1,s1,r14/1:/{Table/51/1/1-Max}] initiating a split of this range at key /Table/51/1/2 [r15] I170719 18:40:48.206567 52456 storage/replica_command.go:2673 [n1,s1,r15/1:/{Table/51/1/2-Max}] initiating a split of this range at key /Table/51/1/3 [r16] I170719 18:40:48.399966 52522 storage/replica_command.go:2673 [n1,s1,r16/1:/{Table/51/1/3-Max}] initiating a split of this range at key /Table/51/1/4 [r17] I170719 18:40:48.689980 52567 storage/replica_command.go:2673 [n1,s1,r17/1:/{Table/51/1/4-Max}] initiating a split of this range at key /Table/51/1/5 [r18] I170719 18:40:48.753295 51089 storage/replica_proposal.go:449 [n2,s2,r10/3:/Table/{15-50}] new range lease repl=(n2,s2):3 start=1500489648.736211591,0 epo=1 pro=1500489648.736220891,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:48.899607 52602 storage/replica_command.go:2673 [n1,s1,r18/1:/{Table/51/1/5-Max}] initiating a split of this range at key /Table/51/1/6 [r19] I170719 18:40:49.137039 52666 storage/replica_command.go:2673 [n1,s1,r19/1:/{Table/51/1/6-Max}] initiating a split of this range at key /Table/51/1/7 [r20] I170719 18:40:49.318726 52703 storage/replica_command.go:2673 [n1,s1,r20/1:/{Table/51/1/7-Max}] initiating a split of this range at key /Table/51/1/8 [r21] I170719 18:40:49.463074 50551 storage/replica_proposal.go:449 [n1,s1,r20/1:/{Table/51/1/7-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.456884527,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.512239 50565 storage/replica_proposal.go:449 [n1,s1,r20/1:/{Table/51/1/7-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.456884527,0 following repl=(n0,s0):? 
start=0.000000000,0 exp=0.000000000,0 I170719 18:40:49.539785 50540 storage/replica_proposal.go:449 [n1,s1,r8/1:/Table/1{3-4}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.530197735,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.642566 50554 storage/replica_proposal.go:449 [n1,s1,r5/1:/Table/{SystemCon…-11}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.631295676,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.710226 50528 storage/replica_proposal.go:449 [n1,s1,r7/1:/Table/1{2-3}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.700723509,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:49.761233 50411 storage/replica_command.go:2673 [split,n1,s1,r21/1:/{Table/51/1/8-Max}] initiating a split of this range at key /Table/52 [r22] I170719 18:40:49.771369 52104 sql/event_log.go:101 [client=127.0.0.1:42049,user=root,n1] Event: "create_database", target: 52, info: {DatabaseName:restoredb Statement:CREATE DATABASE restoredb User:root} I170719 18:40:50.014066 51368 storage/replica_proposal.go:449 [replica consistency checker,n3,s3,r17/2:/Table/51/1/{4-5}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489649.996621290,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.072598 50525 storage/replica_proposal.go:449 [n1,s1,r21/1:/{Table/51/1/8-Max}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489649.456884527,0 following repl=(n0,s0):? start=0.000000000,0 exp=0.000000000,0 I170719 18:40:50.486173 50514 storage/replica_proposal.go:449 [replica consistency checker,n1,s1,r18/1:/Table/51/1/{5-6}] new range lease repl=(n1,s1):1 start=0.000000000,0 epo=1 pro=1500489650.480133073,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.540946 53139 ccl/storageccl/export.go:129 [n1,s1,r5/1:/Table/{SystemCon…-11}] export [/Table/4/1,/Table/4/2) I170719 18:40:50.546221 53124 ccl/storageccl/export.go:129 [n3,s3,r17/2:/Table/51/1/{4-5}] export [/Table/51/1/4,/Table/51/1/5) I170719 18:40:50.550387 53138 ccl/storageccl/export.go:129 [n1,s1,r5/1:/Table/{SystemCon…-11}] export [/Table/3/1,/Table/3/2) I170719 18:40:50.565638 53125 ccl/storageccl/export.go:129 [n1,s1,r21/1:/Table/5{1/1/8-2}] export [/Table/51/1/8,/Table/51/2) I170719 18:40:50.570950 53143 ccl/storageccl/export.go:129 [n1,s1,r20/1:/Table/51/1/{7-8}] export [/Table/51/1/7,/Table/51/1/8) I170719 18:40:50.578009 53031 ccl/storageccl/export.go:129 [n1,s1,r18/1:/Table/51/1/{5-6}] export [/Table/51/1/5,/Table/51/1/6) I170719 18:40:50.579386 51342 storage/replica_proposal.go:449 [n3,s3,r14/2:/Table/51/1/{1-2}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.545436627,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.584167 51405 storage/replica_proposal.go:449 [n3,s3,r15/2:/Table/51/1/{2-3}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.546828454,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.616689 51375 storage/replica_proposal.go:449 [n3,s3,r12/2:/Table/51{-/1/0}] new range lease repl=(n3,s3):2
start=1500489649.918328190,1 epo=1 pro=1500489650.556122132,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.618563 51396 storage/replica_proposal.go:449 [n3,s3,r19/2:/Table/51/1/{6-7}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.569434188,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.620491 51382 storage/replica_proposal.go:449 [n3,s3,r13/2:/Table/51/1/{0-1}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.561660439,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:50.624096 51380 storage/replica_proposal.go:449 [n3,s3,r16/2:/Table/51/1/{3-4}] new range lease repl=(n3,s3):2 start=1500489649.918328190,1 epo=1 pro=1500489650.573265961,0 following repl=(n1,s1):1 start=0.000000000,0 exp=1500489649.918328190,0 pro=1500489640.918384591,0 I170719 18:40:51.055861 53186 ccl/storageccl/export.go:129 [n3,s3,r12/2:/Table/51{-/1/0}] export [/Table/51/1,/Table/51/1/0) I170719 18:40:51.056260 53187 ccl/storageccl/export.go:129 [n3,s3,r13/2:/Table/51/1/{0-1}] export [/Table/51/1/0,/Table/51/1/1) I170719 18:40:51.056375 53190 ccl/storageccl/export.go:129 [n3,s3,r19/2:/Table/51/1/{6-7}] export [/Table/51/1/6,/Table/51/1/7) I170719 18:40:51.057016 53191 ccl/storageccl/export.go:129 [n3,s3,r16/2:/Table/51/1/{3-4}] export [/Table/51/1/3,/Table/51/1/4) I170719 18:40:51.058199 53028 ccl/storageccl/export.go:129 [n3,s3,r15/2:/Table/51/1/{2-3}] export [/Table/51/1/2,/Table/51/1/3) ================== Write at 0x00c420b79c90 by goroutine 1159: github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup.func3() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:463 +0x651 github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x68 Previous read at 0x00c420b79c90 by goroutine 1059: github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup.func3() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:487 +0xa10 github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x68 Goroutine 1159 (running) created at: github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:66 +0x73 github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:498 +0x16bb github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.backupPlanHook.func1() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:606 +0x72b github.com/cockroachdb/cockroach/pkg/sql.(*hookFnNode).Start() /go/src/github.com/cockroachdb/cockroach/pkg/sql/planhook.go:75 +0x65 github.com/cockroachdb/cockroach/pkg/sql.(*planner).startPlan() /go/src/github.com/cockroachdb/cockroach/pkg/sql/plan.go:209 +0xb5 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execClassic() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1572 +0x20a github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1715 +0x8b4 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtInOpenTxn() 
/go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1436 +0xc60 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtsInCurrentTxn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1059 +0xb8e github.com/cockroachdb/cockroach/pkg/sql.runTxnAttempt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:968 +0xbf github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed.func1() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:773 +0x2bd github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Exec() /go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:675 +0xe2 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:795 +0x6cc github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execPrepared() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:593 +0xbb github.com/cockroachdb/cockroach/pkg/sql.(*Executor).ExecutePreparedStatement() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:575 +0x233 github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).handleExecute() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:841 +0x32b github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).serve() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:463 +0xacd github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*Server).ServeConn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/server.go:421 +0xe7c github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func8.1() /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:687 +0x17e github.com/cockroachdb/cockroach/pkg/util/netutil.(*Server).ServeWith.func1() /go/src/github.com/cockroachdb/cockroach/pkg/util/netutil/net.go:142 +0xdc Goroutine 1059 (finished) created at: github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go() /go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:66 +0x73 github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.Backup() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:498 +0x16bb github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.backupPlanHook.func1() /go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/backup.go:606 +0x72b github.com/cockroachdb/cockroach/pkg/sql.(*hookFnNode).Start() /go/src/github.com/cockroachdb/cockroach/pkg/sql/planhook.go:75 +0x65 github.com/cockroachdb/cockroach/pkg/sql.(*planner).startPlan() /go/src/github.com/cockroachdb/cockroach/pkg/sql/plan.go:209 +0xb5 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execClassic() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1572 +0x20a github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1715 +0x8b4 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtInOpenTxn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1436 +0xc60 github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execStmtsInCurrentTxn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:1059 +0xb8e github.com/cockroachdb/cockroach/pkg/sql.runTxnAttempt() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:968 +0xbf github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed.func1() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:773 +0x2bd github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Exec() /go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:675 +0xe2 
github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execParsed() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:795 +0x6cc github.com/cockroachdb/cockroach/pkg/sql.(*Executor).execPrepared() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:593 +0xbb github.com/cockroachdb/cockroach/pkg/sql.(*Executor).ExecutePreparedStatement() /go/src/github.com/cockroachdb/cockroach/pkg/sql/executor.go:575 +0x233 github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).handleExecute() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:841 +0x32b github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*v3Conn).serve() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/v3.go:463 +0xacd github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*Server).ServeConn() /go/src/github.com/cockroachdb/cockroach/pkg/sql/pgwire/server.go:421 +0xe7c github.com/cockroachdb/cockroach/pkg/server.(*Server).Start.func8.1() /go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:687 +0x17e github.com/cockroachdb/cockroach/pkg/util/netutil.(*Server).ServeWith.func1() /go/src/github.com/cockroachdb/cockroach/pkg/util/netutil/net.go:142 +0xdc ================== I170719 18:38:12.036696 1 rand.go:76 Random seed: 4606219213896658469 ``` Please assign, take a look and update the issue accordingly.
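For readers skimming the race report above: both goroutines come from the same errgroup closure, `sqlccl.Backup.func3`, spawned in a loop at backup.go:498, with the write at backup.go:463 and the earlier read at backup.go:487. In other words, a closure-captured variable is shared by concurrent export workers without synchronization. Below is a minimal, self-contained Go sketch of that failure shape and the usual mutex fix; the `completed` counter is a hypothetical stand-in, not the actual field in backup.go.

```go
// Minimal reproduction of the reported race shape: several errgroup
// goroutines share one captured variable; one goroutine writes it while
// another reads it. The counter name is hypothetical.
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

func main() {
	var g errgroup.Group
	var mu sync.Mutex // without this lock, the race detector flags the counter
	completed := 0

	for i := 0; i < 10; i++ {
		g.Go(func() error {
			mu.Lock()
			completed++       // the "Write at ..." side of the report
			done := completed // the "Previous read at ..." side of the report
			mu.Unlock()
			fmt.Printf("exports finished: %d\n", done)
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("backup failed:", err)
	}
}
```

Running the unlocked variant under `go run -race` produces the same "Write at ... / Previous read at ..." pair shown above; guarding the shared state with a mutex (or sync/atomic) clears it.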
test
1
256,756
22,096,468,707
IssuesEvent
2022-06-01 10:32:05
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
[CI] GeoGridAggAndQueryConsistencyIT testGeoShapeGeoTile failing
:Analytics/Geo >test-failure Team:Analytics
**Build scan:** https://gradle-enterprise.elastic.co/s/y7ztsfdfbtvge/tests/:x-pack:plugin:spatial:internalClusterTest/org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT/testGeoShapeGeoTile **Reproduction line:** `./gradlew ':x-pack:plugin:spatial:internalClusterTest' --tests "org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.testGeoShapeGeoTile" -Dtests.seed=7F1C1C35D817A418 -Dtests.locale=es-US -Dtests.timezone=Africa/Dar_es_Salaam -Druntime.java=17` **Applicable branches:** master, 8.3 **Reproduces locally?:** Yes **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT&tests.test=testGeoShapeGeoTile **Failure excerpt:** ``` java.lang.AssertionError: Expected: <170L> but: was <169L> at __randomizedtesting.SeedInfo.seed([7F1C1C35D817A418:A9FF02F3D52E7F38]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.junit.Assert.assertThat(Assert.java:923) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.assertQuery(GeoGridAggAndQueryConsistencyIT.java:220) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.doTestGrid(GeoGridAggAndQueryConsistencyIT.java:211) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.doTestGeotileGrid(GeoGridAggAndQueryConsistencyIT.java:106) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.testGeoShapeGeoTile(GeoGridAggAndQueryConsistencyIT.java:88) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:568) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.lang.Thread.run(Thread.java:833) ```
1.0
[CI] GeoGridAggAndQueryConsistencyIT testGeoShapeGeoTile failing - **Build scan:** https://gradle-enterprise.elastic.co/s/y7ztsfdfbtvge/tests/:x-pack:plugin:spatial:internalClusterTest/org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT/testGeoShapeGeoTile **Reproduction line:** `./gradlew ':x-pack:plugin:spatial:internalClusterTest' --tests "org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.testGeoShapeGeoTile" -Dtests.seed=7F1C1C35D817A418 -Dtests.locale=es-US -Dtests.timezone=Africa/Dar_es_Salaam -Druntime.java=17` **Applicable branches:** master, 8.3 **Reproduces locally?:** Yes **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT&tests.test=testGeoShapeGeoTile **Failure excerpt:** ``` java.lang.AssertionError: Expected: <170L> but: was <169L> at __randomizedtesting.SeedInfo.seed([7F1C1C35D817A418:A9FF02F3D52E7F38]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.junit.Assert.assertThat(Assert.java:923) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.assertQuery(GeoGridAggAndQueryConsistencyIT.java:220) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.doTestGrid(GeoGridAggAndQueryConsistencyIT.java:211) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.doTestGeotileGrid(GeoGridAggAndQueryConsistencyIT.java:106) at org.elasticsearch.xpack.spatial.search.GeoGridAggAndQueryConsistencyIT.testGeoShapeGeoTile(GeoGridAggAndQueryConsistencyIT.java:88) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:568) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831) at java.lang.Thread.run(Thread.java:833) ```
test
geogridaggandqueryconsistencyit testgeoshapegeotile failing build scan reproduction line gradlew x pack plugin spatial internalclustertest tests org elasticsearch xpack spatial search geogridaggandqueryconsistencyit testgeoshapegeotile dtests seed dtests locale es us dtests timezone africa dar es salaam druntime java applicable branches master reproduces locally yes failure history failure excerpt java lang assertionerror expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch xpack spatial search geogridaggandqueryconsistencyit assertquery geogridaggandqueryconsistencyit java at org elasticsearch xpack spatial search geogridaggandqueryconsistencyit dotestgrid geogridaggandqueryconsistencyit java at org elasticsearch xpack spatial search geogridaggandqueryconsistencyit dotestgeotilegrid geogridaggandqueryconsistencyit java at org elasticsearch xpack spatial search geogridaggandqueryconsistencyit testgeoshapegeotile geogridaggandqueryconsistencyit java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules 
noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
1
141,906
11,440,985,439
IssuesEvent
2020-02-05 10:43:43
WorldHealthOrganization/herams-backend
https://api.github.com/repos/WorldHealthOrganization/herams-backend
reopened
"Allow everything" should be removed from all levels.
test on staging
> We prefer to manage permissions individually.
1.0
"Allow everything" should be removed from all levels. - > We prefer to manage permissions individually.
test
allow everything should be removed from all levels we prefer to manage permissions individually
1
13,328
22,638,297,474
IssuesEvent
2022-06-30 21:33:05
NASA-PDS/pds-api
https://api.github.com/repos/NASA-PDS/pds-api
reopened
As a user, I want to only return the latest version of a product that has changed logical identifiers in its history
requirement B12.1 B13.0 p.should-have sprint-backlog c.search-api
<!-- For more information on how to populate this new feature request, see the PDS Wiki on User Story Development: https://github.com/NASA-PDS/nasa-pds.github.io/wiki/Issue-Tracking#user-story-development --> ## 💪 Motivation ...so that I am not confused by seeing superseded data returned in search results ## 📖 Additional Details <!-- Please provide any additional details or information that could help provide some context for the user story. --> Per the [parent epic](https://github.com/nasa-pds/pds-registry-app/issues/219), and [the design](https://github.com/nasa-pds/pds-registry-app/issues/229) we need to update the API to implement these changes so we can sufficiently understand the version history. See parent epic for more details. ## ⚖️ Acceptance Criteria **Given** the context products [urn:nasa:pds:context:instrument:crs.vg1::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0_deprecated.xml) and [urn:nasa:pds:context:instrument:vg1.crs::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0.xml) ingested into the registry **When I perform** a query of the API for `products/` and paginate through the results **Then I expect** I should only see the product metadata for [urn:nasa:pds:context:instrument:vg1.crs::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0.xml) returned, not the superseded/deprecated [urn:nasa:pds:context:instrument:crs.vg1::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0_deprecated.xml) **NOTE: This functionality should apply to all endpoints, not just the `products/` endpoints** <!-- For Internal Dev Team Use --> ## ⚙️ Engineering Details <!-- Provide some design / implementation details and/or a sub-task checklist as needed. Convert issue to Epic if estimate is outside the scope of 1 sprint. -->
1.0
As a user, I want to only return the latest version of a product that has changed logical identifiers in its history - <!-- For more information on how to populate this new feature request, see the PDS Wiki on User Story Development: https://github.com/NASA-PDS/nasa-pds.github.io/wiki/Issue-Tracking#user-story-development --> ## 💪 Motivation ...so that I am not confused by seeing superseded data returned in search results ## 📖 Additional Details <!-- Please provide any additional details or information that could help provide some context for the user story. --> Per the [parent epic](https://github.com/nasa-pds/pds-registry-app/issues/219), and [the design](https://github.com/nasa-pds/pds-registry-app/issues/229) we need to update the API to implement these changes so we can sufficiently understand the version history. See parent epic for more details. ## ⚖️ Acceptance Criteria **Given** the context products [urn:nasa:pds:context:instrument:crs.vg1::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0_deprecated.xml) and [urn:nasa:pds:context:instrument:vg1.crs::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0.xml) ingested into the registry **When I perform** a query of the API for `products/` and paginate through the results **Then I expect** I should only see the product metadata for [urn:nasa:pds:context:instrument:vg1.crs::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0.xml) returned, not the superseded/deprecated [urn:nasa:pds:context:instrument:crs.vg1::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0_deprecated.xml) **NOTE: This functionality should apply to all endpoints, not just the `products/` endpoints** <!-- For Internal Dev Team Use --> ## ⚙️ Engineering Details <!-- Provide some design / implementation details and/or a sub-task checklist as needed. Convert issue to Epic if estimate is outside the scope of 1 sprint. -->
non_test
as a user i want to only return the latest version of a product that has changed logical identifiers in it s history for more information on how to populate this new feature request see the pds wiki on user story development 💪 motivation so that i am not confused by seeing superseded data returned in search results 📖 additional details per the and we need to update the api to implement these changes so we can sufficiently understand the version history see parent epic for more details ⚖️ acceptance criteria given the context products and ingested into the registry when i perform a query of the api for products and paginate through the results then i expect i should only see the product metadata for returned not the superseded deprecated note this functionality should apply to all endpoints not just the products endpoints ⚙️ engineering details provide some design implementation details and or a sub task checklist as needed convert issue to epic if estimate is outside the scope of sprint
0
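To make the acceptance criteria in the record above concrete, here is a minimal client-side sketch that pages through a `products/` endpoint and asserts the superseded identifier never appears while the current one does. The base URL, the `start`/`limit` paging parameters, and the `{ data: [{ id }] }` response shape are illustrative assumptions, not the documented PDS Search API contract.

```typescript
// Sketch: page through a products endpoint and verify that the
// superseded logical identifier never appears in the results.
// ASSUMPTIONS (not the documented PDS Search API): the base URL,
// the start/limit paging parameters, and the { data: [{ id }] }
// response shape are all made up for illustration.

const BASE = "https://example.invalid/api/search/1"; // hypothetical
const SUPERSEDED = "urn:nasa:pds:context:instrument:crs.vg1::1.0";
const CURRENT = "urn:nasa:pds:context:instrument:vg1.crs::1.0";

interface ProductPage {
  data: { id: string }[];
}

async function allProductIds(): Promise<string[]> {
  const ids: string[] = [];
  const limit = 100;
  for (let start = 0; ; start += limit) {
    const res = await fetch(`${BASE}/products?start=${start}&limit=${limit}`);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const page = (await res.json()) as ProductPage;
    if (page.data.length === 0) break; // reached the last page
    ids.push(...page.data.map(p => p.id));
  }
  return ids;
}

async function checkLatestOnly(): Promise<void> {
  const ids = await allProductIds();
  if (ids.includes(SUPERSEDED)) {
    throw new Error(`superseded product returned: ${SUPERSEDED}`);
  }
  if (!ids.includes(CURRENT)) {
    throw new Error(`current product missing: ${CURRENT}`);
  }
  console.log("only the latest version is visible");
}

checkLatestOnly().catch(err => console.error(err));
```

In practice the same check would be repeated against the other endpoints, since the record notes the behaviour must apply beyond `products/`.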
7,928
3,121,083,250
IssuesEvent
2015-09-05 09:24:00
AmpersandJS/tools.ampersandjs.com
https://api.github.com/repos/AmpersandJS/tools.ampersandjs.com
closed
Ampersand-view-based component inspired by Polymer's core-drawer-panel
documentation enhancement
I was inspired by the idea of having views that provide basic layouts for web apps. I created [ampersand-drawer-view](https://github.com/scottcorgan/ampersand-drawer-view) to do that.
1.0
Ampersand-view-based component inspired by Polymer's core-drawer-panel - I was inspired by the idea of having views that provide basic layouts for web apps. I created [ampersand-drawer-view](https://github.com/scottcorgan/ampersand-drawer-view) to do that.
non_test
ampersand view based inspired by polymer s core drawer panel i was inspired by the idea of having views that provide basic layouts for web apps i created to do that
0
547,843
16,048,444,449
IssuesEvent
2021-04-22 16:06:34
KingSupernova31/RulesGuru
https://api.github.com/repos/KingSupernova31/RulesGuru
opened
Preloading images doesn't work on Chrome
bug low priority
Preloaded questions attempt to also preload card images by creating an image element with that url. This works on Firefox, but not on Chrome. (Image preloading has been temporarily disabled entirely to get around #25.)
1.0
Preloading images doesn't work on Chrome - Preloaded questions attempt to also preload card images by creating an image element with that url. This works on Firefox, but not on Chrome. (Image preloading has been temporarily disabled entirely to get around #25.)
non_test
preloading images doesn t work on chrome preloaded questions attempt to also preload card images by creating an image element with that url this works on firefox but not on chrome image preloading has been temporarily disabled entirely to get around
0
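The record above describes the preloading technique at issue: constructing detached `Image` objects so the browser fetches card art ahead of use. Below is a minimal sketch of that pattern, plus a declarative `<link rel="preload">` variant that is often suggested when programmatic preloads get dropped; both are illustrative and are not the RulesGuru codebase's actual code.

```typescript
// Sketch of the preloading pattern described in the record above:
// detached Image objects created purely so the browser fetches and
// caches each URL ahead of use. Everything here is illustrative;
// it is not the RulesGuru code.

const preloaded: HTMLImageElement[] = []; // keep references alive

function preloadImages(urls: string[]): Promise<void[]> {
  return Promise.all(
    urls.map(
      url =>
        new Promise<void>((resolve, reject) => {
          const img = new Image();
          img.onload = () => resolve();
          img.onerror = () => reject(new Error(`failed to preload ${url}`));
          img.src = url; // assigning src starts the request
          preloaded.push(img);
        })
    )
  );
}

// Declarative alternative: a standard <link rel="preload"> hint,
// which does not depend on keeping a JS object reachable.
function preloadViaLink(url: string): void {
  const link = document.createElement("link");
  link.rel = "preload";
  link.as = "image";
  link.href = url;
  document.head.appendChild(link);
}
```

Holding references in `preloaded` guards against the images being garbage-collected before use, one commonly cited reason such preloads appear to "not work" in some browsers.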
72,901
7,314,817,550
IssuesEvent
2018-03-01 08:54:34
hazelcast/hazelcast-jet
https://api.github.com/repos/hazelcast/hazelcast-jet
closed
com.hazelcast.jet.core.SnapshotFailureTest.when_snapshotFails_then_jobShouldNotFail
test-failure
https://hazelcast-l337.ci.cloudbees.com/job/Jet-pr-builder/com.hazelcast.jet$hazelcast-jet-core/2484/testReport/junit/com.hazelcast.jet.core/SnapshotFailureTest/when_snapshotFails_then_jobShouldNotFail/ ``` Error Message no failed snapshot appeared in snapshotsMap Stacktrace java.lang.AssertionError: no failed snapshot appeared in snapshotsMap at com.hazelcast.jet.core.SnapshotFailureTest.when_snapshotFails_then_jobShouldNotFail(SnapshotFailureTest.java:121) ```
1.0
com.hazelcast.jet.core.SnapshotFailureTest.when_snapshotFails_then_jobShouldNotFail - https://hazelcast-l337.ci.cloudbees.com/job/Jet-pr-builder/com.hazelcast.jet$hazelcast-jet-core/2484/testReport/junit/com.hazelcast.jet.core/SnapshotFailureTest/when_snapshotFails_then_jobShouldNotFail/ ``` Error Message no failed snapshot appeared in snapshotsMap Stacktrace java.lang.AssertionError: no failed snapshot appeared in snapshotsMap at com.hazelcast.jet.core.SnapshotFailureTest.when_snapshotFails_then_jobShouldNotFail(SnapshotFailureTest.java:121) ```
test
com hazelcast jet core snapshotfailuretest when snapshotfails then jobshouldnotfail error message no failed snapshot appeared in snapshotsmap stacktrace java lang assertionerror no failed snapshot appeared in snapshotsmap at com hazelcast jet core snapshotfailuretest when snapshotfails then jobshouldnotfail snapshotfailuretest java
1
285,192
24,649,267,338
IssuesEvent
2022-10-17 17:12:33
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
closed
[DocDB] YbAdminSnapshotScheduleTest.RestoreAfterSplit test failures.
kind/bug kind/failing-test area/docdb priority/medium
Jira Link: [[DB-454]](https://yugabyte.atlassian.net/browse/DB-454) ### Description Need to fix. Source: https://detective.dev.yugabyte.com/stability?num_commits=50&sort=commit&threshold=10 [DB-454]: https://yugabyte.atlassian.net/browse/DB-454?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
1.0
[DocDB] YbAdminSnapshotScheduleTest.RestoreAfterSplit test failures. - Jira Link: [[DB-454]](https://yugabyte.atlassian.net/browse/DB-454) ### Description Need to fix. Source: https://detective.dev.yugabyte.com/stability?num_commits=50&sort=commit&threshold=10 [DB-454]: https://yugabyte.atlassian.net/browse/DB-454?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
test
ybadminsnapshotscheduletest restoreaftersplit test failures jira link description need to fix source
1