Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14,067 | 16,890,488,014 | IssuesEvent | 2021-06-23 08:39:58 | arcus-azure/arcus.messaging | https://api.github.com/repos/arcus-azure/arcus.messaging | opened | Move `ServiceBusReceiver` to options model for future-proof message routing | area:message-processing enhancement integration:service-bus | **Is your feature request related to a problem? Please describe.**
Move our `ServiceBusReceiver` model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the Azure Functions/message pump to the router.
**Describe alternatives you've considered**
Adding new stuff to the signature, but that requires breaking changes. | 1.0 | Move `ServiceBusReceiver` to options model for future-proof message routing - **Is your feature request related to a problem? Please describe.**
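The refactoring this record asks for, moving a parameter off the router's signature and into an options model so that later additions stop being breaking changes, is a general API-evolution pattern. The real project is .NET; the sketch below only illustrates the pattern in Python, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical options model: future routing inputs (for example, data coming
# from the Azure Functions side or the message pump) are added here as new
# optional fields, so the router's public signature never has to change.
@dataclass
class MessageRoutingOptions:
    receiver: object = None                      # stand-in for ServiceBusReceiver
    extras: dict = field(default_factory=dict)   # room for future additions

def route_message(message: str, options: MessageRoutingOptions) -> str:
    # Existing callers keep working unchanged when new fields appear above.
    receiver_name = type(options.receiver).__name__ if options.receiver else "none"
    return f"routed {message!r} via {receiver_name}"

print(route_message("order-created", MessageRoutingOptions()))
```

Adding a new routing input later means adding one optional field to `MessageRoutingOptions`, which is exactly the non-breaking evolution the issue describes.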
Move our `ServiceBusReceiver` model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the Azure Functions/message pump to the router.
**Describe alternatives you've considered**
Adding new stuff to the signature, but that requires breaking changes. | non_test | move servicebusreceiver to options model for future proof message routing is your feature request related to a problem please describe move our servicebusreceiver model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the azure functions message pump to the router describe alternatives you ve considered adding new stuff to the signature but that requires breaking changes | 0 |
744,168 | 25,932,251,419 | IssuesEvent | 2022-12-16 11:01:47 | Avaiga/taipy-core | https://api.github.com/repos/Avaiga/taipy-core | closed | Dev mode : Modify the `_load_all` and `load_all_by` to take a version number into account | Core: Versioning ๐ง Priority: High ๐ Staff only | ### Description
Change the repository APIs `_load_all` and `load_all_by` to list only the entities related to a version.
- Make sure to modify the manager APIs if necessary
- By default, the current version should be used
- Evaluate new performances in integration-testing repository
| 1.0 | Dev mode : Modify the `_load_all` and `load_all_by` to take a version number into account - ### Description
Change the repository APIs `_load_all` and `load_all_by` to list only the entities related to a version.
- Make sure to modify the manager APIs if necessary
- By default, the current version should be used
- Evaluate new performances in integration-testing repository
| non_test | dev mode modify the load all and load all by to take a version number into account description change the repository apis load all and load all by to list only the entities related to a version make sure to modify the manager apis if necessary by default the current version should be used evaluate new performances in integration testing repository | 0 |
233,808 | 19,071,607,357 | IssuesEvent | 2021-11-27 01:49:06 | boostcampwm-2021/Web11-Donggle | https://api.github.com/repos/boostcampwm-2021/Web11-Donggle | opened | test/express Writing and running Express test code | Test BE Middle | ### 🎨 Feature description
- Writing and running Express test code
### 📝 What to implement
- [ ] Write Jest-based Test code and summarize the run results
### 🚧 Notes
Things to review carefully when implementing the feature
(### Screenshots)
| 1.0 | test/express Writing and running Express test code - ### 🎨 Feature description
- Writing and running Express test code
### 📝 What to implement
- [ ] Write Jest-based Test code and summarize the run results
### 🚧 Notes
Things to review carefully when implementing the feature
(### Screenshots)
| test | test express writing and running express test code 🎨 feature description
writing and running express test code 📝 what to implement write jest based test code and summarize the run results 🚧 notes things to review carefully when implementing the feature screenshots | 1 |
419,859 | 28,182,556,203 | IssuesEvent | 2023-04-04 04:36:25 | dotnetcore/BootstrapBlazor | https://api.github.com/repos/dotnetcore/BootstrapBlazor | opened | doc(Localizer): remove inject IStringLocalizer statement from sample code | documentation | ### Document describing which component
Add sample code for localization
| 1.0 | doc(Localizer): remove inject IStringLocalizer statement from sample code - ### Document describing which component
Add sample code for localization
| non_test | doc localizer remove inject istringlocalizer statement from sample code document describing which component add sample code for localization | 0 |
113,903 | 9,668,367,217 | IssuesEvent | 2019-05-21 15:00:42 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: ycsb/B/nodes=3/cpu=32 failed | C-test-failure O-roachtest O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/9671342fead0509bec0913bae4ae1f244660788e
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=ycsb/B/nodes=3/cpu=32 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1298500&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
cluster.go:1474,ycsb.go:41,cluster.go:1812,errgroup.go:57: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1298500-ycsb-b-nodes-3-cpu-32:4 -- ./workload run ycsb --init --initial-rows=1000000 --splits=100 --workload=B --concurrency=64 --histograms=logs/stats.json --ramp=1m --duration=10m {pgurl:1-3} returned:
stderr:
stdout:
I190521 13:20:50.572219 1 workload/workload.go:562 starting 100 splits
Error: ALTER TABLE usertable SPLIT AT VALUES ('user18375070177385010517'): pq: splits would be immediately discarded by merge queue; disable the merge queue first by running 'SET CLUSTER SETTING kv.range_merge.queue_enabled = false'
Error: ssh verbose log retained in /root/.roachprod/debug/ssh_35.229.75.151_2019-05-21T13:20:41Z: exit status 1
: exit status 1
cluster.go:1833,ycsb.go:44,ycsb.go:65,test.go:1251: Goexit() was called
``` | 2.0 | roachtest: ycsb/B/nodes=3/cpu=32 failed - SHA: https://github.com/cockroachdb/cockroach/commits/9671342fead0509bec0913bae4ae1f244660788e
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=ycsb/B/nodes=3/cpu=32 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1298500&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
cluster.go:1474,ycsb.go:41,cluster.go:1812,errgroup.go:57: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1298500-ycsb-b-nodes-3-cpu-32:4 -- ./workload run ycsb --init --initial-rows=1000000 --splits=100 --workload=B --concurrency=64 --histograms=logs/stats.json --ramp=1m --duration=10m {pgurl:1-3} returned:
stderr:
stdout:
I190521 13:20:50.572219 1 workload/workload.go:562 starting 100 splits
Error: ALTER TABLE usertable SPLIT AT VALUES ('user18375070177385010517'): pq: splits would be immediately discarded by merge queue; disable the merge queue first by running 'SET CLUSTER SETTING kv.range_merge.queue_enabled = false'
Error: ssh verbose log retained in /root/.roachprod/debug/ssh_35.229.75.151_2019-05-21T13:20:41Z: exit status 1
: exit status 1
cluster.go:1833,ycsb.go:44,ycsb.go:65,test.go:1251: Goexit() was called
``` | test | roachtest ycsb b nodes cpu failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests ycsb b nodes cpu pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch release cloud gce cluster go ycsb go cluster go errgroup go home agent work go src github com cockroachdb cockroach bin roachprod run teamcity ycsb b nodes cpu workload run ycsb init initial rows splits workload b concurrency histograms logs stats json ramp duration pgurl returned stderr stdout workload workload go starting splits error alter table usertable split at values pq splits would be immediately discarded by merge queue disable the merge queue first by running set cluster setting kv range merge queue enabled false error ssh verbose log retained in root roachprod debug ssh exit status exit status cluster go ycsb go ycsb go test go goexit was called | 1 |
189,571 | 15,191,273,924 | IssuesEvent | 2021-02-15 19:34:21 | knightzmc/pdm | https://api.github.com/repos/knightzmc/pdm | opened | Update README.MD to document latest API changes | documentation | `PluginDependencyManager`'s API has partially split into platform specific classes, and a few other things have also changed. | 1.0 | Update README.MD to document latest API changes - `PluginDependencyManager`'s API has partially split into platform specific classes, and a few other things have also changed. | non_test | update readme md to document latest api changes plugindependencymanager s api has partially split into platform specific classes and a few other things have also changed | 0 |
51,146 | 6,149,163,176 | IssuesEvent | 2017-06-27 19:28:22 | okta/okta-sdk-php | https://api.github.com/repos/okta/okta-sdk-php | opened | Group IT - Group Rule Operations | tests | 1. Create a user with credentials, activated by default → POST /api/v1/users?activate=true
```
const newUser = {
profile: {
firstName: 'John',
lastName: 'With-Group-Rule',
email: 'john-with-group-rule@example.com',
login: 'john-with-group-rule@example.com'
},
credentials: {
password: { value: 'Abcd1234' }
}
};
```
2. Create a new group → POST /api/v1/groups
```
const newGroup = {
profile: {
name: 'Group-Member API Test Group'
}
};
```
3. Create a group rule and verify rule executes → POST /api/v1/groups/rules
The rule below adds the user created in step 1 to the group created in step 2 upon rule execution/activation
```
const rule = {
type: 'group_rule',
name: 'Test group rule',
conditions: {
people: {
users: {
exclude: []
},
groups: {
exclude: []
}
},
expression: {
value: `user.lastName=="${createdUser.profile.lastName}"`,
type: 'urn:okta:expression:1.0'
}
},
actions: {
assignUserToGroups: {
groupIds: [
createdGroup.id
]
}
}
};
```
4. Activate the above rule and verify that user is added to the group → POST /api/v1/groups/rules/{{ruleId}}/lifecycle/activate
>I have noted that there is a slight delay between the rule activation and triggering the rule action.
Hence wait for 1-2 seconds before validating the rule execution, in this case, validating that user was added to the group.
5. List the group rules and validate the above rule is present → POST /api/v1/groups/rules
6. Deactivate the rule and update it (Rule can only be updated when it's deactivated) → POST /api/v1/groups/rules/{{ruleId}}/lifecycle/deactivate + POST /api/v1/groups/rules/{{ruleId}}
rule.name = 'Test group rule updated';
rule.conditions.expression.value = 'user.lastName==\"incorrect\"';
7. Activate the updated rule and verify that the user is removed from the group → POST /api/v1/groups/rules/{{ruleId}}/lifecycle/activate
8. Delete the user, group and group rule → POST /api/v1/users/{{userId}}/lifecycle/deactivate + DELETE /api/v1/users/{{userId}} + DELETE /api/v1/groups/{{groupId}} + DELETE /api/v1/groups/rules/{{ruleId}} | 1.0 | Group IT - Group Rule Operations - 1. Create a user with credentials, activated by default → POST /api/v1/users?activate=true
```
const newUser = {
profile: {
firstName: 'John',
lastName: 'With-Group-Rule',
email: 'john-with-group-rule@example.com',
login: 'john-with-group-rule@example.com'
},
credentials: {
password: { value: 'Abcd1234' }
}
};
```
2. Create a new group → POST /api/v1/groups
```
const newGroup = {
profile: {
name: 'Group-Member API Test Group'
}
};
```
3. Create a group rule and verify rule executes → POST /api/v1/groups/rules
The rule below adds the user created in step 1 to the group created in step 2 upon rule execution/activation
```
const rule = {
type: 'group_rule',
name: 'Test group rule',
conditions: {
people: {
users: {
exclude: []
},
groups: {
exclude: []
}
},
expression: {
value: `user.lastName=="${createdUser.profile.lastName}"`,
type: 'urn:okta:expression:1.0'
}
},
actions: {
assignUserToGroups: {
groupIds: [
createdGroup.id
]
}
}
};
```
4. Activate the above rule and verify that user is added to the group → POST /api/v1/groups/rules/{{ruleId}}/lifecycle/activate
>I have noted that there is a slight delay between the rule activation and triggering the rule action.
Hence wait for 1-2 seconds before validating the rule execution, in this case, validating that user was added to the group.
5. List the group rules and validate the above rule is present → POST /api/v1/groups/rules
6. Deactivate the rule and update it (Rule can only be updated when it's deactivated) → POST /api/v1/groups/rules/{{ruleId}}/lifecycle/deactivate + POST /api/v1/groups/rules/{{ruleId}}
rule.name = 'Test group rule updated';
rule.conditions.expression.value = 'user.lastName==\"incorrect\"';
7. Activate the updated rule and verify that the user is removed from the group → POST /api/v1/groups/rules/{{ruleId}}/lifecycle/activate
8. Delete the user, group and group rule โ POST /api/v1/users/{{userId}}/lifecycle/deactivate + DELETE /api/v1/users/{{userId}} + DELETE /api/v1/groups/{{groupId}} + DELETE /api/v1/groups/rules/{{ruleId}} | test | group it group rule operations create a user with credentials activated by default โ post api users activate true const newuser profile firstname john lastname with group rule email john with group rule example com login john with group rule example com credentials password value create a new group โ post api groups const newgroup profile name group member api test group create a group rule and verify rule executes โ post api groups rules the rule below adds the user created in step to the group created in step upon rule execution activation const rule type group rule name test group rule conditions people users exclude groups exclude expression value user lastname createduser profile lastname type urn okta expression actions assignusertogroups groupids createdgroup id activate the above rule and verify that user is added to the group โ post api groups rules ruleid lifecycle activate i have noted that there is a slight delay between the rule activation and triggering the rule action hence wait for seconds before validating the rule execution in this case validating that user was added to the group list the group rules and validate the above rule is present โ post api groups rules deactivate the rule and update it rule can only be updated when it s deactivated โ post api groups rules ruleid lifecycle deactivate post api groups rules ruleid rule name test group rule updated rule conditions expression value user lastname incorrect activate the updated rule and verify that the user is removed from the group โ post api groups rules ruleid lifecycle activate delete the user group and group rule โ post api users userid lifecycle deactivate delete api users userid delete api groups groupid delete api groups rules ruleid | 1 |
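The note in step 4 of the record above observes a short delay between rule activation and the rule actually taking effect, and suggests waiting 1-2 seconds before validating. A bounded poll is usually more robust than a fixed sleep. The original integration test is JavaScript; this generic Python helper only sketches the idea, and `fetch_group_member_ids` in the usage comment is a hypothetical name:

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.2):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())  # one final check at the deadline

# Usage sketch for step 4: instead of sleeping a fixed 1-2 seconds, keep
# checking membership until it appears (or the timeout gives a clear failure):
# assert wait_until(lambda: user_id in fetch_group_member_ids(group_id))
print(wait_until(lambda: True))
```

This makes the test both faster on quick runs (it stops polling as soon as the condition holds) and less flaky on slow ones.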
116,623 | 9,867,141,739 | IssuesEvent | 2019-06-21 09:26:48 | apoloval/artemisa | https://api.github.com/repos/apoloval/artemisa | closed | Motherboard test: PPI keyboard beep | testing | ## Description
Use ADT to test the keyboard clip beep produced from the PPI.
## Scenarios
- [ ] Test 3 of ADT passes
## Connections
- [x] Those from #89
- [x] C28 (former C29) capacitor that removes DC from `BEEP` signal
- [x] R10 (former R6) resistor that reduces the amplitude of `BEEP` signal
| 1.0 | Motherboard test: PPI keyboard beep - ## Description
Use ADT to test the keyboard clip beep produced from the PPI.
## Scenarios
- [ ] Test 3 of ADT passes
## Connections
- [x] Those from #89
- [x] C28 (former C29) capacitor that removes DC from `BEEP` signal
- [x] R10 (former R6) resistor that reduces the amplitude of `BEEP` signal
| test | motherboard test ppi keyboard beep description use adt to test the keyboard clip beep produced from the ppi scenarios test of adt passes connections those from former capacitor that removes dc from beep signal former resistor that reduces the amplitude of beep signal | 1 |
135,897 | 11,028,694,228 | IssuesEvent | 2019-12-06 12:18:57 | neuromation/cookiecutter-neuro-project | https://api.github.com/repos/neuromation/cookiecutter-neuro-project | closed | Docker API error during `make setup` | bug tests | ```
Saving job-c8131d2a-a0a4-457c-9815-1ccc2a07d252 -> image://artemyushkovskiy/neuromation-test-project:latest
Creating image image://artemyushkovskiy/neuromation-test-project:latest image from the job container
ERROR: Docker API error: Failed to save job 'job-c8131d2a-a0a4-457c-9815-1ccc2a07d252': DockerError(502, '<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body>\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.17.3</center>\r\n</body>\r\n</html>\r\n')
make[1]: *** [Makefile:64: setup] Error 7
``` | 1.0 | Docker API error during `make setup` - ```
Saving job-c8131d2a-a0a4-457c-9815-1ccc2a07d252 -> image://artemyushkovskiy/neuromation-test-project:latest
Creating image image://artemyushkovskiy/neuromation-test-project:latest image from the job container
ERROR: Docker API error: Failed to save job 'job-c8131d2a-a0a4-457c-9815-1ccc2a07d252': DockerError(502, '<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body>\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.17.3</center>\r\n</body>\r\n</html>\r\n')
make[1]: *** [Makefile:64: setup] Error 7
``` | test | docker api error during make setup saving job image artemyushkovskiy neuromation test project latest creating image image artemyushkovskiy neuromation test project latest image from the job container error docker api error failed to save job job dockererror r n bad gateway r n r n bad gateway r n nginx r n r n r n make error | 1 |
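The failure in this record is a transient 502 Bad Gateway from the registry gateway while saving the job image. A common mitigation for transient gateway errors is a bounded retry with backoff. The sketch below is generic Python; `flaky_save` merely simulates the failing call and is not the project's real code:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    # Retry only errors that look transient (an HTTP 502 here); anything else
    # is re-raised immediately, as is the final failed attempt.
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except RuntimeError as err:
            if attempt == attempts or "502" not in str(err):
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, ... backoff

calls = {"count": 0}

def flaky_save():
    # Simulated job save: fails once with the gateway error, then succeeds.
    calls["count"] += 1
    if calls["count"] < 2:
        raise RuntimeError("Docker API error: 502 Bad Gateway")
    return "image saved"

print(with_retries(flaky_save, base_delay=0.01))  # → image saved
```

Bounding the attempts keeps a persistent outage from hanging `make setup` indefinitely while still absorbing one-off gateway hiccups.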
210,592 | 23,754,887,515 | IssuesEvent | 2022-09-01 01:25:27 | peterwkc85/Cucumber_Test | https://api.github.com/repos/peterwkc85/Cucumber_Test | opened | CVE-2021-37714 (High) detected in jsoup-1.9.2.jar | security vulnerability | ## CVE-2021-37714 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsoup-1.9.2.jar</b></p></summary>
<p>jsoup HTML parser</p>
<p>Path to vulnerable library: /jar/jsoup-1.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jsoup-1.9.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jsoup is a Java library for working with HTML. Those using jsoup versions prior to 1.14.2 to parse untrusted HTML or XML may be vulnerable to DOS attacks. If the parser is run on user supplied input, an attacker may supply content that causes the parser to get stuck (loop indefinitely until cancelled), to complete more slowly than usual, or to throw an unexpected exception. This effect may support a denial of service attack. The issue is patched in version 1.14.2. There are a few available workarounds. Users may rate limit input parsing, limit the size of inputs based on system resources, and/or implement thread watchdogs to cap and timeout parse runtimes.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37714>CVE-2021-37714</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://jsoup.org/news/release-1.14.2">https://jsoup.org/news/release-1.14.2</a></p>
<p>Release Date: 2021-08-18</p>
<p>Fix Resolution: 1.14.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37714 (High) detected in jsoup-1.9.2.jar - ## CVE-2021-37714 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsoup-1.9.2.jar</b></p></summary>
<p>jsoup HTML parser</p>
<p>Path to vulnerable library: /jar/jsoup-1.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jsoup-1.9.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jsoup is a Java library for working with HTML. Those using jsoup versions prior to 1.14.2 to parse untrusted HTML or XML may be vulnerable to DOS attacks. If the parser is run on user supplied input, an attacker may supply content that causes the parser to get stuck (loop indefinitely until cancelled), to complete more slowly than usual, or to throw an unexpected exception. This effect may support a denial of service attack. The issue is patched in version 1.14.2. There are a few available workarounds. Users may rate limit input parsing, limit the size of inputs based on system resources, and/or implement thread watchdogs to cap and timeout parse runtimes.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37714>CVE-2021-37714</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://jsoup.org/news/release-1.14.2">https://jsoup.org/news/release-1.14.2</a></p>
<p>Release Date: 2021-08-18</p>
<p>Fix Resolution: 1.14.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in jsoup jar cve high severity vulnerability vulnerable library jsoup jar jsoup html parser path to vulnerable library jar jsoup jar dependency hierarchy x jsoup jar vulnerable library vulnerability details jsoup is a java library for working with html those using jsoup versions prior to to parse untrusted html or xml may be vulnerable to dos attacks if the parser is run on user supplied input an attacker may supply content that causes the parser to get stuck loop indefinitely until cancelled to complete more slowly than usual or to throw an unexpected exception this effect may support a denial of service attack the issue is patched in version there are a few available workarounds users may rate limit input parsing limit the size of inputs based on system resources and or implement thread watchdogs to cap and timeout parse runtimes publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
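Among the workarounds the advisory above lists is a thread watchdog that caps and times out parse runtimes. jsoup itself is Java, so this Python sketch only illustrates the shape of the watchdog pattern; `parse_fn` is a stand-in for a real parser, not jsoup:

```python
import concurrent.futures

def parse_with_timeout(parse_fn, payload, timeout=2.0):
    # Run the parser in a worker thread and stop waiting once the time budget
    # is spent, so a pathological input cannot stall the caller indefinitely.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(parse_fn, payload)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        raise RuntimeError("parser exceeded its time budget; input rejected")
    finally:
        pool.shutdown(wait=False)

print(parse_with_timeout(lambda html: html.upper(), "<p>ok</p>"))
```

Note the caveat: the caller gets control back, but the worker thread itself cannot be forcibly killed in Python, so this is best combined with the advisory's other suggestions (rate limiting and input size caps).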
13,412 | 3,710,406,597 | IssuesEvent | 2016-03-02 04:00:55 | KSP-KOS/KOS | https://api.github.com/repos/KSP-KOS/KOS | closed | Fix all occurrences of bogus data types in documentation | documentation | Because proper encapsulation types did not exist the docs often use things like `scalar`, `boolean`, `Boolean`, `scalar (metric tons)` to specify the type of a suffix parameter or a suffix return type. Now that we have proper user space types we should point to them.
As @hvacengi has pointed out we will probably want to take care of it after #1408 is done ;) | 1.0 | Fix all occurrences of bogus data types in documentation - Because proper encapsulation types did not exist the docs often use things like `scalar`, `boolean`, `Boolean`, `scalar (metric tons)` to specify the type of a suffix parameter or a suffix return type. Now that we have proper user space types we should point to them.
As @hvacengi has pointed out we will probably want to take care of it after #1408 is done ;) | non_test | fix all occurrences of bogus data types in documentation because proper encapsulation types did not exist the docs often use things like scalar boolean boolean scalar metric tons to specify the type of a suffix parameter or a suffix return type now that we have proper user space types we should point to them as hvacengi has pointed out we will probably want to take care of it after is done | 0 |
265,284 | 23,158,469,292 | IssuesEvent | 2022-07-29 15:08:58 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | opened | [CI] FileSettingsServiceIT testSettingsApplied failing | :Core/Infra/Core >test-failure Team:Core/Infra | **Build scan:**
https://gradle-enterprise.elastic.co/s/pijicvldxziw4/tests/:server:internalClusterTest/org.elasticsearch.reservedstate.service.FileSettingsServiceIT/testSettingsApplied
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.reservedstate.service.FileSettingsServiceIT.testSettingsApplied" -Dtests.seed=129CA1A8D872CBA3 -Dtests.locale=sr-BA -Dtests.timezone=Asia/Kamchatka -Druntime.java=17`
**Applicable branches:**
main
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.reservedstate.service.FileSettingsServiceIT&tests.test=testSettingsApplied
**Failure excerpt:**
```
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([129CA1A8D872CBA3]:0)
``` | 1.0 | [CI] FileSettingsServiceIT testSettingsApplied failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/pijicvldxziw4/tests/:server:internalClusterTest/org.elasticsearch.reservedstate.service.FileSettingsServiceIT/testSettingsApplied
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.reservedstate.service.FileSettingsServiceIT.testSettingsApplied" -Dtests.seed=129CA1A8D872CBA3 -Dtests.locale=sr-BA -Dtests.timezone=Asia/Kamchatka -Druntime.java=17`
**Applicable branches:**
main
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.reservedstate.service.FileSettingsServiceIT&tests.test=testSettingsApplied
**Failure excerpt:**
```
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([129CA1A8D872CBA3]:0)
``` | test | filesettingsserviceit testsettingsapplied failing build scan reproduction line gradlew server internalclustertest tests org elasticsearch reservedstate service filesettingsserviceit testsettingsapplied dtests seed dtests locale sr ba dtests timezone asia kamchatka druntime java applicable branches main reproduces locally no failure history failure excerpt java lang exception test abandoned because suite timeout was reached at randomizedtesting seedinfo seed | 1 |
217,551 | 16,855,806,476 | IssuesEvent | 2021-06-21 06:28:00 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | Case failureraftstore::test_region_heartbeat::test_server_down_peers_with_hibernate_regions | component/test | Latest build: <a href="https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/40088/display/redirect">https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/40088/display/redirect</a> | 1.0 | Case failureraftstore::test_region_heartbeat::test_server_down_peers_with_hibernate_regions - Latest build: <a href="https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/40088/display/redirect">https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/40088/display/redirect</a> | test | case failureraftstore test region heartbeat test server down peers with hibernate regions latest build a href | 1 |
20,770 | 16,023,619,572 | IssuesEvent | 2021-04-21 05:52:20 | imchillin/Anamnesis | https://api.github.com/repos/imchillin/Anamnesis | closed | Window won't regain opacity even after changing the slider value. | Bug Usability | Reported by Hecate#9242 in Discord. Workaround: Toggle "Custom Window Border" on then off.

| True | Window won't regain opacity even after changing the slider value. - Reported by Hecate#9242 in Discord. Workaround: Toggle "Custom Window Border" on then off.

| non_test | window won t regain opacity even after changing the slider value reported by hecate in discord workaround toggle custom window border on then off | 0 |
31,615 | 4,712,745,846 | IssuesEvent | 2016-10-14 17:47:15 | MachoThemes/newsmag-lite | https://api.github.com/repos/MachoThemes/newsmag-lite | closed | Add more recommended actions | enhancement needs testing tested | - [ ] Add a Widget - **description**: _Get started with Newsmag by adding a Slider Widget to the Header Area or by adding a Content widget. To achieve any of these actions, please head on to Customize -> Widgets -> Homepage : Header Area or Content Area and select any of the widgets presented there._
**Note**: as a callback, this could be a function that checks whether these 2 sidebars are empty.
| 2.0 | Add more recommended actions - - [ ] Add a Widget - **description**: _Get started with Newsmag by adding a Slider Widget to the Header Area or by adding a Content widget. To achieve any of these actions, please head on to Customize -> Widgets -> Homepage : Header Area or Content Area and select any of the widgets presented there._
**Note**: as a callback, this could be a function that checks whether these 2 sidebars are empty.
| test | add more recommended actions add a widget description get started with newsmag by adding a slider widget to the header area or by adding a content widget to achieve any of these actions please head on to customize widgets homepage header area or content area and select any of the widgets presented there note ca si callback poate fi o functie care verifica daca aceste sidebar uri sunt goale | 1 |
324,281 | 27,796,847,967 | IssuesEvent | 2023-03-17 13:11:11 | RubGonExp/git-ruben-test | https://api.github.com/repos/RubGonExp/git-ruben-test | closed | Para probar workflow en cambio automatico de estado | Testing | # Test Proposal
## Description
This test proposal is for testing a specific part of the project. Please review the details below and let me know if you have any questions or concerns.
## Test Objective
The objective of this test is to verify that the indicated parts of the project are working as expected.
## Test Steps
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Expected Results
The expected result of this test is to see possible improvements. If any unexpected results occur, please provide detailed information on the issue.
## Test Environment
Please run this test in the following environment:
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Additional Comments
If you have any additional comments or concerns, please feel free to include them below.
Thank you for your help testing this part of the project!
| 1.0 | Para probar workflow en cambio automatico de estado - # Test Proposal
## Description
This test proposal is for testing a specific part of the project. Please review the details below and let me know if you have any questions or concerns.
## Test Objective
The objective of this test is to verify that the indicated parts of the project are working as expected.
## Test Steps
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Expected Results
The expected result of this test is to see possible improvements. If any unexpected results occur, please provide detailed information on the issue.
## Test Environment
Please run this test in the following environment:
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Additional Comments
If you have any additional comments or concerns, please feel free to include them below.
Thank you for your help testing this part of the project!
| test | para probar workflow en cambio automatico de estado test proposal description this test proposal is for testing a specific part of the project please review the details below and let me know if you have any questions or concerns test objective the objective of this test is to verify that the indicated parts of the project are working as expected test steps go to click on scroll down to see error expected results the expected result of this test is to see possible improvements if any unexpected results occur please provide detailed information on the issue test environment please run this test in the following environment os browser version additional comments if you have any additional comments or concerns please feel free to include them below thank you for your help testing this part of the project | 1 |
171,810 | 13,249,709,497 | IssuesEvent | 2020-08-19 21:18:56 | microsoft/azuredatastudio | https://api.github.com/repos/microsoft/azuredatastudio | closed | A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader : Incorrect name as 'In collapse' is announced for a "^" control for screen reader users. | A11y_July27_2020_TestPass Area - Dashboard Bug Triage: Done | **"[Check out Accessibility Insights! ](https://nam06.safelinks.protection.outlook.com/?url=https://accessibilityinsights.io/&data=02%7c01%7cv-manai%40microsoft.com%7cb67b2c4b646d4f9561a208d6f4b5c39b%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c636965458847260936&sdata=T26HQfSGOlnuRQdX%2ByXk%2B2bxqgwFvCIVfuboZUWidYY%3D&reserved=0)- Identify accessibility bugs before check-in and make bug fixing faster and easier."**
GitHubTags:#A11y_AzureDataStudioJuly2020;#A11y_July27_2020_TestPass;#A11yMAS;#A11yTCS;#SQL Azure Data Studio;#Benchmark;#MAC;#Screenreader;#VoiceOver;#A11ySev2;#Benchmark;#MAS1.3.1;#MAS4.1.2;#MAS4.2.1;#FTP;
### Environment Details:
Application Name: Azure Data Studio
Application Version: 1.21.0-insider
Commit: eccf3cf
Date: 2020-07-24T09:28:31.172Z
VS Code: 1.48.0
Electron: 9.1.0
Chrome: 83.0.4103.122
Node.js: 12.14.1
V8: 8.3.110.13-electron.0
OS: Darwin x64 19.6.0
Operating system: macOS Catalina (Version 10.15.6 (19G73))
Screen Reader: VoiceOver
MAS References: MAS1.3.1, MAS4.1.2, MAS4.2.1
### Repro Steps:
1. Launch Azure Data Studio Insiders application.
2. Connect to server.
3. Double click on connected server or right click on it & select manage option to open the Dashboard.
4. Navigate to Home under Dashboard & hit enter.
5. Start the screen reader, then navigate to the "^" control and listen to the announcement made for this control.
### Actual:
When screen reader users navigate to the "^" control, its name is announced as 'Collapsed selected', which is incorrect.
### Expected:
The "^" control should be provided with name as 'Show Details" with expand/collapse state for screen reader users so that users are able to identify it's state on interacting with it.
### User Impact:
If the proper name and state of the control are not announced to screen reader users, then they will not understand how to interact with that control.
### Attachment link for Reference:
[11551_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader Incorrect name as 'In collapse' is announced for a "^" control for screen reader users.zip](https://github.com/microsoft/azuredatastudio/files/4986396/11551_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader.Incorrect.name.as.In.collapse.is.announced.for.a.Show.more.less.control.for.screen.reader.users.zip) | 1.0 | A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader : Incorrect name as 'In collapse' is announced for a "^" control for screen reader users. - **"[Check out Accessibility Insights! ](https://nam06.safelinks.protection.outlook.com/?url=https://accessibilityinsights.io/&data=02%7c01%7cv-manai%40microsoft.com%7cb67b2c4b646d4f9561a208d6f4b5c39b%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c636965458847260936&sdata=T26HQfSGOlnuRQdX%2ByXk%2B2bxqgwFvCIVfuboZUWidYY%3D&reserved=0)- Identify accessibility bugs before check-in and make bug fixing faster and easier."**
GitHubTags:#A11y_AzureDataStudioJuly2020;#A11y_July27_2020_TestPass;#A11yMAS;#A11yTCS;#SQL Azure Data Studio;#Benchmark;#MAC;#Screenreader;#VoiceOver;#A11ySev2;#Benchmark;#MAS1.3.1;#MAS4.1.2;#MAS4.2.1;#FTP;
### Environment Details:
Application Name: Azure Data Studio
Application Version: 1.21.0-insider
Commit: eccf3cf
Date: 2020-07-24T09:28:31.172Z
VS Code: 1.48.0
Electron: 9.1.0
Chrome: 83.0.4103.122
Node.js: 12.14.1
V8: 8.3.110.13-electron.0
OS: Darwin x64 19.6.0
Operating system: macOS Catalina (Version 10.15.6 (19G73))
Screen Reader: VoiceOver
MAS References: MAS1.3.1, MAS4.1.2, MAS4.2.1
### Repro Steps:
1. Launch Azure Data Studio Insiders application.
2. Connect to server.
3. Double click on connected server or right click on it & select manage option to open the Dashboard.
4. Navigate to Home under Dashboard & hit enter.
5. Start the screen reader, then navigate to the "^" control and listen to the announcement made for this control.
### Actual:
When screen reader users navigate to the "^" control, its name is announced as 'Collapsed selected', which is incorrect.
### Expected:
The "^" control should be provided with name as 'Show Details" with expand/collapse state for screen reader users so that users are able to identify it's state on interacting with it.
### User Impact:
If the proper name and state of the control are not announced to screen reader users, then they will not understand how to interact with that control.
### Attachment link for Reference:
[11551_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader Incorrect name as 'In collapse' is announced for a "^" control for screen reader users.zip](https://github.com/microsoft/azuredatastudio/files/4986396/11551_A11y_AzureDataStudio_Dashboard_Home-Toolbar_ScreenReader.Incorrect.name.as.In.collapse.is.announced.for.a.Show.more.less.control.for.screen.reader.users.zip) | test | azuredatastudio dashboard home toolbar screenreader incorrect name as in collapse is announced for a control for screen reader users identify accessibility bugs before check in and make bug fixing faster and easier โ githubtags testpass sql azure data studio benchmark mac screenreader voiceover benchmark ftp environment details application name azure data studio application version insider commit date vs code electron chrome node js electron os darwin operating system macos catalina version screen reader voiceover mas references repro steps launch azure data studio insiders application connect to server double click on connected server or right click on it select manage option to open the dashboard navigate to home under dashboard hit enter start screen reader the navigate to control and listen to the announcement been made for this control actual when screen reader users navigate to the control it name is announced as collapsed selected which is incorrect expected the control should be provided with name as show details with expand collapse state for screen reader users so that users are able to identify it s state on interacting with it user impact if proper name and state of the control is not announced to the scree reader users the they will not understand how to interact with that control attachment link for reference | 1 |
1,151 | 3,633,939,081 | IssuesEvent | 2016-02-11 16:18:19 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | NTR: cellular response to pulsatile (and: oscillatory) fluid shear stress | BHF-UCL miRNA New term request pending RNA processes | Dear Biocurators,
I am writing to request a new GO term, which arose whilst annotating paper PMID: 21768538 (Wu et al., 2011).
It is demonstrated in Figures 3 and 4 in this paper that the type of fluid shear stress, which endothelial cells are exposed to, affects the expression of miR-92a and, consequently, its target: the transcription factor Krüppel-like factor 2 (KLF2).
I was able to capture the data presented in Figure 3 using term GO:0071499: “cellular response to laminar fluid shear stress”.
In order to capture the information from Figure 4, I wish to request two sibling terms:
1) “cellular response to pulsatile fluid shear stress”
2) “cellular response to oscillatory fluid shear stress”
Like their existing sibling, these terms would be is_a child terms to two parents:
GO:0071498: “cellular response to fluid shear stress”; and
GO:0034616: “response to laminar fluid shear stress”.
DbxREFs: GOC:BHF, GOC:BHF_miRNA, GOC:bc
I will look forward to hearing from you with regard to my request.
Thank you,
Barbara
cc: @RLovering
cc: @rachhuntley
| 1.0 | NTR: cellular response to pulsatile (and: oscillatory) fluid shear stress - Dear Biocurators,
I am writing to request a new GO term, which arose whilst annotating paper PMID: 21768538 (Wu et al., 2011).
It is demonstrated in Figures 3 and 4 in this paper that the type of fluid shear stress, which endothelial cells are exposed to, affects the expression of miR-92a and, consequently, its target: the transcription factor Krüppel-like factor 2 (KLF2).
I was able to capture the data presented in Figure 3 using term GO:0071499: “cellular response to laminar fluid shear stress”.
In order to capture the information from Figure 4, I wish to request two sibling terms:
1) “cellular response to pulsatile fluid shear stress”
2) “cellular response to oscillatory fluid shear stress”
Like their existing sibling, these terms would be is_a child terms to two parents:
GO:0071498: “cellular response to fluid shear stress”; and
GO:0034616: “response to laminar fluid shear stress”.
DbxREFs: GOC:BHF, GOC:BHF_miRNA, GOC:bc
I will look forward to hearing from you with regard to my request.
Thank you,
Barbara
cc: @RLovering
cc: @rachhuntley
| non_test | ntr cellular response to pulsatile and oscillatory fluid shear stress dear biocurators i am writing to request a new go term which arose whilst annotating paper pmid wu et al it is demonstrated in figures and in this paper that the type of fluid shear stress which endothelial cells are exposed to affects the expression of mir and consequently its target the transcription factor kruยจppel like factor i was able to capture the data presented in figure using term go โcellular response to laminar fluid shear stressโ i order to capture the information from figure i wish to request two sibling terms โcellular response to pulsatile fluid shear stressโ โcellular response to oscillatory fluid shear stressโ like their existing sibling these terms would be is a child terms to two parents go โcellular response to fluid shear stressโ and go โresponse to laminar fluid shear stressโ dbxrefs goc bhf goc bhf mirna goc bc i will look forward to hearing from you with regard to my request thank you barbara cc rlovering cc rachhuntley | 0 |
167,936 | 6,353,584,003 | IssuesEvent | 2017-07-29 00:35:26 | Tapestes/ripple-ri-prevention | https://api.github.com/repos/Tapestes/ripple-ri-prevention | closed | Current airport update | auto-migrated Priority-High Type-Enhancement | ```
iOS and Android versions need to rerun current airport search under certain
parameters. For example, after the app has been running x amount of time or
when current location is greater than some distance from the current airport.
```
Original issue reported on code.google.com by `Tapes...@gmail.com` on 19 May 2011 at 6:49
| 1.0 | Current airport update - ```
iOS and Android versions need to rerun current airport search under certain
parameters. For example, after the app has been running x amount of time or
when current location is greater than some distance from the current airport.
```
Original issue reported on code.google.com by `Tapes...@gmail.com` on 19 May 2011 at 6:49
| non_test | current airport update ios and android versions need to rerun current airport search under certain parameters for example after the app has been running x amount of time or when current location is greater than some distance from the current airport original issue reported on code google com by tapes gmail com on may at | 0 |
551,767 | 16,188,729,594 | IssuesEvent | 2021-05-04 03:56:10 | remnoteio/remnote-issues | https://api.github.com/repos/remnoteio/remnote-issues | closed | Some search breadcrumbs are incomplete | checked fixed-in-next-update fixed-in-remnote-1.3.7 priority=2 | I have this Ctrl + P search result:

But it's actually this path:

After opening the rem, going back and searching again it shows correctly:

| 1.0 | Some search breadcrumbs are incomplete - I have this Ctrl + P search result:

But it's actually this path:

After opening the rem, going back and searching again it shows correctly:

| non_test | some search breadcrumbs are incomplete i have this ctrl p search result but it s actually this path after opening the rem going back and searching again it shows correctly | 0 |
295,752 | 25,502,383,777 | IssuesEvent | 2022-11-28 06:06:52 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | opened | Test: random branch generation in web | testplan-item | Refs https://github.com/microsoft/vscode-remote-repositories-github/issues/256
- [ ] Windows
- [ ] macOS
- [ ] Linux
Authors: @joyceerhl
Complexity: 2
---
1. Go to https://insiders.vscode.dev/github/microsoft/vscode
2. Ensure random branch name generation is enabled (by default it's already enabled in workspace settings for the VS Code repo), and ensure you've configured a branch name prefix with `git.branchPrefix`
3. Make a change and commit directly to the main branch
4. Verify you are prompted to create a new branch with a random branch name populated like in desktop | 1.0 | Test: random branch generation in web - Refs https://github.com/microsoft/vscode-remote-repositories-github/issues/256
- [ ] Windows
- [ ] macOS
- [ ] Linux
Authors: @joyceerhl
Complexity: 2
---
1. Go to https://insiders.vscode.dev/github/microsoft/vscode
2. Ensure random branch name generation is enabled (by default it's already enabled in workspace settings for the VS Code repo), and ensure you've configured a branch name prefix with `git.branchPrefix`
3. Make a change and commit directly to the main branch
4. Verify you are prompted to create a new branch with a random branch name populated like in desktop | test | test random branch generation in web refs windows macos linux authors joyceerhl complexity go to ensure random branch name generation is enabled by default it s already enabled in workspace settings for the vs code repo and ensure you ve configured a branch name prefix with git branchprefix make a change and commit directly to the main branch verify you are prompted to create a new branch with a random branch name populated like in desktop | 1 |
95,370 | 3,946,670,925 | IssuesEvent | 2016-04-28 06:14:52 | xcat2/xcat-core | https://api.github.com/repos/xcat2/xcat-core | closed | [fvt]2.12:mac could not be assigned to docker container | component:docker priority:normal status:pending type:bug | env:ubuntu 14.04.3
build:
lsdef - Version 2.12 (git commit d3807e08e6642445cbca9bdc889b4265df879352, built Thu Mar 24 09:31:05 EDT 2016)
How to reproduce:
```
root@c910f04x30v14:~# lsdef host01c08
Object name: host01c08
dockercpus=1
dockerflag={"AttachStdin":true,"AttachStdout":true,"AttachStderr":true,"OpenStdin":true,"Tty":true}
dockerhost=c910f04x30v150:2375
dockermemory=4096
groups=docker,all
ip=10.4.30.148
mac=42:91:0a:04:1e:09
-------------------------------------------> the mac set here is 42:91:0a:04:1e:09
mgt=docker
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
provmethod=ubuntu!/bin/bash
root@c910f04x30v14:~#
Then
root@c910f04x30v150:~# docker attach host01c08
root@01c01b34b703:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0a:04:1e:94 brd ff:ff:ff:ff:ff:ff
------------------------> mac is 02:42:0a:04:1e:94
inet 10.4.30.148/8 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe04:1e94/64 scope link
       valid_lft forever preferred_lft forever
```
| 1.0 | [fvt]2.12:mac could not be assigned to docker container - env:ubuntu 14.04.3
build:
lsdef - Version 2.12 (git commit d3807e08e6642445cbca9bdc889b4265df879352, built Thu Mar 24 09:31:05 EDT 2016)
How to reproduce:
```
root@c910f04x30v14:~# lsdef host01c08
Object name: host01c08
dockercpus=1
dockerflag={"AttachStdin":true,"AttachStdout":true,"AttachStderr":true,"OpenStdin":true,"Tty":true}
dockerhost=c910f04x30v150:2375
dockermemory=4096
groups=docker,all
ip=10.4.30.148
mac=42:91:0a:04:1e:09
-------------------------------------------> the mac set here is 42:91:0a:04:1e:09
mgt=docker
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
provmethod=ubuntu!/bin/bash
root@c910f04x30v14:~#
Then
root@c910f04x30v150:~# docker attach host01c08
root@01c01b34b703:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0a:04:1e:94 brd ff:ff:ff:ff:ff:ff
------------------------> mac is 02:42:0a:04:1e:94
inet 10.4.30.148/8 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe04:1e94/64 scope link
       valid_lft forever preferred_lft forever
```
| non_test | mac could not be assigned to docker container env ubuntu build lsdef version git commit built thu mar edt how to reproduce root lsdef object name dockercpus dockerflag attachstdin true attachstdout true attachstderr true openstdin true tty true dockerhost dockermemory groups docker all ip mac mac is setted here is mgt docker postbootscripts otherpkgs postscripts syslog remoteshell syncfiles provmethod ubuntu bin bash root then root docker attach root ip addr lo mtu qdisc noqueue state unknown group default link loopback brd inet scope host lo valid lft forever preferred lft forever scope host valid lft forever preferred lft forever mtu qdisc noqueue state up group default link ether brd ff ff ff ff ff ff mac is inet scope global valid lft forever preferred lft forever aff scope link valid lft forever preferred lft forever | 0 |
7,272 | 2,599,736,485 | IssuesEvent | 2015-02-23 11:18:53 | calblueprint/PHC | https://api.github.com/repos/calblueprint/PHC | closed | "check-in" and "check-out" buttons are not disabled even when grayed out. | bug medium priority | check-in and check-out buttons still work even when the services list has not been retrieved yet. this is a problem. | 1.0 | "check-in" and "check-out" buttons are not disabled even when grayed out. - check-in and check-out buttons still work even when the services list has not been retrieved yet. this is a problem. | non_test | check in and check out buttons are not disabled even when grayed out check in and check out buttons still work even when the services list has not been retrieved yet this is a problem | 0 |
167,213 | 20,725,924,378 | IssuesEvent | 2022-03-14 01:51:23 | jinuem/node-sass | https://api.github.com/repos/jinuem/node-sass | opened | CVE-2021-37701 (High) detected in tar-2.2.1.tgz | security vulnerability | ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /node-sass/package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-gyp-3.8.0.tgz (Root Library)
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. 
If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (node-gyp): 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37701 (High) detected in tar-2.2.1.tgz - ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /node-sass/package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-gyp-3.8.0.tgz (Root Library)
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. 
If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (node-gyp): 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file node sass package json path to vulnerable library node modules tar package json dependency hierarchy node gyp tgz root library x tar tgz vulnerable library vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems the cache checking logic used both and characters as path separators however is a valid filename character on posix systems by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite additionally a similar confusion could arise on case insensitive filesystems if a tar archive contained a directory at foo followed by a symbolic link named foo then on case insensitive file systems the creation of the symbolic link would remove the directory from the filesystem but not from the internal directory cache as it would 
not be treated as a cache hit a subsequent file entry within the foo directory would then be placed in the target of the symbolic link thinking that the directory had already been created these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution node gyp step up your open source security game with whitesource | 0 |
9,205 | 3,028,539,929 | IssuesEvent | 2015-08-04 06:28:29 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | opened | Simplify BalanceUnbalancedClusterTest | test | This takes 10 seconds or more, while other allocation tests are almost instantaneous. Can we simplify this? It looks like it tries to do a basic allocation (5 shards, 1 replica) of a new index when a *ton* of indexes already exist on just 4 nodes. Perhaps we could test similar circumstances without thousands of shards? Alternatively, we could just make this an integration test (leave the impl, but rename to IT). It doesn't really seem like a unit test as it is now.
Also, as a side note, this test is the only user of CatAllocationTestCase. Perhaps we can also eliminate this abstraction and just test directly (eliminating the zipped shard state)? @s1monw do you have any thoughts here? | 1.0 | Simplify BalanceUnbalancedClusterTest - This takes 10 seconds or more, while other allocation tests are almost instantaneous. Can we simplify this? It looks like it tries to do a basic allocation (5 shards, 1 replica) of a new index when a *ton* of indexes already exist on just 4 nodes. Perhaps we could test similar circumstances without thousands of shards? Alternatively, we could just make this an integration test (leave the impl, but rename to IT). It doesn't really seem like a unit test as it is now.
Also, as a side note, this test is the only user of CatAllocationTestCase. Perhaps we can also eliminate this abstraction and just test directly (eliminating the zipped shard state)? @s1monw do you have any thoughts here? | test | simplify balanceunbalancedclustertest this takes seconds or more while other allocation tests are almost instantaneous can we simplify this it looks like it tries to do a basic allocation shards replica of a new index when a ton of indexes already exist on just nodes perhaps we could test similar circumstances without thousands of shards alternatively we could just make this an integration test leave the impl but rename to it it doesn t really seem like a unit test as it is now also as a side note this test is the only user of catallocationtestcase perhaps we can also eliminate this abstraction and just test directly eliminating the zipped shard state do you have any thoughts here | 1 |
244,051 | 20,604,614,109 | IssuesEvent | 2022-03-06 19:48:23 | metaplex-foundation/metaplex | https://api.github.com/repos/metaplex-foundation/metaplex | closed | [Bug]: Problem Upload Candy Machine devnet | needs tests bug | ### Which package is this bug report for?
candy machine cli
### Issue description
i have problem when i upload candy machine on devnet..
### Command
```shell
ts-node E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts upload -k E:\Project\DEVforYOU\Solana\DEV\Setjwh4yMb4kjax7n47twKadjuWMMYz44WSF1unoyaQ.json -cp E:\Project\DEVforYOU\Solana\DEV\config.json E:\Project\DEVforYOU\Solana\DEV\Assets
```
### Relevant log output
```shell
>ts-node E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts upload -k E:\Project\DEVforYOU\Solana\DEV\Setjwh4yMb4kjax7n47twKadjuWMMYz44WSF1unoyaQ.json -cp E:\Project\DEVforYOU\Solana\DEV\config.json E:\Project\DEVforYOU\Solana\DEV\Assets
wallet public key: Setjwh4yMb4kjax7n47twKadjuWMMYz44WSF1unoyaQ
(node:5800) ExperimentalWarning: buffer.Blob is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
WARNING: The "arweave" storage option will be going away soon. Please migrate to arweave-bundle or arweave-sol for mainnet.
Beginning the upload for 33 (img+json) pairs
started at: 1646563372322
initializing candy machine
Translating error Error: Transaction was not confirmed in 60.00 seconds. It is unknown if it succeeded or failed. Check signature 2n9iunZWuk3jrN25NVuLFeeowHzg2BSh41TYCJf6c8NyotxwFaPjccBg1Gzz6n7bpNZoBD5kG2dAPa73c5fame8F using the Solana Explorer or CLI tools.
at Connection.confirmTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\connection.ts:2781:13)
at async sendAndConfirmRawTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\util\send-and-confirm-raw-transaction.ts:33:5)
at async Provider.send (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\provider.ts:114:18)
at async Object.rpc [as initializeCandyMachine] (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\program\namespace\rpc.ts:19:23)
at async createCandyMachineV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\helpers\accounts.ts:153:11)
at async uploadV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\commands\upload.ts:141:19)
at async Command.<anonymous> (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts:263:7)
Error deploying config to Solana network. Error: Transaction was not confirmed in 60.00 seconds. It is unknown if it succeeded or failed. Check signature 2n9iunZWuk3jrN25NVuLFeeowHzg2BSh41TYCJf6c8NyotxwFaPjccBg1Gzz6n7bpNZoBD5kG2dAPa73c5fame8F using the Solana Explorer or CLI tools.
at Connection.confirmTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\connection.ts:2781:13)
at async sendAndConfirmRawTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\util\send-and-confirm-raw-transaction.ts:33:5)
at async Provider.send (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\provider.ts:114:18)
at async Object.rpc [as initializeCandyMachine] (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\program\namespace\rpc.ts:19:23)
at async createCandyMachineV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\helpers\accounts.ts:153:11)
at async uploadV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\commands\upload.ts:141:19)
at async Command.<anonymous> (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts:263:7)
upload was not successful, please re-run. Error: Transaction was not confirmed in 60.00 seconds. It is unknown if it succeeded or failed. Check signature 2n9iunZWuk3jrN25NVuLFeeowHzg2BSh41TYCJf6c8NyotxwFaPjccBg1Gzz6n7bpNZoBD5kG2dAPa73c5fame8F using the Solana Explorer or CLI tools.
at Connection.confirmTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\connection.ts:2781:13)
at async sendAndConfirmRawTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\util\send-and-confirm-raw-transaction.ts:33:5)
at async Provider.send (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\provider.ts:114:18)
at async Object.rpc [as initializeCandyMachine] (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\program\namespace\rpc.ts:19:23)
at async createCandyMachineV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\helpers\accounts.ts:153:11)
at async uploadV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\commands\upload.ts:141:19)
at async Command.<anonymous> (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts:263:7)
```
### Operating system
win 10
### Priority this issue should have
Medium (should be fixed soon)
### Check the Docs First
- [X] I have checked the docs and it didn't solve my issue | 1.0 | [Bug]: Problem Upload Candy Machine devnet - ### Which package is this bug report for?
candy machine cli
### Issue description
i have problem when i upload candy machine on devnet..
### Command
```shell
ts-node E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts upload -k E:\Project\DEVforYOU\Solana\DEV\Setjwh4yMb4kjax7n47twKadjuWMMYz44WSF1unoyaQ.json -cp E:\Project\DEVforYOU\Solana\DEV\config.json E:\Project\DEVforYOU\Solana\DEV\Assets
```
### Relevant log output
```shell
>ts-node E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts upload -k E:\Project\DEVforYOU\Solana\DEV\Setjwh4yMb4kjax7n47twKadjuWMMYz44WSF1unoyaQ.json -cp E:\Project\DEVforYOU\Solana\DEV\config.json E:\Project\DEVforYOU\Solana\DEV\Assets
wallet public key: Setjwh4yMb4kjax7n47twKadjuWMMYz44WSF1unoyaQ
(node:5800) ExperimentalWarning: buffer.Blob is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
WARNING: The "arweave" storage option will be going away soon. Please migrate to arweave-bundle or arweave-sol for mainnet.
Beginning the upload for 33 (img+json) pairs
started at: 1646563372322
initializing candy machine
Translating error Error: Transaction was not confirmed in 60.00 seconds. It is unknown if it succeeded or failed. Check signature 2n9iunZWuk3jrN25NVuLFeeowHzg2BSh41TYCJf6c8NyotxwFaPjccBg1Gzz6n7bpNZoBD5kG2dAPa73c5fame8F using the Solana Explorer or CLI tools.
at Connection.confirmTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\connection.ts:2781:13)
at async sendAndConfirmRawTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\util\send-and-confirm-raw-transaction.ts:33:5)
at async Provider.send (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\provider.ts:114:18)
at async Object.rpc [as initializeCandyMachine] (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\program\namespace\rpc.ts:19:23)
at async createCandyMachineV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\helpers\accounts.ts:153:11)
at async uploadV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\commands\upload.ts:141:19)
at async Command.<anonymous> (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts:263:7)
Error deploying config to Solana network. Error: Transaction was not confirmed in 60.00 seconds. It is unknown if it succeeded or failed. Check signature 2n9iunZWuk3jrN25NVuLFeeowHzg2BSh41TYCJf6c8NyotxwFaPjccBg1Gzz6n7bpNZoBD5kG2dAPa73c5fame8F using the Solana Explorer or CLI tools.
at Connection.confirmTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\connection.ts:2781:13)
at async sendAndConfirmRawTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\util\send-and-confirm-raw-transaction.ts:33:5)
at async Provider.send (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\provider.ts:114:18)
at async Object.rpc [as initializeCandyMachine] (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\program\namespace\rpc.ts:19:23)
at async createCandyMachineV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\helpers\accounts.ts:153:11)
at async uploadV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\commands\upload.ts:141:19)
at async Command.<anonymous> (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts:263:7)
upload was not successful, please re-run. Error: Transaction was not confirmed in 60.00 seconds. It is unknown if it succeeded or failed. Check signature 2n9iunZWuk3jrN25NVuLFeeowHzg2BSh41TYCJf6c8NyotxwFaPjccBg1Gzz6n7bpNZoBD5kG2dAPa73c5fame8F using the Solana Explorer or CLI tools.
at Connection.confirmTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\connection.ts:2781:13)
at async sendAndConfirmRawTransaction (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\node_modules\@solana\web3.js\src\util\send-and-confirm-raw-transaction.ts:33:5)
at async Provider.send (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\provider.ts:114:18)
at async Object.rpc [as initializeCandyMachine] (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\node_modules\@project-serum\anchor\src\program\namespace\rpc.ts:19:23)
at async createCandyMachineV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\helpers\accounts.ts:153:11)
at async uploadV2 (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\commands\upload.ts:141:19)
at async Command.<anonymous> (E:\Project\DEVforYOU\Solana\DEV\metaplex\js\packages\cli\src\candy-machine-v2-cli.ts:263:7)
```
### Operating system
win 10
### Priority this issue should have
Medium (should be fixed soon)
### Check the Docs First
- [X] I have checked the docs and it didn't solve my issue | test | problem upload candy machine devnet which package is this bug report for candy machine cli issue description i have problem when i upload candy machine on devnet command shell ts node e project devforyou solana dev metaplex js packages cli src candy machine cli ts upload k e project devforyou solana dev json cp e project devforyou solana dev config json e project devforyou solana dev assets relevant log output shell ts node e project devforyou solana dev metaplex js packages cli src candy machine cli ts upload k e project devforyou solana dev json cp e project devforyou solana dev config json e project devforyou solana dev assets wallet public key node experimentalwarning buffer blob is an experimental feature this feature could change at any time use node trace warnings to show where the warning was created warning the arweave storage option will be going away soon please migrate to arweave bundle or arweave sol for mainnet beginning the upload for img json pairs started at initializing candy machine translating error error transaction was not confirmed in seconds it is unknown if it succeeded or failed check signature using the solana explorer or cli tools at connection confirmtransaction e project devforyou solana dev metaplex js node modules solana js src connection ts at async sendandconfirmrawtransaction e project devforyou solana dev metaplex js node modules solana js src util send and confirm raw transaction ts at async provider send e project devforyou solana dev metaplex js packages cli node modules project serum anchor src provider ts at async object rpc e project devforyou solana dev metaplex js packages cli node modules project serum anchor src program namespace rpc ts at async e project devforyou solana dev metaplex js packages cli src helpers accounts ts at async e project devforyou solana dev metaplex js packages cli src commands upload ts at async command e project devforyou solana 
dev metaplex js packages cli src candy machine cli ts error deploying config to solana network error transaction was not confirmed in seconds it is unknown if it succeeded or failed check signature using the solana explorer or cli tools at connection confirmtransaction e project devforyou solana dev metaplex js node modules solana js src connection ts at async sendandconfirmrawtransaction e project devforyou solana dev metaplex js node modules solana js src util send and confirm raw transaction ts at async provider send e project devforyou solana dev metaplex js packages cli node modules project serum anchor src provider ts at async object rpc e project devforyou solana dev metaplex js packages cli node modules project serum anchor src program namespace rpc ts at async e project devforyou solana dev metaplex js packages cli src helpers accounts ts at async e project devforyou solana dev metaplex js packages cli src commands upload ts at async command e project devforyou solana dev metaplex js packages cli src candy machine cli ts upload was not successful please re run error transaction was not confirmed in seconds it is unknown if it succeeded or failed check signature using the solana explorer or cli tools at connection confirmtransaction e project devforyou solana dev metaplex js node modules solana js src connection ts at async sendandconfirmrawtransaction e project devforyou solana dev metaplex js node modules solana js src util send and confirm raw transaction ts at async provider send e project devforyou solana dev metaplex js packages cli node modules project serum anchor src provider ts at async object rpc e project devforyou solana dev metaplex js packages cli node modules project serum anchor src program namespace rpc ts at async e project devforyou solana dev metaplex js packages cli src helpers accounts ts at async e project devforyou solana dev metaplex js packages cli src commands upload ts at async command e project devforyou solana dev metaplex js 
packages cli src candy machine cli ts operating system win priority this issue should have medium should be fixed soon check the docs first i have checked the docs and it didn t solve my issue | 1 |
200,678 | 7,010,395,862 | IssuesEvent | 2017-12-19 23:03:11 | sul-dlss/preservation_catalog | https://api.github.com/repos/sul-dlss/preservation_catalog | closed | (M2C) check_exist: do not update catalog if incoming version < catalog | bug high priority needs review | It is an error state if M2C check_exist has an incoming version < catalog -- this implies the catalog somehow got ahead of the actual Moab. We are, I believe, intended to do something like (check with Julian)
- indicate UNEXPECTED VERSION on disk in results
- set status to UNEXPECTED VERSION
- error "loudly"
- complain endlessly
It is telling that this condition passed the tests -- or perhaps the tests were changed due to a misunderstanding of the spec.
I suspect it remains correct to do a moab validation in this circumstance. | 1.0 | (M2C) check_exist: do not update catalog if incoming version < catalog - It is an error state if M2C check_exist has an incoming version < catalog -- this implies the catalog somehow got ahead of the actual Moab. We are, I believe, intended to do something like (check with Julian)
- indicate UNEXPECTED VERSION on disk in results
- set status to UNEXPECTED VERSION
- error "loudly"
- complain endlessly
It is telling that this condition passed the tests -- or perhaps the tests were changed due to a misunderstanding of the spec.
I suspect it remains correct to do a moab validation in this circumstance. | non_test | check exist do not update catalog if incoming version catalog it is an error state if check exist has an incoming version catalog this implies the catalog somehow got ahead of the actual moab we are i believe intended to do something like check with julian indicate unexpected version on disk in results set status to unexpected version error loudly complain endlessly it is telling that this condition passed the tests or perhaps the tests were changed due to a misunderstanding of the spec i suspect it remains correct to do a moab validation in this circumstance | 0 |
170,927 | 13,209,466,221 | IssuesEvent | 2020-08-15 11:33:56 | dominiksalvet/asus-fan-control | https://api.github.com/repos/dominiksalvet/asus-fan-control | opened | Looking for an ASUS ZenBook Pro Duo UX581GV tester | add new device looking for tester | Hi all! :wave:
Do you **have an ASUS ZenBook Pro Duo UX581GV** running Linux and struggle with fan configuration? Do you want to contribute to open source software that is used by hundreds of ASUS laptop users? Then certainly let us know and we will help you get through the process! :rocket:
There is a decent chance of asus-fan-control working on your device out of the box. Once you verify it, we can add the laptop model on the tested list in [readme.md](https://github.com/dominiksalvet/asus-fan-control/blob/master/readme.md) file with you as its first tester. It will help other people! :heart: Please, see #53, which was originally covering an unrelated issue and later hints it may be working.
---
@romarmorales Please take a look at this issue as well and consider making asus-fan-control even better. :star: | 1.0 | Looking for an ASUS ZenBook Pro Duo UX581GV tester - Hi all! :wave:
Do you **have an ASUS ZenBook Pro Duo UX581GV** running Linux and struggle with fan configuration? Do you want to contribute to open source software that is used by hundreds of ASUS laptop users? Then certainly let us know and we will help you get through the process! :rocket:
There is a decent chance of asus-fan-control working on your device out of the box. Once you verify it, we can add the laptop model on the tested list in [readme.md](https://github.com/dominiksalvet/asus-fan-control/blob/master/readme.md) file with you as its first tester. It will help other people! :heart: Please, see #53, which was originally covering an unrelated issue and later hints it may be working.
---
@romarmorales Please take a look at this issue as well and consider making asus-fan-control even better. :star: | test | looking for an asus zenbook pro duo tester hi all wave do you have an asus zenbook pro duo running linux and struggle with fan configuration do you want to contribute to open source software that is used by hundreds of asus laptop users then certainly let us know and we will help you get through the process rocket there is a decent chance of asus fan control working on your device out of the box once you verify it we can add the laptop model on the tested list in file with you as its first tester it will help other people heart please see which was originally covering an unrelated issue and later hints it may be working romarmorales please take a look at this issue as well and consider making asus fan control even better star | 1 |
265,114 | 23,146,434,561 | IssuesEvent | 2022-07-29 01:42:29 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Generalization test for the tag Servidores - Proventos de pensão - Claro dos Poções | generalization test development template-Síntese tecnologia informatica subtag-Proventos de Pensão tag-Servidores | DoD: Perform the generalization test of the validator for the tag Servidores - Proventos de pensão for the municipality of Claro dos Poções. | 1.0 | Generalization test for the tag Servidores - Proventos de pensão - Claro dos Poções - DoD: Perform the generalization test of the validator for the tag Servidores - Proventos de pensão for the municipality of Claro dos Poções. | test | generalization test for the tag servidores proventos de pensão claro dos poções dod perform the generalization test of the validator for the tag servidores proventos de pensão for the municipality of claro dos poções | 1
18,213 | 3,671,958,926 | IssuesEvent | 2016-02-22 10:18:10 | Microsoft/vscode | https://api.github.com/repos/Microsoft/vscode | opened | Install update - No progress/status while downloading an extension | testplan-item | **Issue:** #2835
**Assignees:**
- [ ] Windows
- [ ] OS X
- [ ] Linux
**Details:**
> TODO | 1.0 | Install update - No progress/status while downloading an extension - **Issue:** #2835
**Assignees:**
- [ ] Windows
- [ ] OS X
- [ ] Linux
**Details:**
> TODO | test | install update no progress status while downloading an extension issue assignees windows os x linux details todo | 1 |
161,115 | 6,109,584,003 | IssuesEvent | 2017-06-21 13:24:47 | Linaro/mr-provisioner | https://api.github.com/repos/Linaro/mr-provisioner | opened | Add inventory export support | area/asset management area/ui difficulty/easy enhancement priority/P2 | Add support to the admin UI for admins to generate & download a full inventory of all machines and all their properties (see #5) in a management-friendly format (e.g. CSV). | 1.0 | Add inventory export support - Add support to the admin UI for admins to generate & download a full inventory of all machines and all their properties (see #5) in a management-friendly format (e.g. CSV). | non_test | add inventory export support add support to the admin ui for admins to generate download a full inventory of all machines and all their properties see in a management friendly format e g csv | 0 |
74,388 | 9,037,331,266 | IssuesEvent | 2019-02-09 09:25:47 | RRZE-Webteam/FAU-Einrichtungen | https://api.github.com/repos/RRZE-Webteam/FAU-Einrichtungen | closed | Obscure display glitch in Chrome | Design Problem Verwirrend wontfix | On some websites with a short main navigation (only 3 items), Chrome wraps the third item when the text is enlarged to 120% (even though there would still be enough room on the right) and thus makes it disappear.
This only occurs in Chrome and only on some websites, such as https://www.cs12.tf.fau.de/
On the tf website, which has more elements in the main navigation, the problem does not occur!
 | 1.0 | Obscure display glitch in Chrome - On some websites with a short main navigation (only 3 items), Chrome wraps the third item when the text is enlarged to 120% (even though there would still be enough room on the right) and thus makes it disappear.
This only occurs in Chrome and only on some websites, such as https://www.cs12.tf.fau.de/
On the tf website, which has more elements in the main navigation, the problem does not occur!
 | non_test | obscure display glitch in chrome on some websites with a short main navigation only items chrome wraps the third item when the text is enlarged to even though there would still be enough room on the right and thus makes it disappear this only occurs in chrome and only on some websites such as on the tf website which has more elements in the main navigation the problem does not occur | 0
131,113 | 27,824,457,756 | IssuesEvent | 2023-03-19 15:58:25 | pinterest/ktlint | https://api.github.com/repos/pinterest/ktlint | closed | Indentation change since 0.38.1 | indentation-rule conflict-with-default-intellij-formatting ktlint-official-codestyle | ## Expected Behavior
Upgrading from 0.38.1 to latest I see a change in indentation with parameter names in a call.
For example this was allowed in 0.38.1:
someFunction(
parameterName =
someValue
.someProperty
.someCall()
)
## Observed Behavior
With 0.41.1 it now demands that it be indented like this:
someFunction(
parameterName =
someValue
.someProperty
.someCall()
)
Which is not very readable.
I can make it go away by moving someValue to same line as parameter name:
someFunction(
parameterName = someValue
.someProperty
.someCall()
)
But sometimes there is a longer expression than this example and it is more readable to move it to its own line.
| 1.0 | Indentation change since 0.38.1 - ## Expected Behavior
Upgrading from 0.38.1 to latest I see a change in indentation with parameter names in a call.
For example this was allowed in 0.38.1:
someFunction(
parameterName =
someValue
.someProperty
.someCall()
)
## Observed Behavior
With 0.41.1 it now demands that it be indented like this:
someFunction(
parameterName =
someValue
.someProperty
.someCall()
)
Which is not very readable.
I can make it go away by moving someValue to same line as parameter name:
someFunction(
parameterName = someValue
.someProperty
.someCall()
)
But sometimes there is a longer expression than this example and it is more readable to move it to its own line.
| non_test | indentation change since expected behavior upgrading from to latest i see a change in indentation with parameter names in a call for example this was allowed in somefunction parametername somevalue someproperty somecall observed behavior with it now demands that it be indented like this somefunction parametername somevalue someproperty somecall which is not very readable i can make it go away by moving somevalue to same line as parameter name somefunction parametername somevalue someproperty somecall but sometimes there is a longer expression than this example and it is more readable to move it to its own line | 0 |
55,494 | 6,902,229,653 | IssuesEvent | 2017-11-25 17:57:40 | simonrepp/playvienna.com | https://api.github.com/repos/simonrepp/playvienna.com | opened | Consider play:vienna section index as well | content design | I.e. not jumping right to about, but also presenting a media backdrop with menu
Note: Don't forget to remove `store.route.page || 'about'` hack in share widget if implementing this | 1.0 | Consider play:vienna section index as well - I.e. not jumping right to about, but also presenting a media backdrop with menu
Note: Don't forget to remove `store.route.page || 'about'` hack in share widget if implementing this | non_test | consider play vienna section index as well i e not jumping right to about but also presenting a media backdrop with menu note don t forget to remove store route page about hack in share widget if implementing this | 0 |
63,203 | 6,829,240,940 | IssuesEvent | 2017-11-08 23:28:33 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Health check enabled services get recreated during IPsec upgrade | kind/bug status/reopened status/resolved status/to-test version/1.6 | **Rancher versions:**
rancher/server:Upgrade from v1.6.10 to v1.6.11-rc4
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)** Ubuntu
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**DO
**Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)** Single node
**Environment Template: (Cattle/Kubernetes/Swarm/Mesos)**: Cattle
**Steps to Reproduce:**
1. Create a v1.6.10 setup and create health check enabled services and Loadbalancers
2. Upgrade rancher to v1.6.11-rc4
3. Upgrade Network Services stack
4. Upgrade IPsec to v0.2.0
All healthcheck enabled services and loadbalancers get recreated during IPsec upgrade.
Database shows the health check enabled services and load balancers getting removed as below:
```
mysql> select id, name,state,health_state, created, removed from instance where name like "%lb%";
+----+-----------------------------+---------+--------------+---------------------+---------------------+
| id | name | state | health_state | created | removed |
+----+-----------------------------+---------+--------------+---------------------+---------------------+
| 41 | PRESTACK-1-healthlb-1 | purged | unhealthy | 2017-10-25 21:56:58 | 2017-10-25 22:35:58 |
| 42 | PRESTACK-1-mylb-1 | purged | unhealthy | 2017-10-25 21:57:12 | 2017-10-25 22:36:03 |
| 43 | PRESTACK-1-ssllb-1 | purged | unhealthy | 2017-10-25 21:57:12 | 2017-10-25 22:35:24 |
| 45 | PRESTACK-1-globalhealthlb-1 | purged | unhealthy | 2017-10-25 21:57:12 | 2017-10-25 22:36:11 |
| 46 | PRESTACK-1-globalhealthlb-2 | purged | unhealthy | 2017-10-25 21:57:12 | 2017-10-25 22:35:26 |
| 47 | PRESTACK-1-globalhealthlb-3 | purged | unhealthy | 2017-10-25 21:57:13 | 2017-10-25 22:35:50 |
| 49 | PRESTACK-2-newstacklb-1 | purged | unhealthy | 2017-10-25 21:57:52 | 2017-10-25 22:35:24 |
| 51 | lb-test-client4020 | purged | NULL | 2017-10-25 21:58:28 | 2017-10-25 21:59:30 |
| 70 | PRESTACK-2-newstacklb-1 | purged | unhealthy | 2017-10-25 22:35:21 | 2017-10-25 22:36:09 |
| 71 | PRESTACK-1-ssllb-1 | purged | unhealthy | 2017-10-25 22:35:21 | 2017-10-25 22:36:00 |
| 74 | PRESTACK-1-globalhealthlb-2 | purged | unhealthy | 2017-10-25 22:35:33 | 2017-10-25 22:36:09 |
| 81 | PRESTACK-1-healthlb-1 | running | healthy | 2017-10-25 22:35:56 | NULL |
| 82 | PRESTACK-1-ssllb-1 | running | healthy | 2017-10-25 22:35:58 | NULL |
| 83 | PRESTACK-1-mylb-1 | running | healthy | 2017-10-25 22:36:01 | NULL |
| 89 | PRESTACK-2-newstacklb-1 | running | healthy | 2017-10-25 22:36:06 | NULL |
| 90 | PRESTACK-1-globalhealthlb-1 | running | healthy | 2017-10-25 22:36:06 | NULL |
| 91 | PRESTACK-1-globalhealthlb-2 | running | healthy | 2017-10-25 22:36:19 | NULL |
| 94 | PRESTACK-1-globalhealthlb-3 | running | healthy | 2017-10-25 22:36:19 | NULL |
+----+-----------------------------+---------+--------------+---------------------+---------------------+
mysql> select id, name,state, health_state,created, removed from instance where name like "%healthservice%";
+----+----------------------------------+---------+--------------+---------------------+---------------------+
| id | name | state | health_state | created | removed |
+----+----------------------------------+---------+--------------+---------------------+---------------------+
| 35 | PRESTACK-1-healthservice-1 | purged | unhealthy | 2017-10-25 21:56:43 | 2017-10-25 22:35:38 |
| 36 | PRESTACK-1-globalhealthservice-1 | purged | unhealthy | 2017-10-25 21:56:44 | 2017-10-25 22:35:58 |
| 39 | PRESTACK-1-globalhealthservice-2 | purged | unhealthy | 2017-10-25 21:56:44 | 2017-10-25 22:35:25 |
| 40 | PRESTACK-1-globalhealthservice-3 | purged | unhealthy | 2017-10-25 21:56:44 | 2017-10-25 22:35:45 |
| 73 | PRESTACK-1-globalhealthservice-2 | purged | unhealthy | 2017-10-25 22:35:33 | 2017-10-25 22:36:09 |
| 76 | PRESTACK-1-healthservice-1 | purged | unhealthy | 2017-10-25 22:35:37 | 2017-10-25 22:36:03 |
| 80 | PRESTACK-1-globalhealthservice-3 | running | healthy | 2017-10-25 22:35:49 | NULL |
| 84 | PRESTACK-1-healthservice-1 | running | healthy | 2017-10-25 22:36:01 | NULL |
| 87 | PRESTACK-1-globalhealthservice-1 | running | healthy | 2017-10-25 22:36:05 | NULL |
| 93 | PRESTACK-1-globalhealthservice-2 | running | healthy | 2017-10-25 22:36:19 | NULL |
+----+----------------------------------+---------+--------------+---------------------+---------------------+
10 rows in set (0.00 sec)
```
| 1.0 | Health check enabled services get recreated during IPsec upgrade - **Rancher versions:** rancher/server: upgrade from v1.6.10 to v1.6.11-rc4 | test | 1 |
50,238 | 6,066,514,199 | IssuesEvent | 2017-06-14 18:39:58 | Hienz/wms | https://api.github.com/repos/Hienz/wms | closed | WMS-0005 - [Design] Welcome platform | ready testing | The goal of this story is to build a welcome screen that greets users who have not yet logged in (possibly completely new users).
**Todo**
- Implement the base design and build the screen.
- Fill the text sections with "Lorem ipsum" for now; they will be replaced later.
- Implement the registration and login input fields and place them clearly and appropriately.
- It is important that registration and login work with a username-password pair, and that we do not think in terms of warehouses yet.
- For form validation, create two additional classes (valid and invalid) that the Angular side can then assign to the individual inputs (e.g. a red border if the password does not yet have eight characters, green once it does, etc.).
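The two validation classes could be sketched like this (a minimal sketch; the class names and colors are assumptions, not taken from the story):

```css
/* Hypothetical validation-state classes; the Angular layer would toggle
   these on each input (e.g. via ng-class) as the form is validated. */
.input-valid {
  border: 2px solid green; /* e.g. password has reached eight characters */
}
.input-invalid {
  border: 2px solid red;   /* e.g. password is still under eight characters */
}
```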
**Developer note**
- Nothing needs to work yet; only the design has to be in place, together with an empty controller for it.
- The state should go into app.js (it can even be MainCtrl).
| 1.0 | WMS-0005 - [Design] Welcome platform | test | 1 |
41,564 | 10,519,943,190 | IssuesEvent | 2019-09-29 21:29:40 | DependencyTrack/dependency-track | https://api.github.com/repos/DependencyTrack/dependency-track | closed | Healthcheck failing due to absence of curl | defect pending release | ### Current Behavior:
The healthcheck of the dtrack Docker container never reports the container to be healthy.
### Steps to Reproduce:
Start dtrack via Docker. The healthcheck is failing:
```
"Log": [
{
"Start": "2019-09-29T10:25:00.350085986+02:00",
"End": "2019-09-29T10:25:01.116131771+02:00",
"ExitCode": 1,
"Output": "/bin/sh: curl: not found\n"
},
{
"Start": "2019-09-29T10:30:01.257803268+02:00",
"End": "2019-09-29T10:30:01.554507141+02:00",
"ExitCode": 1,
"Output": "/bin/sh: curl: not found\n"
},
{
"Start": "2019-09-29T10:35:01.579994818+02:00",
"End": "2019-09-29T10:35:01.854022367+02:00",
"ExitCode": 1,
"Output": "/bin/sh: curl: not found\n"
},
{
"Start": "2019-09-29T10:40:01.876065488+02:00",
"End": "2019-09-29T10:40:02.193820073+02:00",
"ExitCode": 1,
"Output": "/bin/sh: curl: not found\n"
},
{
"Start": "2019-09-29T10:45:02.213114511+02:00",
"End": "2019-09-29T10:45:02.533643549+02:00",
"ExitCode": 1,
"Output": "/bin/sh: curl: not found\n"
}
]
```
### Expected Behavior:
The healthcheck becomes healthy.
### Environment:
- Dependency-Track Version: latest
- Distribution: Docker
- BOM Format & Version: n/a
- Database Server: PostgreSQL
- Browser: n/a
### Additional Details:
n/a
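Since the healthcheck fails only because `curl` is absent from the image, one possible fix is to base the check on a tool the image does ship. A minimal sketch, assuming an Alpine-based image where BusyBox `wget` is available; the port and path below are assumptions, not taken from the issue:

```dockerfile
# Hypothetical HEALTHCHECK that avoids curl, which is absent from the image.
# BusyBox wget ships with Alpine-based images by default; -q suppresses
# output and --spider checks reachability without downloading the body.
HEALTHCHECK --interval=5m --timeout=10s --retries=3 \
  CMD wget -q --spider http://localhost:8080/ || exit 1
```

Alternatively, installing `curl` into the image would keep the existing healthcheck command working unchanged.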
| 1.0 | Healthcheck failing due to absence of curl | non_test | 0 |
35,467 | 4,988,723,053 | IssuesEvent | 2016-12-08 09:25:30 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | github.com/cockroachdb/cockroach/pkg/storage: TestStoreRangeSystemSplits failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/28262d4d123bf520ce74dea444877f9c3494eda4
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=false
TAGS=stress
GOFLAGS=-race
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=84107&tab=buildLog
```
I161208 09:25:27.626036 39698 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161208 09:25:27.627219 39698 gossip/gossip.go:248 [n?] initial resolvers: []
W161208 09:25:27.627302 39698 gossip/gossip.go:1124 [n?] no resolvers found; use --join to specify a connected node
I161208 09:25:27.627439 39698 base/node_id.go:62 NodeID set to 1
I161208 09:25:27.627581 39698 gossip/gossip.go:290 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<>
I161208 09:25:27.652368 39698 storage/store.go:1229 [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161208 09:25:27.655503 39705 storage/replica_proposal.go:348 [s1,r1/1:/M{in-ax},@c42029b680] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900.000124ms following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=1970-01-01 00:00:00.000000123 +0000 UTC]
I161208 09:25:27.658208 39628 storage/split_queue.go:103 [split,s1,r1/1:/M{in-ax},@c42029b680] splitting at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0]
I161208 09:25:27.663968 39628 storage/replica_command.go:2380 [split,s1,r1/1:/M{in-ax},@c42029b680] initiating a split of this range at key /Table/11 [r2]
E161208 09:25:27.677916 39628 storage/queue.go:598 [split,s1,r1/1:/{Min-Table/11},@c42029b680] unable to split [n1,s1,r1/1:/{Min-Table/11}] at key "/Table/12/0": key range /Table/12/0-/Table/12/0 outside of bounds of range /Min-/Max
E161208 09:25:27.678752 39629 storage/queue.go:609 [replicate,s1,r1/1:/{Min-Table/11},@c42029b680] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
I161208 09:25:27.695793 39628 storage/split_queue.go:103 [split,s1,r2/1:/{Table/11-Max},@c42029bb00] splitting at keys [/Table/12/0 /Table/13/0 /Table/14/0]
I161208 09:25:27.696093 39628 storage/replica_command.go:2380 [split,s1,r2/1:/{Table/11-Max},@c42029bb00] initiating a split of this range at key /Table/12 [r3]
W161208 09:25:27.715956 39628 storage/stores.go:218 range not contained in one range: [/Meta2/Table/12,/Table/12/NULL), but have [/Min,/Table/11)
W161208 09:25:27.734497 39683 storage/intent_resolver.go:332 [n1,s1,r1/1:/{Min-Table/11}]: failed to push during intent resolution: failed to push "storage/replica_command.go:2450 (*Replica).adminSplitWithDescriptor" id=c2f72fa6 key=/Local/Range/"\x93"/RangeDescriptor rw=true pri=0.00590156 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,61 orig=0.000000123,61 max=0.000000123,61 wto=false rop=false
E161208 09:25:27.774305 39628 storage/queue.go:598 [split,s1,r2/1:/Table/1{1-2},@c42029bb00] unable to split [n1,s1,r2/1:/Table/1{1-2}] at key "/Table/13/0": key range /Table/13/0-/Table/13/0 outside of bounds of range /Table/11-/Max
E161208 09:25:27.789033 39629 storage/queue.go:609 [replicate,s1,r2/1:/Table/1{1-2},@c42029bb00] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
I161208 09:25:27.799100 39628 storage/split_queue.go:103 [split,s1,r3/1:/{Table/12-Max},@c420094000] splitting at keys [/Table/13/0 /Table/14/0]
I161208 09:25:27.799289 39628 storage/replica_command.go:2380 [split,s1,r3/1:/{Table/12-Max},@c420094000] initiating a split of this range at key /Table/13 [r4]
E161208 09:25:27.833335 39629 storage/queue.go:609 [replicate,s1,r3/1:/Table/1{2-3},@c420094000] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:27.833626 39628 storage/queue.go:598 [split,s1,r3/1:/Table/1{2-3},@c420094000] unable to split [n1,s1,r3/1:/Table/1{2-3}] at key "/Table/14/0": key range /Table/14/0-/Table/14/0 outside of bounds of range /Table/12-/Max
I161208 09:25:27.836091 39628 storage/split_queue.go:103 [split,s1,r4/1:/{Table/13-Max},@c4202b1200] splitting at keys [/Table/14/0]
I161208 09:25:27.836300 39628 storage/replica_command.go:2380 [split,s1,r4/1:/{Table/13-Max},@c4202b1200] initiating a split of this range at key /Table/14 [r5]
E161208 09:25:27.933555 39629 storage/queue.go:609 [replicate,s1,r4/1:/Table/1{3-4},@c4202b1200] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
I161208 09:25:27.933682 39628 storage/split_queue.go:103 [split,s1,r5/1:/{Table/14-Max},@c4202ae900] splitting at keys [/Table/50/0 /Table/51/0 /Table/52/0 /Table/53/0 /Table/54/0 /Table/55/0 /Table/56/0 /Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:27.933874 39628 storage/replica_command.go:2380 [split,s1,r5/1:/{Table/14-Max},@c4202ae900] initiating a split of this range at key /Table/50 [r6]
E161208 09:25:27.963947 39628 storage/queue.go:598 [split,s1,r5/1:/Table/{14-50},@c4202ae900] unable to split [n1,s1,r5/1:/Table/{14-50}] at key "/Table/51/0": key range /Table/51/0-/Table/51/0 outside of bounds of range /Table/14-/Max
I161208 09:25:27.965201 39628 storage/split_queue.go:103 [split,s1,r6/1:/{Table/50-Max},@c4201f0000] splitting at keys [/Table/51/0 /Table/52/0 /Table/53/0 /Table/54/0 /Table/55/0 /Table/56/0 /Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:27.965371 39628 storage/replica_command.go:2380 [split,s1,r6/1:/{Table/50-Max},@c4201f0000] initiating a split of this range at key /Table/51 [r7]
E161208 09:25:27.968964 39629 storage/queue.go:609 [replicate,s1,r5/1:/Table/{14-50},@c4202ae900] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.029694 39629 storage/queue.go:609 [replicate,s1,r6/1:/Table/5{0-1},@c4201f0000] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.032064 39628 storage/queue.go:598 [split,s1,r6/1:/Table/5{0-1},@c4201f0000] unable to split [n1,s1,r6/1:/Table/5{0-1}] at key "/Table/52/0": key range /Table/52/0-/Table/52/0 outside of bounds of range /Table/50-/Max
I161208 09:25:28.037258 39628 storage/split_queue.go:103 [split,s1,r7/1:/{Table/51-Max},@c4202aed80] splitting at keys [/Table/52/0 /Table/53/0 /Table/54/0 /Table/55/0 /Table/56/0 /Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.037431 39628 storage/replica_command.go:2380 [split,s1,r7/1:/{Table/51-Max},@c4202aed80] initiating a split of this range at key /Table/52 [r8]
E161208 09:25:28.054991 39628 storage/queue.go:598 [split,s1,r7/1:/Table/5{1-2},@c4202aed80] unable to split [n1,s1,r7/1:/Table/5{1-2}] at key "/Table/53/0": key range /Table/53/0-/Table/53/0 outside of bounds of range /Table/51-/Max
I161208 09:25:28.059000 39628 storage/split_queue.go:103 [split,s1,r8/1:/{Table/52-Max},@c4201f0480] splitting at keys [/Table/53/0 /Table/54/0 /Table/55/0 /Table/56/0 /Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.059211 39628 storage/replica_command.go:2380 [split,s1,r8/1:/{Table/52-Max},@c4201f0480] initiating a split of this range at key /Table/53 [r9]
E161208 09:25:28.061856 39629 storage/queue.go:609 [replicate,s1,r7/1:/Table/5{1-2},@c4202aed80] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.101183 39629 storage/queue.go:609 [replicate,s1,r8/1:/Table/5{2-3},@c4201f0480] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.102626 39628 storage/queue.go:598 [split,s1,r8/1:/Table/5{2-3},@c4201f0480] unable to split [n1,s1,r8/1:/Table/5{2-3}] at key "/Table/54/0": key range /Table/54/0-/Table/54/0 outside of bounds of range /Table/52-/Max
I161208 09:25:28.104683 39628 storage/split_queue.go:103 [split,s1,r9/1:/{Table/53-Max},@c42043c000] splitting at keys [/Table/54/0 /Table/55/0 /Table/56/0 /Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.105372 39628 storage/replica_command.go:2380 [split,s1,r9/1:/{Table/53-Max},@c42043c000] initiating a split of this range at key /Table/54 [r10]
E161208 09:25:28.119707 39628 storage/queue.go:598 [split,s1,r9/1:/Table/5{3-4},@c42043c000] unable to split [n1,s1,r9/1:/Table/5{3-4}] at key "/Table/55/0": key range /Table/55/0-/Table/55/0 outside of bounds of range /Table/53-/Max
I161208 09:25:28.120609 39628 storage/split_queue.go:103 [split,s1,r10/1:/{Table/54-Max},@c4202b1b00] splitting at keys [/Table/55/0 /Table/56/0 /Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.120815 39628 storage/replica_command.go:2380 [split,s1,r10/1:/{Table/54-Max},@c4202b1b00] initiating a split of this range at key /Table/55 [r11]
E161208 09:25:28.121646 39629 storage/queue.go:609 [replicate,s1,r9/1:/Table/5{3-4},@c42043c000] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.142487 39629 storage/queue.go:609 [replicate,s1,r10/1:/Table/5{4-5},@c4202b1b00] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.142641 39628 storage/queue.go:598 [split,s1,r10/1:/Table/5{4-5},@c4202b1b00] unable to split [n1,s1,r10/1:/Table/5{4-5}] at key "/Table/56/0": key range /Table/56/0-/Table/56/0 outside of bounds of range /Table/54-/Max
I161208 09:25:28.143587 39628 storage/split_queue.go:103 [split,s1,r11/1:/{Table/55-Max},@c4201f0900] splitting at keys [/Table/56/0 /Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.143794 39628 storage/replica_command.go:2380 [split,s1,r11/1:/{Table/55-Max},@c4201f0900] initiating a split of this range at key /Table/56 [r12]
E161208 09:25:28.176545 39629 storage/queue.go:609 [replicate,s1,r11/1:/Table/5{5-6},@c4201f0900] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.179601 39628 storage/queue.go:598 [split,s1,r11/1:/Table/5{5-6},@c4201f0900] unable to split [n1,s1,r11/1:/Table/5{5-6}] at key "/Table/57/0": key range /Table/57/0-/Table/57/0 outside of bounds of range /Table/55-/Max
I161208 09:25:28.180450 39628 storage/split_queue.go:103 [split,s1,r12/1:/{Table/56-Max},@c4202b0900] splitting at keys [/Table/57/0 /Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.180649 39628 storage/replica_command.go:2380 [split,s1,r12/1:/{Table/56-Max},@c4202b0900] initiating a split of this range at key /Table/57 [r13]
E161208 09:25:28.217488 39629 storage/queue.go:609 [replicate,s1,r12/1:/Table/5{6-7},@c4202b0900] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.219016 39628 storage/queue.go:598 [split,s1,r12/1:/Table/5{6-7},@c4202b0900] unable to split [n1,s1,r12/1:/Table/5{6-7}] at key "/Table/58/0": key range /Table/58/0-/Table/58/0 outside of bounds of range /Table/56-/Max
I161208 09:25:28.221806 39628 storage/split_queue.go:103 [split,s1,r13/1:/{Table/57-Max},@c420094d80] splitting at keys [/Table/58/0 /Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.221980 39628 storage/replica_command.go:2380 [split,s1,r13/1:/{Table/57-Max},@c420094d80] initiating a split of this range at key /Table/58 [r14]
E161208 09:25:28.247535 39629 storage/queue.go:609 [replicate,s1,r13/1:/Table/5{7-8},@c420094d80] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.254206 39628 storage/queue.go:598 [split,s1,r13/1:/Table/5{7-8},@c420094d80] unable to split [n1,s1,r13/1:/Table/5{7-8}] at key "/Table/59/0": key range /Table/59/0-/Table/59/0 outside of bounds of range /Table/57-/Max
I161208 09:25:28.255571 39628 storage/split_queue.go:103 [split,s1,r14/1:/{Table/58-Max},@c420095200] splitting at keys [/Table/59/0 /Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.256273 39628 storage/replica_command.go:2380 [split,s1,r14/1:/{Table/58-Max},@c420095200] initiating a split of this range at key /Table/59 [r15]
E161208 09:25:28.293442 39629 storage/queue.go:609 [replicate,s1,r14/1:/Table/5{8-9},@c420095200] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.304811 39628 storage/queue.go:598 [split,s1,r14/1:/Table/5{8-9},@c420095200] unable to split [n1,s1,r14/1:/Table/5{8-9}] at key "/Table/60/0": key range /Table/60/0-/Table/60/0 outside of bounds of range /Table/58-/Max
I161208 09:25:28.305534 39628 storage/split_queue.go:103 [split,s1,r15/1:/{Table/59-Max},@c4202af200] splitting at keys [/Table/60/0 /Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.305706 39628 storage/replica_command.go:2380 [split,s1,r15/1:/{Table/59-Max},@c4202af200] initiating a split of this range at key /Table/60 [r16]
E161208 09:25:28.325514 39629 storage/queue.go:609 [replicate,s1,r15/1:/Table/{59-60},@c4202af200] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.351309 39628 storage/queue.go:598 [split,s1,r15/1:/Table/{59-60},@c4202af200] unable to split [n1,s1,r15/1:/Table/{59-60}] at key "/Table/61/0": key range /Table/61/0-/Table/61/0 outside of bounds of range /Table/59-/Max
I161208 09:25:28.368888 39628 storage/split_queue.go:103 [split,s1,r16/1:/{Table/60-Max},@c42043c480] splitting at keys [/Table/61/0 /Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.369082 39628 storage/replica_command.go:2380 [split,s1,r16/1:/{Table/60-Max},@c42043c480] initiating a split of this range at key /Table/61 [r17]
E161208 09:25:28.408122 39629 storage/queue.go:609 [replicate,s1,r16/1:/Table/6{0-1},@c42043c480] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.419764 39628 storage/queue.go:598 [split,s1,r16/1:/Table/6{0-1},@c42043c480] unable to split [n1,s1,r16/1:/Table/6{0-1}] at key "/Table/62/0": key range /Table/62/0-/Table/62/0 outside of bounds of range /Table/60-/Max
I161208 09:25:28.426273 39628 storage/split_queue.go:103 [split,s1,r17/1:/{Table/61-Max},@c4201f0d80] splitting at keys [/Table/62/0 /Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.426483 39628 storage/replica_command.go:2380 [split,s1,r17/1:/{Table/61-Max},@c4201f0d80] initiating a split of this range at key /Table/62 [r18]
E161208 09:25:28.444283 39629 storage/queue.go:609 [replicate,s1,r17/1:/Table/6{1-2},@c4201f0d80] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.454615 39628 storage/queue.go:598 [split,s1,r17/1:/Table/6{1-2},@c4201f0d80] unable to split [n1,s1,r17/1:/Table/6{1-2}] at key "/Table/63/0": key range /Table/63/0-/Table/63/0 outside of bounds of range /Table/61-/Max
I161208 09:25:28.463926 39628 storage/split_queue.go:103 [split,s1,r18/1:/{Table/62-Max},@c42043c900] splitting at keys [/Table/63/0 /Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.464165 39628 storage/replica_command.go:2380 [split,s1,r18/1:/{Table/62-Max},@c42043c900] initiating a split of this range at key /Table/63 [r19]
E161208 09:25:28.533532 39628 storage/queue.go:598 [split,s1,r18/1:/Table/6{2-3},@c42043c900] unable to split [n1,s1,r18/1:/Table/6{2-3}] at key "/Table/64/0": key range /Table/64/0-/Table/64/0 outside of bounds of range /Table/62-/Max
I161208 09:25:28.534393 39628 storage/split_queue.go:103 [split,s1,r19/1:/{Table/63-Max},@c4202ae480] splitting at keys [/Table/64/0 /Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.535466 39628 storage/replica_command.go:2380 [split,s1,r19/1:/{Table/63-Max},@c4202ae480] initiating a split of this range at key /Table/64 [r20]
E161208 09:25:28.535637 39629 storage/queue.go:609 [replicate,s1,r18/1:/Table/6{2-3},@c42043c900] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.572022 39629 storage/queue.go:609 [replicate,s1,r19/1:/Table/6{3-4},@c4202ae480] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.572192 39628 storage/queue.go:598 [split,s1,r19/1:/Table/6{3-4},@c4202ae480] unable to split [n1,s1,r19/1:/Table/6{3-4}] at key "/Table/65/0": key range /Table/65/0-/Table/65/0 outside of bounds of range /Table/63-/Max
I161208 09:25:28.573535 39628 storage/split_queue.go:103 [split,s1,r20/1:/{Table/64-Max},@c420094900] splitting at keys [/Table/65/0 /Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.574276 39628 storage/replica_command.go:2380 [split,s1,r20/1:/{Table/64-Max},@c420094900] initiating a split of this range at key /Table/65 [r21]
E161208 09:25:28.594522 39629 storage/queue.go:609 [replicate,s1,r20/1:/Table/6{4-5},@c420094900] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161208 09:25:28.600050 39628 storage/queue.go:598 [split,s1,r20/1:/Table/6{4-5},@c420094900] unable to split [n1,s1,r20/1:/Table/6{4-5}] at key "/Table/66/0": key range /Table/66/0-/Table/66/0 outside of bounds of range /Table/64-/Max
I161208 09:25:28.601535 39628 storage/split_queue.go:103 [split,s1,r21/1:/{Table/65-Max},@c4201f1680] splitting at keys [/Table/66/0 /Table/67/0 /Table/68/0]
I161208 09:25:28.601846 39628 storage/replica_command.go:2380 [split,s1,r21/1:/{Table/65-Max},@c4201f1680] initiating a split of this range at key /Table/66 [r22]
E161208 09:25:28.624457 39628 storage/queue.go:598 [split,s1,r21/1:/Table/6{5-6},@c4201f1680] unable to split [n1,s1,r21/1:/Table/6{5-6}] at key "/Table/67/0": key range /Table/67/0-/Table/67/0 outside of bounds of range /Table/65-/Max
E161208 09:25:28.625101 39629 storage/queue.go:609 [replicate,s1,r21/1:/Table/6{5-6},@c4201f1680] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
I161208 09:25:28.625266 39628 storage/split_queue.go:103 [split,s1,r22/1:/{Table/66-Max},@c4202b0d80] splitting at keys [/Table/67/0 /Table/68/0]
I161208 09:25:28.625516 39628 storage/replica_command.go:2380 [split,s1,r22/1:/{Table/66-Max},@c4202b0d80] initiating a split of this range at key /Table/67 [r23]
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x6ab4ce]
goroutine 39633 [running]:
panic(0x1f58660, 0xc420014130)
/usr/local/go/src/runtime/panic.go:500 +0x1ae
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).Recover(0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:185 +0xd8
panic(0x1f58660, 0xc420014130)
/usr/local/go/src/runtime/panic.go:458 +0x271
github.com/cockroachdb/cockroach/pkg/storage.(*NodeLiveness).GetLiveness(0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/node_liveness.go:241 +0x6e
github.com/cockroachdb/cockroach/pkg/storage.(*NodeLiveness).IsLive(0x0, 0x1, 0x0, 0x0, 0x8bb2c97000)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/node_liveness.go:127 +0x5c
github.com/cockroachdb/cockroach/pkg/storage.(*consistencyQueue).shouldQueue(0xc421eaf880, 0x2b7505ecf390, 0xc421c99470, 0x7b, 0xc400000646, 0xc42029b680, 0xc421cd5800, 0x1c, 0x20, 0xc421d30b00, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/consistency_queue.go:77 +0x17a
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).MaybeAdd(0xc4218ce690, 0xc42029b680, 0x7b, 0xc400000646)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:358 +0x368
github.com/cockroachdb/cockroach/pkg/storage.(*consistencyQueue).MaybeAdd(0xc421eaf880, 0xc42029b680, 0x7b, 0x646)
<autogenerated>:473 +0x7c
github.com/cockroachdb/cockroach/pkg/storage.(*replicaScanner).waitAndProcess(0xc4201d03c0, 0x2b7505e96208, 0xc4200145c0, 0xecfdb1e07, 0xc426f6ae67, 0x3475c00, 0xc42160e300, 0xc421532090, 0xc42029b680, 0x1194)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scanner.go:221 +0x3b8
github.com/cockroachdb/cockroach/pkg/storage.(*replicaScanner).scanLoop.func1.1(0xc42029b680, 0x58)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scanner.go:269 +0x162
github.com/cockroachdb/cockroach/pkg/storage.(*storeReplicaVisitor).Visit(0xc4220a15c0, 0xc421fa4000)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:324 +0x5b0
github.com/cockroachdb/cockroach/pkg/storage.(*replicaScanner).scanLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scanner.go:271 +0x45a
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18480)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
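The panicking frame above shows `(*NodeLiveness).GetLiveness` invoked with a nil receiver (the first argument is `0x0`), reached via `consistencyQueue.shouldQueue` → `IsLive`. In Go, calling a method through a nil pointer compiles and runs fine until the method body dereferences the receiver, at which point it faults exactly as seen here (`SIGSEGV`, `addr=0x0`). A minimal, hypothetical sketch of that failure mode and the usual defensive guard — the type and method names below are illustrative only, not CockroachDB's actual API:

```go
package main

import "fmt"

// Liveness loosely mirrors the shape involved in the panic: a struct whose
// methods read fields through the pointer receiver.
type Liveness struct {
	live map[int]bool
}

// IsLive dereferences the receiver, so calling it on a nil *Liveness would
// panic with "invalid memory address or nil pointer dereference" — the same
// failure mode as GetLiveness with receiver 0x0 in the trace.
func (l *Liveness) IsLive(nodeID int) bool {
	return l.live[nodeID]
}

// IsLiveSafe shows the guard: a nil receiver reports "liveness unknown"
// instead of crashing, letting the caller skip the check.
func (l *Liveness) IsLiveSafe(nodeID int) (alive, ok bool) {
	if l == nil {
		return false, false
	}
	return l.live[nodeID], true
}

func main() {
	var l *Liveness // nil, as in the crashing goroutine
	if _, ok := l.IsLiveSafe(1); !ok {
		fmt.Println("liveness unavailable; skipping consistency check")
	}
}
```

A fix along these lines (or ensuring the queue is never started before the liveness record is wired up) would keep `shouldQueue` from tripping over the uninitialized pointer.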
goroutine 1 [chan receive]:
testing.(*T).Run(0xc42031c300, 0x21042f1, 0x1a, 0x24c2fb0, 0xa59b01)
/usr/local/go/src/testing/testing.go:647 +0x56e
testing.RunTests.func1(0xc42031c300)
/usr/local/go/src/testing/testing.go:793 +0xba
testing.tRunner(0xc42031c300, 0xc42004fb38)
/usr/local/go/src/testing/testing.go:610 +0xca
testing.RunTests(0x24c4a70, 0x2f94f00, 0x168, 0x168, 0xc42012a618)
/usr/local/go/src/testing/testing.go:799 +0x4bb
testing.(*M).Run(0xc42004fef0, 0xc420279cf0)
/usr/local/go/src/testing/testing.go:743 +0x130
github.com/cockroachdb/cockroach/pkg/storage_test.TestMain(0xc42004fef0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/main_test.go:57 +0x287
main.main()
github.com/cockroachdb/cockroach/pkg/storage/_test/_testmain.go:784 +0x1b6
goroutine 17 [syscall, 1 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2086 +0x1
goroutine 8 [chan receive]:
github.com/cockroachdb/cockroach/pkg/util/log.(*loggingT).flushDaemon(0x3477160)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1016 +0x85
created by github.com/cockroachdb/cockroach/pkg/util/log.init.1
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:581 +0xc3
goroutine 39632 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f183a0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39700 [chan receive]:
github.com/cockroachdb/cockroach/pkg/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:137 +0x95
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421eae1e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39721 [semacquire]:
sync.runtime_Semacquire(0xc420486ba4)
/usr/local/go/src/runtime/sema.go:47 +0x30
sync.(*WaitGroup).Wait(0xc420486b98)
/usr/local/go/src/sync/waitgroup.go:131 +0xbf
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Wait(0xc420486b00)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:187 +0x42
github.com/cockroachdb/cockroach/pkg/storage.(*Store).processRaft.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3357 +0x67
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc420d443d0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39704 [chan receive]:
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:171 +0x74
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232eee0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39714 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000141)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f1e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39715 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014b)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f200)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39698 [sleep]:
time.Sleep(0x8000000)
/usr/local/go/src/runtime/time.go:59 +0xe1
github.com/cockroachdb/cockroach/pkg/util.RetryForDuration(0xa7a358200, 0xc420937c00, 0xeae942, 0x30)
/go/src/github.com/cockroachdb/cockroach/pkg/util/testing.go:138 +0x116
github.com/cockroachdb/cockroach/pkg/util.SucceedsSoonDepth(0x1, 0x2fb6660, 0xc42269e3c0, 0xc420937c00)
/go/src/github.com/cockroachdb/cockroach/pkg/util/testing.go:117 +0x55
github.com/cockroachdb/cockroach/pkg/storage_test.TestStoreRangeSystemSplits.func2(0x13)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/client_split_test.go:880 +0x92d
github.com/cockroachdb/cockroach/pkg/storage_test.TestStoreRangeSystemSplits(0xc42269e3c0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/client_split_test.go:883 +0x49f
testing.tRunner(0xc42269e3c0, 0x24c2fb0)
/usr/local/go/src/testing/testing.go:610 +0xca
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:646 +0x530
goroutine 39713 [syscall, locked to thread]:
github.com/cockroachdb/cockroach/pkg/storage/engine._Cfunc_DBApplyBatchRepr(0x2b75092c5c40, 0xc4221c4000, 0x1da, 0x0, 0x0)
??:0 +0x75
github.com/cockroachdb/cockroach/pkg/storage/engine.dbApplyBatchRepr(0x2b75092c5c40, 0xc4221c4000, 0x1da, 0x400, 0x211ddc5, 0xc420e4b1c8)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:1392 +0xf2
github.com/cockroachdb/cockroach/pkg/storage/engine.(*RocksDB).ApplyBatchRepr(0xd49e81, 0x1fe44a0, 0x688306, 0x5ecbd0, 0xc422366d90, 0x1fe44a0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:436 +0x6f
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39702 [select]:
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).printStatsLoop(0xc42008f400, 0x2b7505e96208, 0xc4200145c0)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:209 +0x1011
github.com/cockroachdb/cockroach/pkg/kv.NewTxnCoordSender.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:193 +0xc0
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc4220926d0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39726 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).startGossip.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1284 +0x4b6
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421d381b0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39629 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18320)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39712 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000144)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f100)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39720 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014a)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f320)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39386 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*idAllocator).start.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/id_alloc.go:133 +0xaa3
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc422318f20)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39705 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014e)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232ef40)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39719 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014d)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f300)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39725 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).Start.func3()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1198 +0x19d
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f400)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39722 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).raftTickLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3383 +0x482
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc420d443e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39709 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000142)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f020)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39627 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18260)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39723 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).startCoalescedHeartbeatsLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3414 +0x1c5
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc420d443f0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39763 [select]:
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).heartbeatLoop(0xc42008f400, 0x2b7505ecf390, 0xc421ff15c0, 0xda421b1e19fa730e, 0x1e8b24c1511cc5b4)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:647 +0x4ff
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).updateState.func2(0x2b7505ecf390, 0xc421ed8900)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:919 +0x69
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc421532090, 0x2c8b7d6, 0x16, 0x398, 0x2fbe400, 0xc420b40140, 0xc42253aee0, 0x2b7505ecf390, 0xc421ed8900)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:264 +0xed
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:265 +0x2b0
goroutine 39631 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18380)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39710 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000143)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f040)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39727 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).startGossip.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1284 +0x4b6
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421d381e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39699 [chan receive]:
github.com/cockroachdb/cockroach/pkg/storage/engine.(*RocksDB).open.func1(0xc421992000)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:373 +0x64
created by github.com/cockroachdb/cockroach/pkg/storage/engine.(*RocksDB).open
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:374 +0x8fb
goroutine 39717 [runnable]:
sync.(*Mutex).Unlock(0xc421fb21a0)
/usr/local/go/src/sync/mutex.go:102
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc4201f16f8)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:95 +0xa1
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).tick(0xc4201f1680, 0xc4220a1500, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:2596 +0xb1
github.com/cockroachdb/cockroach/pkg/storage.(*Store).processTick(0xc42016ee00, 0x15, 0xc42182bed0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3339 +0x157
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:225 +0x3bf
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f280)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39630 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18360)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39388 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).maybeAddToPurgatory.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:633 +0xf66
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc422392ee0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39716 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014c)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f260)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39628 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).tryAddWriteCmd(0xc4202b0d80, 0x2b7505ecf390, 0xc421ed97a0, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1941 +0xdd4
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).addWriteCmd(0xc4202b0d80, 0x2b7505ecf390, 0xc421ed97a0, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1772 +0xa1
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).Send(0xc4202b0d80, 0x2b7505ecf390, 0xc421ed97a0, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1224 +0x26f
github.com/cockroachdb/cockroach/pkg/storage.(*Store).Send(0xc42016ee00, 0x2b7505ecf390, 0xc421ed9740, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:2471 +0x86a
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).Send(0xc420ea3ce0, 0x2b7505ecf390, 0xc4220e6de0, 0x0, 0x0, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:187 +0x24b
github.com/cockroachdb/cockroach/pkg/kv.(*senderTransport).SendNext(0xc421cd8320, 0xc420a72cc0)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/transport.go:309 +0x2c3
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendToReplicas(0xc4218ce0f0, 0x2b7505ecf318, 0xc420cbae40, 0x1dcd6500, 0xc4220a0b70, 0xc4218ce128, 0x3, 0xc421582760, 0x1, 0x1, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:1142 +0x376
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendRPC(0xc4218ce0f0, 0x2b7505ecf318, 0xc420cbae40, 0x3, 0xc421582760, 0x1, 0x1, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:410 +0x418
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendSingleRange(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210686c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:479 +0x1ab
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendPartialBatch(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210686c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:931 +0x3b4
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).divideAndSendBatchToRanges(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210686c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:810 +0x568
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).Send(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210683c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:625 +0x382
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).Send(0xc42008f400, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc420b25908, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:418 +0x7f2
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).sendInternal(0xc420b258c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc420b25908, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:587 +0x16f
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).send(0xc420b258c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:711 +0x63d
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).(github.com/cockroachdb/cockroach/pkg/internal/client.send)-fm(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:313 +0x7d
github.com/cockroachdb/cockroach/pkg/internal/client.sendAndFill(0xc421036f58, 0xc421980000, 0x0, 0xc421ce66e0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:418 +0x1ac
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Run(0xc420b258c0, 0xc421980000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:313 +0xfe
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).adminSplitWithDescriptor.func1(0xc420b258c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:2449 +0x9fd
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).Txn.func1(0xc420b258c0, 0xc421d26d00, 0x4000000000000000, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:468 +0x47
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Exec(0xc420b258c0, 0xc421d20101, 0x0, 0xc421d26cf0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:520 +0x234
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).Txn(0xc421992af0, 0x2b7505ed2eb8, 0xc4205e6f00, 0xc4208797c0, 0xc4210377a8, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:469 +0x298
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).adminSplitWithDescriptor(0xc4202b0d80, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc421ce61c8, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:2450 +0xe13
github.com/cockroachdb/cockroach/pkg/storage.(*splitQueue).process(0xc422093220, 0x2b7505ed2eb8, 0xc4205e6f00, 0x7b, 0x620, 0xc4202b0d80, 0xc421cd5800, 0x1c, 0x20, 0xc420eca340, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/split_queue.go:111 +0x331
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processReplica(0xc4218ce2d0, 0x2b7505ecf390, 0xc421ed8150, 0xc4202b0d80, 0xc42160e300, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:575 +0x5fc
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1.2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:498 +0x119
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask(0xc421532090, 0xc421037e40, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:224 +0x10e
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:505 +0x45a
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f182e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39703 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*StorePool).start.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store_pool.go:356 +0x51e
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421eaf460)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39707 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000146)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232efe0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39708 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000147)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f000)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39711 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000145)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f080)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39706 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000148)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232efc0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39718 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000149)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f2a0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
ERROR: exit status 2
make: *** [stress] Error 1
1 runs completed, 1 failures, over 1m25s
Makefile:138: recipe for target 'stress' failed
```
github.com/cockroachdb/cockroach/pkg/util.SucceedsSoonDepth(0x1, 0x2fb6660, 0xc42269e3c0, 0xc420937c00)
/go/src/github.com/cockroachdb/cockroach/pkg/util/testing.go:117 +0x55
github.com/cockroachdb/cockroach/pkg/storage_test.TestStoreRangeSystemSplits.func2(0x13)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/client_split_test.go:880 +0x92d
github.com/cockroachdb/cockroach/pkg/storage_test.TestStoreRangeSystemSplits(0xc42269e3c0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/client_split_test.go:883 +0x49f
testing.tRunner(0xc42269e3c0, 0x24c2fb0)
/usr/local/go/src/testing/testing.go:610 +0xca
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:646 +0x530
goroutine 39713 [syscall, locked to thread]:
github.com/cockroachdb/cockroach/pkg/storage/engine._Cfunc_DBApplyBatchRepr(0x2b75092c5c40, 0xc4221c4000, 0x1da, 0x0, 0x0)
??:0 +0x75
github.com/cockroachdb/cockroach/pkg/storage/engine.dbApplyBatchRepr(0x2b75092c5c40, 0xc4221c4000, 0x1da, 0x400, 0x211ddc5, 0xc420e4b1c8)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:1392 +0xf2
github.com/cockroachdb/cockroach/pkg/storage/engine.(*RocksDB).ApplyBatchRepr(0xd49e81, 0x1fe44a0, 0x688306, 0x5ecbd0, 0xc422366d90, 0x1fe44a0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:436 +0x6f
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39702 [select]:
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).printStatsLoop(0xc42008f400, 0x2b7505e96208, 0xc4200145c0)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:209 +0x1011
github.com/cockroachdb/cockroach/pkg/kv.NewTxnCoordSender.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:193 +0xc0
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc4220926d0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39726 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).startGossip.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1284 +0x4b6
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421d381b0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39629 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18320)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39712 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000144)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f100)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39720 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014a)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f320)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39386 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*idAllocator).start.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/id_alloc.go:133 +0xaa3
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc422318f20)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39705 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014e)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232ef40)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39719 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014d)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f300)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39725 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).Start.func3()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1198 +0x19d
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f400)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39722 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).raftTickLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3383 +0x482
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc420d443e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39709 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000142)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f020)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39627 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18260)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39723 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).startCoalescedHeartbeatsLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3414 +0x1c5
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc420d443f0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39763 [select]:
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).heartbeatLoop(0xc42008f400, 0x2b7505ecf390, 0xc421ff15c0, 0xda421b1e19fa730e, 0x1e8b24c1511cc5b4)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:647 +0x4ff
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).updateState.func2(0x2b7505ecf390, 0xc421ed8900)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:919 +0x69
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc421532090, 0x2c8b7d6, 0x16, 0x398, 0x2fbe400, 0xc420b40140, 0xc42253aee0, 0x2b7505ecf390, 0xc421ed8900)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:264 +0xed
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:265 +0x2b0
goroutine 39631 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18380)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39710 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000143)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f040)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39727 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).startGossip.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1284 +0x4b6
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421d381e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39699 [chan receive]:
github.com/cockroachdb/cockroach/pkg/storage/engine.(*RocksDB).open.func1(0xc421992000)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:373 +0x64
created by github.com/cockroachdb/cockroach/pkg/storage/engine.(*RocksDB).open
/go/src/github.com/cockroachdb/cockroach/pkg/storage/engine/rocksdb.go:374 +0x8fb
goroutine 39717 [runnable]:
sync.(*Mutex).Unlock(0xc421fb21a0)
/usr/local/go/src/sync/mutex.go:102
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc4201f16f8)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:95 +0xa1
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).tick(0xc4201f1680, 0xc4220a1500, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:2596 +0xb1
github.com/cockroachdb/cockroach/pkg/storage.(*Store).processTick(0xc42016ee00, 0x15, 0xc42182bed0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3339 +0x157
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:225 +0x3bf
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f280)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39630 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:474 +0x495
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f18360)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39388 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).maybeAddToPurgatory.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:633 +0xf66
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc422392ee0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39716 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc40000014c)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f260)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39628 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).tryAddWriteCmd(0xc4202b0d80, 0x2b7505ecf390, 0xc421ed97a0, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1941 +0xdd4
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).addWriteCmd(0xc4202b0d80, 0x2b7505ecf390, 0xc421ed97a0, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1772 +0xa1
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).Send(0xc4202b0d80, 0x2b7505ecf390, 0xc421ed97a0, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1224 +0x26f
github.com/cockroachdb/cockroach/pkg/storage.(*Store).Send(0xc42016ee00, 0x2b7505ecf390, 0xc421ed9740, 0x7b, 0x623, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:2471 +0x86a
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).Send(0xc420ea3ce0, 0x2b7505ecf390, 0xc4220e6de0, 0x0, 0x0, 0x100000001, 0x1, 0x16, 0x0, 0xc42269dc80, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:187 +0x24b
github.com/cockroachdb/cockroach/pkg/kv.(*senderTransport).SendNext(0xc421cd8320, 0xc420a72cc0)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/transport.go:309 +0x2c3
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendToReplicas(0xc4218ce0f0, 0x2b7505ecf318, 0xc420cbae40, 0x1dcd6500, 0xc4220a0b70, 0xc4218ce128, 0x3, 0xc421582760, 0x1, 0x1, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:1142 +0x376
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendRPC(0xc4218ce0f0, 0x2b7505ecf318, 0xc420cbae40, 0x3, 0xc421582760, 0x1, 0x1, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:410 +0x418
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendSingleRange(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210686c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:479 +0x1ab
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).sendPartialBatch(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210686c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:931 +0x3b4
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).divideAndSendBatchToRanges(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210686c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:810 +0x568
github.com/cockroachdb/cockroach/pkg/kv.(*DistSender).Send(0xc4218ce0f0, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc4210683c0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/dist_sender.go:625 +0x382
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).Send(0xc42008f400, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc420b25908, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:418 +0x7f2
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).sendInternal(0xc420b258c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc420b25908, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:587 +0x16f
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).send(0xc420b258c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:711 +0x63d
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).(github.com/cockroachdb/cockroach/pkg/internal/client.send)-fm(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:313 +0x7d
github.com/cockroachdb/cockroach/pkg/internal/client.sendAndFill(0xc421036f58, 0xc421980000, 0x0, 0xc421ce66e0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:418 +0x1ac
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Run(0xc420b258c0, 0xc421980000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:313 +0xfe
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).adminSplitWithDescriptor.func1(0xc420b258c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:2449 +0x9fd
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).Txn.func1(0xc420b258c0, 0xc421d26d00, 0x4000000000000000, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:468 +0x47
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Exec(0xc420b258c0, 0xc421d20101, 0x0, 0xc421d26cf0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:520 +0x234
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).Txn(0xc421992af0, 0x2b7505ed2eb8, 0xc4205e6f00, 0xc4208797c0, 0xc4210377a8, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:469 +0x298
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).adminSplitWithDescriptor(0xc4202b0d80, 0x2b7505ed2eb8, 0xc4205e6f00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc421ce61c8, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:2450 +0xe13
github.com/cockroachdb/cockroach/pkg/storage.(*splitQueue).process(0xc422093220, 0x2b7505ed2eb8, 0xc4205e6f00, 0x7b, 0x620, 0xc4202b0d80, 0xc421cd5800, 0x1c, 0x20, 0xc420eca340, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/split_queue.go:111 +0x331
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processReplica(0xc4218ce2d0, 0x2b7505ecf390, 0xc421ed8150, 0xc4202b0d80, 0xc42160e300, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:575 +0x5fc
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1.2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:498 +0x119
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask(0xc421532090, 0xc421037e40, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:224 +0x10e
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:505 +0x45a
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421f182e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39703 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*StorePool).start.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store_pool.go:356 +0x51e
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc421eaf460)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39707 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000146)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232efe0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39708 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000147)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f000)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39711 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000145)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f080)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39706 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000148)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232efc0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
goroutine 39718 [semacquire]:
sync.runtime_notifyListWait(0xc421235d10, 0xc400000149)
/usr/local/go/src/runtime/sema.go:267 +0x12f
sync.(*Cond).Wait(0xc421235d00)
/usr/local/go/src/sync/cond.go:57 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420486b00, 0xc421532090)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:212 +0xfd
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x4b
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421532090, 0xc42232f2a0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x8b
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x74
ERROR: exit status 2
make: *** [stress] Error 1
1 runs completed, 1 failures, over 1m25s
Makefile:138: recipe for target 'stress' failed
``` | test | github com cockroachdb cockroach pkg storage teststorerangesystemsplits failed under stress sha parameters cockroach proposer evaluated kv false tags stress goflags race stress build found a failed test storage engine rocksdb go opening in memory rocksdb instance gossip gossip go initial resolvers gossip gossip go no resolvers found use join to specify a connected node base node id go nodeid set to gossip gossip go nodedescriptor set to node id address attrs locality storage store go failed initial metrics computation system config not yet available storage replica proposal go new range lease replica utc following replica utc storage split queue go splitting at keys storage replica command go initiating a split of this range at key table storage queue go unable to split at key table key range table table outside of bounds of range min max storage queue go purgatory of stores with an attribute matching likely not enough nodes in cluster storage split queue go splitting at keys storage replica command go initiating a split of this range at key table storage stores go range not contained in one range table table null but have min table storage intent resolver go failed to push during intent resolution failed to push storage replica command go replica adminsplitwithdescriptor id key local range rangedescriptor rw true pri iso serializable stat pending epo ts orig max wto false rop false storage queue go unable to split at key table key range table table outside of bounds of range table max storage queue go purgatory of stores with an attribute matching likely not enough nodes in cluster storage split queue go splitting at keys storage replica command go initiating a split of this range at key table storage queue go purgatory of stores with an attribute matching likely not enough nodes in cluster storage queue go unable to split at key table key range table table outside of bounds of range table max storage split queue go splitting at keys storage replica 
panic usr local go src runtime panic go github com cockroachdb cockroach pkg storage nodeliveness getliveness go src github com cockroachdb cockroach pkg storage node liveness go github com cockroachdb cockroach pkg storage nodeliveness islive go src github com cockroachdb cockroach pkg storage node liveness go github com cockroachdb cockroach pkg storage consistencyqueue shouldqueue go src github com cockroachdb cockroach pkg storage consistency queue go github com cockroachdb cockroach pkg storage basequeue maybeadd go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg storage consistencyqueue maybeadd github com cockroachdb cockroach pkg storage replicascanner waitandprocess go src github com cockroachdb cockroach pkg storage scanner go github com cockroachdb cockroach pkg storage replicascanner scanloop go src github com cockroachdb cockroach pkg storage scanner go github com cockroachdb cockroach pkg storage storereplicavisitor visit go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg storage replicascanner scanloop go src github com cockroachdb cockroach pkg storage scanner go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine testing t run usr local go src testing testing go testing runtests usr local go src testing testing go testing trunner usr local go src testing testing go testing runtests usr local go src testing testing go testing m run usr local go src testing testing go github com cockroachdb cockroach pkg storage test testmain go src github com cockroachdb cockroach pkg storage main test go main main github com cockroachdb cockroach pkg storage test testmain go goroutine runtime goexit usr local go src runtime asm s goroutine github 
com cockroachdb cockroach pkg util log loggingt flushdaemon go src github com cockroachdb cockroach pkg util log clog go created by github com cockroachdb cockroach pkg util log init go src github com cockroachdb cockroach pkg util log clog go goroutine github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg rpc newcontext go src github com cockroachdb cockroach pkg rpc context go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime semacquire usr local go src runtime sema go sync waitgroup wait usr local go src sync waitgroup go github com cockroachdb cockroach pkg storage raftscheduler wait go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage store processraft go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb 
cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine time sleep usr local go src runtime time go github com cockroachdb cockroach pkg util retryforduration go src github com cockroachdb cockroach pkg util testing go github com cockroachdb cockroach pkg util succeedssoondepth go src github com cockroachdb cockroach pkg util testing go github com cockroachdb cockroach pkg storage test teststorerangesystemsplits go src github com cockroachdb cockroach pkg storage client split test go github com cockroachdb cockroach pkg storage test teststorerangesystemsplits go src github com cockroachdb cockroach pkg storage client split test go 
testing trunner usr local go src testing testing go created by testing t run usr local go src testing testing go goroutine github com cockroachdb cockroach pkg storage engine cfunc dbapplybatchrepr github com cockroachdb cockroach pkg storage engine dbapplybatchrepr go src github com cockroachdb cockroach pkg storage engine rocksdb go github com cockroachdb cockroach pkg storage engine rocksdb applybatchrepr go src github com cockroachdb cockroach pkg storage engine rocksdb go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg kv txncoordsender printstatsloop go src github com cockroachdb cockroach pkg kv txn coord sender go github com cockroachdb cockroach pkg kv newtxncoordsender go src github com cockroachdb cockroach pkg kv txn coord sender go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage store startgossip go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync 
runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage idallocator start go src github com cockroachdb cockroach pkg storage id alloc go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler 
start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage store start go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage store rafttickloop go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com 
cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage store startcoalescedheartbeatsloop go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg kv txncoordsender heartbeatloop go src github com cockroachdb cockroach pkg kv txn coord sender go github com cockroachdb cockroach pkg kv txncoordsender updatestate go src github com cockroachdb cockroach pkg kv txn coord sender go github com cockroachdb cockroach pkg util stop stopper runasynctask go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runasynctask go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb 
cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage store startgossip go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage engine rocksdb open go src github com cockroachdb cockroach pkg storage engine rocksdb go created by github com cockroachdb cockroach pkg storage engine rocksdb open go src github com cockroachdb cockroach pkg storage engine rocksdb go goroutine sync mutex unlock usr local go src sync mutex go github com cockroachdb cockroach pkg util syncutil timedmutex unlock go src github com cockroachdb cockroach pkg util syncutil timedmutex go github com cockroachdb cockroach pkg storage replica tick go src github com cockroachdb cockroach pkg storage replica go github com cockroachdb cockroach pkg storage store processtick go src github com 
cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage basequeue maybeaddtopurgatory go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github 
com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg storage replica tryaddwritecmd go src github com cockroachdb cockroach pkg storage replica go github com cockroachdb cockroach pkg storage replica addwritecmd go src github com cockroachdb cockroach pkg storage replica go github com cockroachdb cockroach pkg storage replica send go src github com cockroachdb cockroach pkg storage replica go github com cockroachdb cockroach pkg storage store send go src github com cockroachdb cockroach pkg storage store go github com cockroachdb cockroach pkg storage stores send go src github com cockroachdb cockroach pkg storage stores go github com cockroachdb cockroach pkg kv sendertransport sendnext go src github com cockroachdb cockroach pkg kv transport go github com cockroachdb cockroach pkg kv distsender sendtoreplicas go src github com cockroachdb cockroach pkg kv dist sender go github com cockroachdb cockroach pkg kv distsender sendrpc go src github com cockroachdb cockroach pkg kv dist sender go github com cockroachdb cockroach pkg kv distsender sendsinglerange go src github com cockroachdb cockroach pkg kv dist sender go github com cockroachdb cockroach pkg kv distsender sendpartialbatch go src github com cockroachdb cockroach pkg kv dist sender go github com cockroachdb cockroach pkg kv distsender divideandsendbatchtoranges go src github com cockroachdb cockroach pkg kv dist sender go github com cockroachdb cockroach pkg kv distsender send go src github com cockroachdb cockroach pkg kv dist sender go github com cockroachdb cockroach pkg kv txncoordsender send go src github com cockroachdb cockroach pkg kv txn coord sender go github com cockroachdb cockroach pkg internal client txn sendinternal go src github com cockroachdb cockroach pkg internal client txn go github com cockroachdb cockroach pkg internal client txn send go src github com cockroachdb cockroach pkg internal client txn go github com cockroachdb cockroach pkg 
internal client txn github com cockroachdb cockroach pkg internal client send fm go src github com cockroachdb cockroach pkg internal client txn go github com cockroachdb cockroach pkg internal client sendandfill go src github com cockroachdb cockroach pkg internal client db go github com cockroachdb cockroach pkg internal client txn run go src github com cockroachdb cockroach pkg internal client txn go github com cockroachdb cockroach pkg storage replica adminsplitwithdescriptor go src github com cockroachdb cockroach pkg storage replica command go github com cockroachdb cockroach pkg internal client db txn go src github com cockroachdb cockroach pkg internal client db go github com cockroachdb cockroach pkg internal client txn exec go src github com cockroachdb cockroach pkg internal client txn go github com cockroachdb cockroach pkg internal client db txn go src github com cockroachdb cockroach pkg internal client db go github com cockroachdb cockroach pkg storage replica adminsplitwithdescriptor go src github com cockroachdb cockroach pkg storage replica command go github com cockroachdb cockroach pkg storage splitqueue process go src github com cockroachdb cockroach pkg storage split queue go github com cockroachdb cockroach pkg storage basequeue processreplica go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runtask go src github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg storage basequeue processloop go src github com cockroachdb cockroach pkg storage queue go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper 
go goroutine github com cockroachdb cockroach pkg storage storepool start go src github com cockroachdb cockroach pkg storage store pool go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage 
raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine sync runtime notifylistwait usr local go src runtime sema go sync cond wait usr local go src sync cond go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go error exit status make error runs completed failures over makefile recipe for target stress failed | 1 |
302,795 | 26,163,177,468 | IssuesEvent | 2022-12-31 22:49:54 | apache/beam | https://api.github.com/repos/apache/beam | closed | [Bug]: beam_PerformanceTests_InfluxDbIO_IT Flaky > 50 % Fail | io P1 bug failing test |
### What happened?
This performance test has always been markedly flaky, failing with a timeout error:
```
21:17:33 org.apache.beam.sdk.io.influxdb.InfluxDbIOIT > testWriteAndReadWithMultipleMetric FAILED
21:17:33 org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: connect timed out
21:17:33 at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:836)
21:17:33 at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:819)
21:17:33 at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:554)
21:17:33 at org.apache.beam.sdk.io.influxdb.InfluxDbIOIT.initTest(InfluxDbIOIT.java:99)
21:17:33
21:17:33 Caused by:
21:17:33 java.net.SocketTimeoutException: connect timed out
21:17:33 at java.net.PlainSocketImpl.socketConnect(Native Method)
21:17:33 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
21:17:33 at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
21:17:33 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
21:17:33 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
21:17:33 at java.net.Socket.connect(Socket.java:607)
21:17:33 at okhttp3.internal.platform.Platform.connectSocket(Platform.kt:119)
21:17:33 at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.kt:283)
21:17:33 at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:195)
21:17:33 at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:249)
21:17:33 at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:108)
21:17:33 at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:76)
21:17:33 at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:245)
21:17:33 at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:82)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at org.influxdb.impl.BasicAuthInterceptor.intercept(BasicAuthInterceptor.java:22)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at org.influxdb.impl.GzipRequestInterceptor.intercept(GzipRequestInterceptor.java:42)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.kt:152)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:197)
21:17:33 at okhttp3.internal.connection.RealCall.execute(RealCall.kt:148)
21:17:33 at retrofit2.OkHttpCall.execute(OkHttpCall.java:190)
21:17:33 at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:824)
21:17:33 ... 3 more
```
See https://ci-beam.apache.org/view/PerformanceTests/job/beam_PerformanceTests_InfluxDbIO_IT/
### Issue Priority
Priority: 2
### Issue Component
Component: test-failures | 1.0 | [Bug]: beam_PerformanceTests_InfluxDbIO_IT Flaky > 50 % Fail - ### What happened?
This performance test has always been markedly flaky with timeout error:
```
21:17:33 org.apache.beam.sdk.io.influxdb.InfluxDbIOIT > testWriteAndReadWithMultipleMetric FAILED
21:17:33 org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: connect timed out
21:17:33 at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:836)
21:17:33 at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:819)
21:17:33 at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:554)
21:17:33 at org.apache.beam.sdk.io.influxdb.InfluxDbIOIT.initTest(InfluxDbIOIT.java:99)
21:17:33
21:17:33 Caused by:
21:17:33 java.net.SocketTimeoutException: connect timed out
21:17:33 at java.net.PlainSocketImpl.socketConnect(Native Method)
21:17:33 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
21:17:33 at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
21:17:33 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
21:17:33 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
21:17:33 at java.net.Socket.connect(Socket.java:607)
21:17:33 at okhttp3.internal.platform.Platform.connectSocket(Platform.kt:119)
21:17:33 at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.kt:283)
21:17:33 at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:195)
21:17:33 at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:249)
21:17:33 at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:108)
21:17:33 at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:76)
21:17:33 at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:245)
21:17:33 at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:82)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at org.influxdb.impl.BasicAuthInterceptor.intercept(BasicAuthInterceptor.java:22)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at org.influxdb.impl.GzipRequestInterceptor.intercept(GzipRequestInterceptor.java:42)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.kt:152)
21:17:33 at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:100)
21:17:33 at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:197)
21:17:33 at okhttp3.internal.connection.RealCall.execute(RealCall.kt:148)
21:17:33 at retrofit2.OkHttpCall.execute(OkHttpCall.java:190)
21:17:33 at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:824)
21:17:33 ... 3 more
```
See https://ci-beam.apache.org/view/PerformanceTests/job/beam_PerformanceTests_InfluxDbIO_IT/
### Issue Priority
Priority: 2
### Issue Component
Component: test-failures | test | beam performancetests influxdbio it flaky fail what happened this performance test has always been markedly flaky with timeout error org apache beam sdk io influxdb influxdbioit testwriteandreadwithmultiplemetric failed org influxdb influxdbioexception java net sockettimeoutexception connect timed out at org influxdb impl influxdbimpl execute influxdbimpl java at org influxdb impl influxdbimpl executequery influxdbimpl java at org influxdb impl influxdbimpl query influxdbimpl java at org apache beam sdk io influxdb influxdbioit inittest influxdbioit java caused by java net sockettimeoutexception connect timed out at java net plainsocketimpl socketconnect native method at java net abstractplainsocketimpl doconnect abstractplainsocketimpl java at java net abstractplainsocketimpl connecttoaddress abstractplainsocketimpl java at java net abstractplainsocketimpl connect abstractplainsocketimpl java at java net sockssocketimpl connect sockssocketimpl java at java net socket connect socket java at internal platform platform connectsocket platform kt at internal connection realconnection connectsocket realconnection kt at internal connection realconnection connect realconnection kt at internal connection exchangefinder findconnection exchangefinder kt at internal connection exchangefinder findhealthyconnection exchangefinder kt at internal connection exchangefinder find exchangefinder kt at internal connection realcall initexchange okhttp realcall kt at internal connection connectinterceptor intercept connectinterceptor kt at internal http realinterceptorchain proceed realinterceptorchain kt at internal cache cacheinterceptor intercept cacheinterceptor kt at internal http realinterceptorchain proceed realinterceptorchain kt at internal http bridgeinterceptor intercept bridgeinterceptor kt at internal http realinterceptorchain proceed realinterceptorchain kt at internal http retryandfollowupinterceptor intercept retryandfollowupinterceptor 
kt at internal http realinterceptorchain proceed realinterceptorchain kt at org influxdb impl basicauthinterceptor intercept basicauthinterceptor java at internal http realinterceptorchain proceed realinterceptorchain kt at org influxdb impl gziprequestinterceptor intercept gziprequestinterceptor java at internal http realinterceptorchain proceed realinterceptorchain kt at logging httplogginginterceptor intercept httplogginginterceptor kt at internal http realinterceptorchain proceed realinterceptorchain kt at internal connection realcall getresponsewithinterceptorchain okhttp realcall kt at internal connection realcall execute realcall kt at okhttpcall execute okhttpcall java at org influxdb impl influxdbimpl execute influxdbimpl java more see issue priority priority issue component component test failures | 1 |
212,964 | 16,490,884,796 | IssuesEvent | 2021-05-25 03:31:47 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | test:mimxrt1010_evk: tests/kernel/sched/schedule_api - kernel_threads_sched_userspace cases meet out our space | area: Kernel area: Tests area: Userspace bug platform: NXP priority: low | **Describe the bug**
kernel_threads_sched_userspace cases meet out our space
**To Reproduce**
Steps to reproduce the behavior:
tests/kernel/sched/schedule_api
1. mkdir build; cd build
2. cmake -DBOARD=mimxrt1010_evk ..
3. make
4. See error
**Expected behavior**
in former build this cases are PASS,but now the code size is too large
**Impact**
unknown
**Logs and console output**
```
+ docker exec confident_sinoussi build_zephyr_elf.sh mimxrt1010_evk_kernel3_master tests/kernel/sched/schedule_api mimxrt1010_evk build_900f0a3 -DCONF_FILE=prj_dumb.conf -DCONFIG_TIMESLICING=n kernel.scheduler.dumb_no_timeslicing tests/kernel/sched/schedule_api -DCONF_FILE=prj_dumb.conf -DCONFIG_TIMESLICING=n
/build/src/workspace/mimxrt1010_evk_kernel3_master
Including boilerplate (Zephyr base): /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/cmake/app/boilerplate.cmake
-- Application: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api
-- Zephyr version: 2.5.99 (/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr)
-- Found Python3: /usr/bin/python3.8 (found suitable exact version "3.8.5") found components: Interpreter
-- Found west (found suitable version "0.9.0", minimum required is "0.7.1")
-- Board: mimxrt1010_evk
-- Cache files will be written to: /root/.cache/zephyr
-- Using toolchain: zephyr 0.12.2 (/opt/zephyr-sdk)
-- Found dtc: /opt/zephyr-sdk/sysroots/x86_64-pokysdk-linux/usr/bin/dtc (found suitable version "1.5.0", minimum required is "1.4.6")
-- Found BOARD.dts: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/boards/arm/mimxrt1010_evk/mimxrt1010_evk.dts
-- Generated zephyr.dts: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/zephyr.dts
-- Generated devicetree_unfixed.h: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/include/generated/devicetree_unfixed.h
-- Generated device_extern.h: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/include/generated/device_extern.h
Parsing /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/Kconfig
Loaded configuration '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/boards/arm/mimxrt1010_evk/mimxrt1010_evk_defconfig'
Merged configuration 'prj_dumb.conf'
Merged configuration '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/misc/generated/extra_kconfig_options.conf'
Configuration saved to '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/.config'
Kconfig header saved to '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/include/generated/autoconf.h'
-- The C compiler identification is GNU 10.2.0
-- The CXX compiler identification is GNU 10.2.0
-- The ASM compiler identification is GNU
-- Found assembler: /opt/zephyr-sdk/arm-zephyr-eabi/bin/arm-zephyr-eabi-gcc
-- Configuring done
-- Generating done
-- Build files have been written to: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3
Scanning dependencies of target parse_syscalls_target
[ 0%] Generating misc/generated/syscalls.json, misc/generated/struct_tags.json
[ 1%] Built target parse_syscalls_target
Scanning dependencies of target syscall_list_h_target
[ 2%] Generating include/generated/syscall_dispatch.c, include/generated/syscall_list.h
[ 2%] Built target syscall_list_h_target
Scanning dependencies of target driver_validation_h_target
[ 3%] Generating include/generated/driver-validation.h
[ 3%] Built target driver_validation_h_target
Scanning dependencies of target kobj_types_h_target
[ 4%] Generating include/generated/kobj-types-enum.h, include/generated/otype-to-str.h, include/generated/otype-to-size.h
[ 4%] Built target kobj_types_h_target
Scanning dependencies of target offsets
[ 4%] Building C object zephyr/CMakeFiles/offsets.dir/arch/arm/core/offsets/offsets.c.obj
[ 4%] Built target offsets
Scanning dependencies of target offsets_h
[ 5%] Generating include/generated/offsets.h
[ 5%] Built target offsets_h
Scanning dependencies of target zephyr_generated_headers
[ 5%] Built target zephyr_generated_headers
Scanning dependencies of target app
[ 5%] Building C object CMakeFiles/app.dir/src/main.c.obj
[ 6%] Building C object CMakeFiles/app.dir/src/test_priority_scheduling.c.obj
[ 7%] Building C object CMakeFiles/app.dir/src/test_sched_is_preempt_thread.c.obj
[ 7%] Building C object CMakeFiles/app.dir/src/test_sched_priority.c.obj
[ 8%] Building C object CMakeFiles/app.dir/src/test_sched_timeslice_and_lock.c.obj
[ 8%] Building C object CMakeFiles/app.dir/src/test_sched_timeslice_reset.c.obj
[ 9%] Building C object CMakeFiles/app.dir/src/test_slice_scheduling.c.obj
[ 9%] Building C object CMakeFiles/app.dir/src/user_api.c.obj
[ 10%] Linking C static library app/libapp.a
[ 10%] Built target app
Scanning dependencies of target kernel
[ 10%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/device.c.obj
[ 11%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/errno.c.obj
[ 11%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/fatal.c.obj
[ 12%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/idle.c.obj
[ 12%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/init.c.obj
[ 13%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/kheap.c.obj
[ 13%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mailbox.c.obj
[ 14%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mem_slab.c.obj
[ 15%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/msg_q.c.obj
[ 15%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mutex.c.obj
[ 16%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/pipes.c.obj
[ 16%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/queue.c.obj
[ 17%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/sched.c.obj
[ 17%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/sem.c.obj
[ 18%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/stack.c.obj
[ 19%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/system_work_q.c.obj
[ 19%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/thread.c.obj
[ 20%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/version.c.obj
[ 20%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/condvar.c.obj
[ 21%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/work.c.obj
[ 21%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/smp.c.obj
[ 22%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/banner.c.obj
[ 22%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/xip.c.obj
[ 23%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/timeout.c.obj
[ 25%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/timer.c.obj
[ 25%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mempool.c.obj
[ 26%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/futex.c.obj
[ 26%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mem_domain.c.obj
[ 27%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/cache_handlers.c.obj
[ 27%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/userspace_handler.c.obj
[ 28%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/userspace.c.obj
[ 28%] Linking C static library libkernel.a
[ 28%] Built target kernel
Scanning dependencies of target zephyr
[ 28%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/cbprintf.c.obj
[ 29%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/cbprintf_packaged.c.obj
[ 29%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc32c_sw.c.obj
[ 30%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc32_sw.c.obj
[ 31%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc16_sw.c.obj
[ 31%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc8_sw.c.obj
[ 32%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc7_sw.c.obj
[ 32%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/dec.c.obj
[ 33%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/fdtable.c.obj
[ 33%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/hex.c.obj
[ 34%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/notify.c.obj
[ 34%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/printk.c.obj
[ 35%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/onoff.c.obj
[ 36%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/rb.c.obj
[ 36%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/sem.c.obj
[ 37%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/thread_entry.c.obj
[ 37%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/timeutil.c.obj
[ 38%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/heap.c.obj
[ 38%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/heap-validate.c.obj
[ 39%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/cbprintf_complete.c.obj
[ 39%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/assert.c.obj
[ 40%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/mutex.c.obj
[ 41%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/user_work.c.obj
[ 41%] Building C object zephyr/CMakeFiles/zephyr.dir/misc/generated/configs.c.obj
[ 42%] Building C object zephyr/CMakeFiles/zephyr.dir/soc/arm/nxp_imx/rt/soc.c.obj
[ 42%] Building C object zephyr/CMakeFiles/zephyr.dir/subsys/logging/log_minimal.c.obj
[ 43%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/console/uart_console.c.obj
[ 43%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/clock_control/clock_control_mcux_ccm.c.obj
[ 44%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/timer/sys_clock_init.c.obj
[ 44%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/timer/cortex_m_systick.c.obj
[ 45%] Linking C static library libzephyr.a
[ 45%] Built target zephyr
Scanning dependencies of target isr_tables
[ 46%] Building C object zephyr/arch/common/CMakeFiles/isr_tables.dir/isr_tables.c.obj
[ 47%] Linking C static library libisr_tables.a
[ 47%] Built target isr_tables
Scanning dependencies of target arch__common
[ 48%] Building C object zephyr/arch/common/CMakeFiles/arch__common.dir/sw_isr_common.c.obj
[ 48%] Linking C static library libarch__common.a
[ 48%] Built target arch__common
Scanning dependencies of target arch__arm__core__aarch32
[ 50%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/swap.c.obj
[ 50%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/swap_helper.S.obj
[ 51%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/irq_manage.c.obj
[ 51%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/thread.c.obj
[ 52%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/cpu_idle.S.obj
[ 52%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/fatal.c.obj
[ 53%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/nmi.c.obj
[ 54%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/nmi_on_reset.S.obj
[ 54%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/prep_c.c.obj
[ 55%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/isr_wrapper.S.obj
[ 55%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/irq_offload.c.obj
[ 56%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/userspace.S.obj
[ 56%] Linking C static library libarch__arm__core__aarch32.a
[ 56%] Built target arch__arm__core__aarch32
Scanning dependencies of target arch__arm__core__aarch32__cortex_m
[ 57%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/vector_table.S.obj
[ 57%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/reset.S.obj
[ 58%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/fault_s.S.obj
[ 59%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/fault.c.obj
[ 59%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/exc_exit.S.obj
[ 60%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/scb.c.obj
[ 60%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/irq_init.c.obj
[ 61%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/thread_abort.c.obj
[ 61%] Linking C static library libarch__arm__core__aarch32__cortex_m.a
[ 61%] Built target arch__arm__core__aarch32__cortex_m
Scanning dependencies of target arch__arm__core__aarch32__cortex_m__mpu
[ 62%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/mpu/CMakeFiles/arch__arm__core__aarch32__cortex_m__mpu.dir/arm_core_mpu.c.obj
[ 63%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/mpu/CMakeFiles/arch__arm__core__aarch32__cortex_m__mpu.dir/arm_mpu.c.obj
[ 63%] Linking C static library libarch__arm__core__aarch32__cortex_m__mpu.a
[ 63%] Built target arch__arm__core__aarch32__cortex_m__mpu
Scanning dependencies of target lib__libc__minimal
[ 63%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/abort.c.obj
[ 64%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/atoi.c.obj
[ 64%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/strtol.c.obj
[ 65%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/strtoul.c.obj
[ 66%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/malloc.c.obj
[ 66%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/bsearch.c.obj
[ 67%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/exit.c.obj
[ 67%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/strncasecmp.c.obj
[ 68%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/strstr.c.obj
[ 68%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/string.c.obj
[ 69%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/strspn.c.obj
[ 69%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdout/stdout_console.c.obj
[ 70%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdout/sprintf.c.obj
[ 71%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdout/fprintf.c.obj
[ 71%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/time/gmtime.c.obj
[ 72%] Linking C static library liblib__libc__minimal.a
[ 72%] Built target lib__libc__minimal
Scanning dependencies of target lib__posix
[ 72%] Building C object zephyr/lib/posix/CMakeFiles/lib__posix.dir/pthread_common.c.obj
[ 73%] Building C object zephyr/lib/posix/CMakeFiles/lib__posix.dir/nanosleep.c.obj
[ 73%] Linking C static library liblib__posix.a
[ 73%] Built target lib__posix
Scanning dependencies of target soc__arm__common__cortex_m
[ 75%] Building C object zephyr/soc/arm/common/cortex_m/CMakeFiles/soc__arm__common__cortex_m.dir/arm_mpu_regions.c.obj
[ 75%] Linking C static library libsoc__arm__common__cortex_m.a
[ 75%] Built target soc__arm__common__cortex_m
Scanning dependencies of target boards__arm__mimxrt1010_evk
[ 75%] Building C object zephyr/boards/arm/mimxrt1010_evk/CMakeFiles/boards__arm__mimxrt1010_evk.dir/pinmux.c.obj
[ 76%] Linking C static library libboards__arm__mimxrt1010_evk.a
[ 76%] Built target boards__arm__mimxrt1010_evk
Scanning dependencies of target subsys__testsuite__ztest
[ 77%] Building C object zephyr/subsys/testsuite/ztest/CMakeFiles/subsys__testsuite__ztest.dir/src/ztest.c.obj
[ 78%] Building C object zephyr/subsys/testsuite/ztest/CMakeFiles/subsys__testsuite__ztest.dir/src/ztest_error_hook.c.obj
[ 78%] Linking C static library libsubsys__testsuite__ztest.a
[ 78%] Built target subsys__testsuite__ztest
Scanning dependencies of target drivers__gpio
[ 78%] Building C object zephyr/drivers/gpio/CMakeFiles/drivers__gpio.dir/gpio_mcux_igpio.c.obj
[ 79%] Building C object zephyr/drivers/gpio/CMakeFiles/drivers__gpio.dir/gpio_handlers.c.obj
[ 79%] Linking C static library libdrivers__gpio.a
[ 79%] Built target drivers__gpio
Scanning dependencies of target drivers__serial
[ 80%] Building C object zephyr/drivers/serial/CMakeFiles/drivers__serial.dir/uart_mcux_lpuart.c.obj
[ 80%] Building C object zephyr/drivers/serial/CMakeFiles/drivers__serial.dir/uart_handlers.c.obj
[ 81%] Linking C static library libdrivers__serial.a
[ 81%] Built target drivers__serial
Scanning dependencies of target ..__modules__hal__nxp
[ 81%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/devices/MIMXRT1011/fsl_clock.c.obj
[ 82%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/boards/evkmimxrt1010/evkmimxrt1010_flexspi_nor_config.c.obj
[ 82%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/drivers/imx/fsl_gpio.c.obj
[ 83%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/drivers/imx/fsl_cache.c.obj
[ 83%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/drivers/imx/fsl_lpuart.c.obj
[ 84%] Linking C static library lib..__modules__hal__nxp.a
[ 84%] Built target ..__modules__hal__nxp
Scanning dependencies of target app_smem_unaligned_linker
[ 85%] Generating app_smem_unaligned linker section
[ 85%] Built target app_smem_unaligned_linker
Scanning dependencies of target linker_app_smem_unaligned_script
[ 86%] Generating linker_app_smem_unaligned.cmd
[ 86%] Built target linker_app_smem_unaligned_script
Scanning dependencies of target app_smem_unaligned_prebuilt
[ 87%] Building C object zephyr/CMakeFiles/app_smem_unaligned_prebuilt.dir/misc/empty_file.c.obj
[ 87%] Linking C executable app_smem_unaligned_prebuilt.elf
Logical command for additional byproducts on target: app_smem_unaligned_prebuilt
[ 87%] Built target app_smem_unaligned_prebuilt
Scanning dependencies of target app_smem_aligned_linker
[ 87%] Generating app_smem_aligned linker section
[ 87%] Built target app_smem_aligned_linker
Scanning dependencies of target linker_zephyr_prebuilt_script_target
[ 88%] Generating linker_zephyr_prebuilt.cmd
[ 88%] Built target linker_zephyr_prebuilt_script_target
Scanning dependencies of target zephyr_prebuilt
[ 89%] Building C object zephyr/CMakeFiles/zephyr_prebuilt.dir/misc/empty_file.c.obj
[ 90%] Linking C executable zephyr_prebuilt.elf
Logical command for additional byproducts on target: zephyr_prebuilt
[ 90%] Built target zephyr_prebuilt
Scanning dependencies of target kobj_hash_list
[ 91%] Generating kobject_hash.gperf
[ 91%] Built target kobj_hash_list
Scanning dependencies of target kobj_hash_output_src_pre
[ 92%] Generating kobject_hash_preprocessed.c
[ 92%] Built target kobj_hash_output_src_pre
[ 93%] Generating kobject_hash.c
Scanning dependencies of target kobj_hash_output_lib
[ 93%] Building C object zephyr/CMakeFiles/kobj_hash_output_lib.dir/kobject_hash.c.obj
[ 94%] Linking C static library libkobj_hash_output_lib.a
[ 95%] Built target kobj_hash_output_lib
Scanning dependencies of target kobj_hash_output_obj_renamed
[ 95%] Generating kobject_hash_renamed.o
[ 95%] Built target kobj_hash_output_obj_renamed
Scanning dependencies of target linker_zephyr_final_script_target
[ 96%] Generating linker.cmd
[ 96%] Built target linker_zephyr_final_script_target
[ 96%] Generating dev_handles.c
[ 97%] Generating isr_tables.c, isrList.bin
Scanning dependencies of target zephyr_final
[ 98%] Building C object zephyr/CMakeFiles/zephyr_final.dir/misc/empty_file.c.obj
[ 98%] Building C object zephyr/CMakeFiles/zephyr_final.dir/isr_tables.c.obj
[100%] Building C object zephyr/CMakeFiles/zephyr_final.dir/dev_handles.c.obj
[100%] Linking C executable zephyr.elf
/opt/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.2.0/../../../../arm-zephyr-eabi/bin/ld: zephyr.elf section `priv_stacks_noinit' will not fit in region `SRAM'
/opt/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.2.0/../../../../arm-zephyr-eabi/bin/ld: region `SRAM' overflowed by 6880 bytes
Memory region Used Size Region Size %age Used
OCRAM: 0 GB 64 KB 0.00%
FLASH: 94812 B 16 MB 0.57%
ITCM: 0 GB 32 KB 0.00%
SRAM: 39648 B 32 KB 121.00%
IDT_LIST: 0 GB 2 KB 0.00%
collect2: error: ld returned 1 exit status
make[2]: *** [zephyr/CMakeFiles/zephyr_final.dir/build.make:146: zephyr/zephyr.elf] Error 1
make[1]: *** [CMakeFiles/Makefile2:2562: zephyr/CMakeFiles/zephyr_final.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
script returned exit code 2
```
**Environment (please complete the following information):**
- OS: (e.g. Linux, )
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used: zephyr-v2.5.0-1463-gc59cf6008b60
| 1.0 | test:mimxrt1010_evk: tests/kernel/sched/schedule_api - kernel_threads_sched_userspace cases meet out our space - **Describe the bug**
kernel_threads_sched_userspace cases meet out our space
**To Reproduce**
Steps to reproduce the behavior:
tests/kernel/sched/schedule_api
1. mkdir build; cd build
2. cmake -DBOARD=mimxrt1010_evk ..
3. make
4. See error
**Expected behavior**
in former build this cases are PASS,but now the code size is too large
**Impact**
unknown
**Logs and console output**
```
+ docker exec confident_sinoussi build_zephyr_elf.sh mimxrt1010_evk_kernel3_master tests/kernel/sched/schedule_api mimxrt1010_evk build_900f0a3 -DCONF_FILE=prj_dumb.conf -DCONFIG_TIMESLICING=n kernel.scheduler.dumb_no_timeslicing tests/kernel/sched/schedule_api -DCONF_FILE=prj_dumb.conf -DCONFIG_TIMESLICING=n
/build/src/workspace/mimxrt1010_evk_kernel3_master
Including boilerplate (Zephyr base): /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/cmake/app/boilerplate.cmake
-- Application: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api
-- Zephyr version: 2.5.99 (/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr)
-- Found Python3: /usr/bin/python3.8 (found suitable exact version "3.8.5") found components: Interpreter
-- Found west (found suitable version "0.9.0", minimum required is "0.7.1")
-- Board: mimxrt1010_evk
-- Cache files will be written to: /root/.cache/zephyr
-- Using toolchain: zephyr 0.12.2 (/opt/zephyr-sdk)
-- Found dtc: /opt/zephyr-sdk/sysroots/x86_64-pokysdk-linux/usr/bin/dtc (found suitable version "1.5.0", minimum required is "1.4.6")
-- Found BOARD.dts: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/boards/arm/mimxrt1010_evk/mimxrt1010_evk.dts
-- Generated zephyr.dts: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/zephyr.dts
-- Generated devicetree_unfixed.h: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/include/generated/devicetree_unfixed.h
-- Generated device_extern.h: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/include/generated/device_extern.h
Parsing /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/Kconfig
Loaded configuration '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/boards/arm/mimxrt1010_evk/mimxrt1010_evk_defconfig'
Merged configuration 'prj_dumb.conf'
Merged configuration '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/misc/generated/extra_kconfig_options.conf'
Configuration saved to '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/.config'
Kconfig header saved to '/build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3/zephyr/include/generated/autoconf.h'
-- The C compiler identification is GNU 10.2.0
-- The CXX compiler identification is GNU 10.2.0
-- The ASM compiler identification is GNU
-- Found assembler: /opt/zephyr-sdk/arm-zephyr-eabi/bin/arm-zephyr-eabi-gcc
-- Configuring done
-- Generating done
-- Build files have been written to: /build/src/workspace/mimxrt1010_evk_kernel3_master/zephyr/tests/kernel/sched/schedule_api/build_900f0a3
Scanning dependencies of target parse_syscalls_target
[ 0%] Generating misc/generated/syscalls.json, misc/generated/struct_tags.json
[ 1%] Built target parse_syscalls_target
Scanning dependencies of target syscall_list_h_target
[ 2%] Generating include/generated/syscall_dispatch.c, include/generated/syscall_list.h
[ 2%] Built target syscall_list_h_target
Scanning dependencies of target driver_validation_h_target
[ 3%] Generating include/generated/driver-validation.h
[ 3%] Built target driver_validation_h_target
Scanning dependencies of target kobj_types_h_target
[ 4%] Generating include/generated/kobj-types-enum.h, include/generated/otype-to-str.h, include/generated/otype-to-size.h
[ 4%] Built target kobj_types_h_target
Scanning dependencies of target offsets
[ 4%] Building C object zephyr/CMakeFiles/offsets.dir/arch/arm/core/offsets/offsets.c.obj
[ 4%] Built target offsets
Scanning dependencies of target offsets_h
[ 5%] Generating include/generated/offsets.h
[ 5%] Built target offsets_h
Scanning dependencies of target zephyr_generated_headers
[ 5%] Built target zephyr_generated_headers
Scanning dependencies of target app
[ 5%] Building C object CMakeFiles/app.dir/src/main.c.obj
[ 6%] Building C object CMakeFiles/app.dir/src/test_priority_scheduling.c.obj
[ 7%] Building C object CMakeFiles/app.dir/src/test_sched_is_preempt_thread.c.obj
[ 7%] Building C object CMakeFiles/app.dir/src/test_sched_priority.c.obj
[ 8%] Building C object CMakeFiles/app.dir/src/test_sched_timeslice_and_lock.c.obj
[ 8%] Building C object CMakeFiles/app.dir/src/test_sched_timeslice_reset.c.obj
[ 9%] Building C object CMakeFiles/app.dir/src/test_slice_scheduling.c.obj
[ 9%] Building C object CMakeFiles/app.dir/src/user_api.c.obj
[ 10%] Linking C static library app/libapp.a
[ 10%] Built target app
Scanning dependencies of target kernel
[ 10%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/device.c.obj
[ 11%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/errno.c.obj
[ 11%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/fatal.c.obj
[ 12%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/idle.c.obj
[ 12%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/init.c.obj
[ 13%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/kheap.c.obj
[ 13%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mailbox.c.obj
[ 14%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mem_slab.c.obj
[ 15%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/msg_q.c.obj
[ 15%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mutex.c.obj
[ 16%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/pipes.c.obj
[ 16%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/queue.c.obj
[ 17%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/sched.c.obj
[ 17%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/sem.c.obj
[ 18%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/stack.c.obj
[ 19%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/system_work_q.c.obj
[ 19%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/thread.c.obj
[ 20%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/version.c.obj
[ 20%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/condvar.c.obj
[ 21%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/work.c.obj
[ 21%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/smp.c.obj
[ 22%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/banner.c.obj
[ 22%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/xip.c.obj
[ 23%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/timeout.c.obj
[ 25%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/timer.c.obj
[ 25%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mempool.c.obj
[ 26%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/futex.c.obj
[ 26%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/mem_domain.c.obj
[ 27%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/cache_handlers.c.obj
[ 27%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/userspace_handler.c.obj
[ 28%] Building C object zephyr/kernel/CMakeFiles/kernel.dir/userspace.c.obj
[ 28%] Linking C static library libkernel.a
[ 28%] Built target kernel
Scanning dependencies of target zephyr
[ 28%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/cbprintf.c.obj
[ 29%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/cbprintf_packaged.c.obj
[ 29%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc32c_sw.c.obj
[ 30%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc32_sw.c.obj
[ 31%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc16_sw.c.obj
[ 31%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc8_sw.c.obj
[ 32%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/crc7_sw.c.obj
[ 32%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/dec.c.obj
[ 33%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/fdtable.c.obj
[ 33%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/hex.c.obj
[ 34%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/notify.c.obj
[ 34%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/printk.c.obj
[ 35%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/onoff.c.obj
[ 36%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/rb.c.obj
[ 36%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/sem.c.obj
[ 37%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/thread_entry.c.obj
[ 37%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/timeutil.c.obj
[ 38%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/heap.c.obj
[ 38%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/heap-validate.c.obj
[ 39%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/cbprintf_complete.c.obj
[ 39%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/assert.c.obj
[ 40%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/mutex.c.obj
[ 41%] Building C object zephyr/CMakeFiles/zephyr.dir/lib/os/user_work.c.obj
[ 41%] Building C object zephyr/CMakeFiles/zephyr.dir/misc/generated/configs.c.obj
[ 42%] Building C object zephyr/CMakeFiles/zephyr.dir/soc/arm/nxp_imx/rt/soc.c.obj
[ 42%] Building C object zephyr/CMakeFiles/zephyr.dir/subsys/logging/log_minimal.c.obj
[ 43%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/console/uart_console.c.obj
[ 43%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/clock_control/clock_control_mcux_ccm.c.obj
[ 44%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/timer/sys_clock_init.c.obj
[ 44%] Building C object zephyr/CMakeFiles/zephyr.dir/drivers/timer/cortex_m_systick.c.obj
[ 45%] Linking C static library libzephyr.a
[ 45%] Built target zephyr
Scanning dependencies of target isr_tables
[ 46%] Building C object zephyr/arch/common/CMakeFiles/isr_tables.dir/isr_tables.c.obj
[ 47%] Linking C static library libisr_tables.a
[ 47%] Built target isr_tables
Scanning dependencies of target arch__common
[ 48%] Building C object zephyr/arch/common/CMakeFiles/arch__common.dir/sw_isr_common.c.obj
[ 48%] Linking C static library libarch__common.a
[ 48%] Built target arch__common
Scanning dependencies of target arch__arm__core__aarch32
[ 50%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/swap.c.obj
[ 50%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/swap_helper.S.obj
[ 51%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/irq_manage.c.obj
[ 51%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/thread.c.obj
[ 52%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/cpu_idle.S.obj
[ 52%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/fatal.c.obj
[ 53%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/nmi.c.obj
[ 54%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/nmi_on_reset.S.obj
[ 54%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/prep_c.c.obj
[ 55%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/isr_wrapper.S.obj
[ 55%] Building C object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/irq_offload.c.obj
[ 56%] Building ASM object zephyr/arch/arch/arm/core/aarch32/CMakeFiles/arch__arm__core__aarch32.dir/userspace.S.obj
[ 56%] Linking C static library libarch__arm__core__aarch32.a
[ 56%] Built target arch__arm__core__aarch32
Scanning dependencies of target arch__arm__core__aarch32__cortex_m
[ 57%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/vector_table.S.obj
[ 57%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/reset.S.obj
[ 58%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/fault_s.S.obj
[ 59%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/fault.c.obj
[ 59%] Building ASM object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/exc_exit.S.obj
[ 60%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/scb.c.obj
[ 60%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/irq_init.c.obj
[ 61%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/CMakeFiles/arch__arm__core__aarch32__cortex_m.dir/thread_abort.c.obj
[ 61%] Linking C static library libarch__arm__core__aarch32__cortex_m.a
[ 61%] Built target arch__arm__core__aarch32__cortex_m
Scanning dependencies of target arch__arm__core__aarch32__cortex_m__mpu
[ 62%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/mpu/CMakeFiles/arch__arm__core__aarch32__cortex_m__mpu.dir/arm_core_mpu.c.obj
[ 63%] Building C object zephyr/arch/arch/arm/core/aarch32/cortex_m/mpu/CMakeFiles/arch__arm__core__aarch32__cortex_m__mpu.dir/arm_mpu.c.obj
[ 63%] Linking C static library libarch__arm__core__aarch32__cortex_m__mpu.a
[ 63%] Built target arch__arm__core__aarch32__cortex_m__mpu
Scanning dependencies of target lib__libc__minimal
[ 63%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/abort.c.obj
[ 64%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/atoi.c.obj
[ 64%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/strtol.c.obj
[ 65%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/strtoul.c.obj
[ 66%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/malloc.c.obj
[ 66%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/bsearch.c.obj
[ 67%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdlib/exit.c.obj
[ 67%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/strncasecmp.c.obj
[ 68%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/strstr.c.obj
[ 68%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/string.c.obj
[ 69%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/string/strspn.c.obj
[ 69%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdout/stdout_console.c.obj
[ 70%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdout/sprintf.c.obj
[ 71%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/stdout/fprintf.c.obj
[ 71%] Building C object zephyr/lib/libc/minimal/CMakeFiles/lib__libc__minimal.dir/source/time/gmtime.c.obj
[ 72%] Linking C static library liblib__libc__minimal.a
[ 72%] Built target lib__libc__minimal
Scanning dependencies of target lib__posix
[ 72%] Building C object zephyr/lib/posix/CMakeFiles/lib__posix.dir/pthread_common.c.obj
[ 73%] Building C object zephyr/lib/posix/CMakeFiles/lib__posix.dir/nanosleep.c.obj
[ 73%] Linking C static library liblib__posix.a
[ 73%] Built target lib__posix
Scanning dependencies of target soc__arm__common__cortex_m
[ 75%] Building C object zephyr/soc/arm/common/cortex_m/CMakeFiles/soc__arm__common__cortex_m.dir/arm_mpu_regions.c.obj
[ 75%] Linking C static library libsoc__arm__common__cortex_m.a
[ 75%] Built target soc__arm__common__cortex_m
Scanning dependencies of target boards__arm__mimxrt1010_evk
[ 75%] Building C object zephyr/boards/arm/mimxrt1010_evk/CMakeFiles/boards__arm__mimxrt1010_evk.dir/pinmux.c.obj
[ 76%] Linking C static library libboards__arm__mimxrt1010_evk.a
[ 76%] Built target boards__arm__mimxrt1010_evk
Scanning dependencies of target subsys__testsuite__ztest
[ 77%] Building C object zephyr/subsys/testsuite/ztest/CMakeFiles/subsys__testsuite__ztest.dir/src/ztest.c.obj
[ 78%] Building C object zephyr/subsys/testsuite/ztest/CMakeFiles/subsys__testsuite__ztest.dir/src/ztest_error_hook.c.obj
[ 78%] Linking C static library libsubsys__testsuite__ztest.a
[ 78%] Built target subsys__testsuite__ztest
Scanning dependencies of target drivers__gpio
[ 78%] Building C object zephyr/drivers/gpio/CMakeFiles/drivers__gpio.dir/gpio_mcux_igpio.c.obj
[ 79%] Building C object zephyr/drivers/gpio/CMakeFiles/drivers__gpio.dir/gpio_handlers.c.obj
[ 79%] Linking C static library libdrivers__gpio.a
[ 79%] Built target drivers__gpio
Scanning dependencies of target drivers__serial
[ 80%] Building C object zephyr/drivers/serial/CMakeFiles/drivers__serial.dir/uart_mcux_lpuart.c.obj
[ 80%] Building C object zephyr/drivers/serial/CMakeFiles/drivers__serial.dir/uart_handlers.c.obj
[ 81%] Linking C static library libdrivers__serial.a
[ 81%] Built target drivers__serial
Scanning dependencies of target ..__modules__hal__nxp
[ 81%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/devices/MIMXRT1011/fsl_clock.c.obj
[ 82%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/boards/evkmimxrt1010/evkmimxrt1010_flexspi_nor_config.c.obj
[ 82%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/drivers/imx/fsl_gpio.c.obj
[ 83%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/drivers/imx/fsl_cache.c.obj
[ 83%] Building C object modules/nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/drivers/imx/fsl_lpuart.c.obj
[ 84%] Linking C static library lib..__modules__hal__nxp.a
[ 84%] Built target ..__modules__hal__nxp
Scanning dependencies of target app_smem_unaligned_linker
[ 85%] Generating app_smem_unaligned linker section
[ 85%] Built target app_smem_unaligned_linker
Scanning dependencies of target linker_app_smem_unaligned_script
[ 86%] Generating linker_app_smem_unaligned.cmd
[ 86%] Built target linker_app_smem_unaligned_script
Scanning dependencies of target app_smem_unaligned_prebuilt
[ 87%] Building C object zephyr/CMakeFiles/app_smem_unaligned_prebuilt.dir/misc/empty_file.c.obj
[ 87%] Linking C executable app_smem_unaligned_prebuilt.elf
Logical command for additional byproducts on target: app_smem_unaligned_prebuilt
[ 87%] Built target app_smem_unaligned_prebuilt
Scanning dependencies of target app_smem_aligned_linker
[ 87%] Generating app_smem_aligned linker section
[ 87%] Built target app_smem_aligned_linker
Scanning dependencies of target linker_zephyr_prebuilt_script_target
[ 88%] Generating linker_zephyr_prebuilt.cmd
[ 88%] Built target linker_zephyr_prebuilt_script_target
Scanning dependencies of target zephyr_prebuilt
[ 89%] Building C object zephyr/CMakeFiles/zephyr_prebuilt.dir/misc/empty_file.c.obj
[ 90%] Linking C executable zephyr_prebuilt.elf
Logical command for additional byproducts on target: zephyr_prebuilt
[ 90%] Built target zephyr_prebuilt
Scanning dependencies of target kobj_hash_list
[ 91%] Generating kobject_hash.gperf
[ 91%] Built target kobj_hash_list
Scanning dependencies of target kobj_hash_output_src_pre
[ 92%] Generating kobject_hash_preprocessed.c
[ 92%] Built target kobj_hash_output_src_pre
[ 93%] Generating kobject_hash.c
Scanning dependencies of target kobj_hash_output_lib
[ 93%] Building C object zephyr/CMakeFiles/kobj_hash_output_lib.dir/kobject_hash.c.obj
[ 94%] Linking C static library libkobj_hash_output_lib.a
[ 95%] Built target kobj_hash_output_lib
Scanning dependencies of target kobj_hash_output_obj_renamed
[ 95%] Generating kobject_hash_renamed.o
[ 95%] Built target kobj_hash_output_obj_renamed
Scanning dependencies of target linker_zephyr_final_script_target
[ 96%] Generating linker.cmd
[ 96%] Built target linker_zephyr_final_script_target
[ 96%] Generating dev_handles.c
[ 97%] Generating isr_tables.c, isrList.bin
Scanning dependencies of target zephyr_final
[ 98%] Building C object zephyr/CMakeFiles/zephyr_final.dir/misc/empty_file.c.obj
[ 98%] Building C object zephyr/CMakeFiles/zephyr_final.dir/isr_tables.c.obj
[100%] Building C object zephyr/CMakeFiles/zephyr_final.dir/dev_handles.c.obj
[100%] Linking C executable zephyr.elf
/opt/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.2.0/../../../../arm-zephyr-eabi/bin/ld: zephyr.elf section `priv_stacks_noinit' will not fit in region `SRAM'
/opt/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.2.0/../../../../arm-zephyr-eabi/bin/ld: region `SRAM' overflowed by 6880 bytes
Memory region Used Size Region Size %age Used
OCRAM: 0 GB 64 KB 0.00%
FLASH: 94812 B 16 MB 0.57%
ITCM: 0 GB 32 KB 0.00%
SRAM: 39648 B 32 KB 121.00%
IDT_LIST: 0 GB 2 KB 0.00%
collect2: error: ld returned 1 exit status
make[2]: *** [zephyr/CMakeFiles/zephyr_final.dir/build.make:146: zephyr/zephyr.elf] Error 1
make[1]: *** [CMakeFiles/Makefile2:2562: zephyr/CMakeFiles/zephyr_final.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
script returned exit code 2
```
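For reference, the linker's overflow report is internally consistent with the memory-region table above. A quick arithmetic check (numbers taken directly from the log):

```python
# Verify the linker's SRAM overflow report against the memory-region table.
SRAM_SIZE = 32 * 1024   # "Region Size" for SRAM: 32 KB
SRAM_USED = 39648       # "Used Size" for SRAM, in bytes, per the linker map

overflow = SRAM_USED - SRAM_SIZE
usage_pct = SRAM_USED / SRAM_SIZE * 100

print(overflow)              # matches "region `SRAM' overflowed by 6880 bytes"
print(round(usage_pct, 2))   # matches the reported 121.00% usage
```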
**Environment (please complete the following information):**
- OS: (e.g. Linux, )
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used: zephyr-v2.5.0-1463-gc59cf6008b60
eabi bin ld zephyr elf section priv stacks noinit will not fit in region sram opt zephyr sdk arm zephyr eabi bin lib gcc arm zephyr eabi arm zephyr eabi bin ld region sram overflowed by bytes memory region used size region size age used ocram gb kb flash b mb itcm gb kb sram b kb idt list gb kb error ld returned exit status make error make error make error script returned exit code environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr | 1 |
57,000 | 7,022,944,657 | IssuesEvent | 2017-12-22 13:11:23 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Drop Cap is on a different number of lines on front end vs. backend | Customization Needs Design Feedback | <!--
BEFORE POSTING YOUR ISSUE:
- These comments won't show up when you submit the issue.
- Try to add as much detail as possible. Be specific!
- Please add the version of Gutenberg you are using in the description
- If you're requesting a new feature, explain why you'd like it to be added.
- Search this repository for the issue and whether it has been fixed or reported already.
- Ensure you are using the latest code before logging bugs.
- Disable all plugins to ensure it's not a plugin conflict issue.
-->
Gutenberg version: 0.3
## Issue Overview
<!-- This is a brief overview of the issue. --->
Inside Gutenberg, drop caps cover two lines. On the front end, they cover three.
## Steps to Reproduce (for bugs)
<!-- Provide a link to a live example, or an unambiguous set of steps to -->
<!-- reproduce this bug. Include code to reproduce, if relevant -->
1. Apply a drop cap
2. View the post
<!-- Provide what browser you are using and any other specfics to your setup -->
## Expected Behavior
<!-- If you're describing a bug, tell us what should happen -->
<!-- If you're suggesting a change/improvement, tell us how it should work -->
I expect the same number of lines to be covered by the drop cap on both the front end and the back end
## Current Behavior
<!-- If describing a bug, tell us what happens instead of the expected behavior -->
<!-- If suggesting a change/improvement, explain the difference from current behavior -->
Inside Gutenberg:
<img width="109" alt="screen shot 2017-07-05 at 1 41 30 pm" src="https://user-images.githubusercontent.com/622599/27879632-e4bcedbc-6187-11e7-950a-766392467fd0.png">
Inside my theme:
<img width="170" alt="screen shot 2017-07-05 at 1 41 44 pm" src="https://user-images.githubusercontent.com/622599/27879631-e4b4cd44-6187-11e7-9d79-bd0c7014f376.png">
## Possible Solution
<!-- Not obligatory, but suggest a fix/reason for the bug, -->
<!-- or ideas how to implement the addition or change -->
## Related Issues and/or PRs
<!-- List related issues or PRs against other branches: -->
## Todos
- [ ] Tests
- [ ] Documentation
| 1.0 | Drop Cap is on a different number of lines on front end vs. backend - <!--
BEFORE POSTING YOUR ISSUE:
- These comments won't show up when you submit the issue.
- Try to add as much detail as possible. Be specific!
- Please add the version of Gutenberg you are using in the description
- If you're requesting a new feature, explain why you'd like it to be added.
- Search this repository for the issue and whether it has been fixed or reported already.
- Ensure you are using the latest code before logging bugs.
- Disable all plugins to ensure it's not a plugin conflict issue.
-->
Gutenberg version: 0.3
## Issue Overview
<!-- This is a brief overview of the issue. --->
Inside Gutenberg, drop caps cover two lines. On the front end, they cover three.
## Steps to Reproduce (for bugs)
<!-- Provide a link to a live example, or an unambiguous set of steps to -->
<!-- reproduce this bug. Include code to reproduce, if relevant -->
1. Apply a drop cap
2. View the post
<!-- Provide what browser you are using and any other specfics to your setup -->
## Expected Behavior
<!-- If you're describing a bug, tell us what should happen -->
<!-- If you're suggesting a change/improvement, tell us how it should work -->
I expect the same number of lines to be covered by the drop cap on both the front end and the back end
## Current Behavior
<!-- If describing a bug, tell us what happens instead of the expected behavior -->
<!-- If suggesting a change/improvement, explain the difference from current behavior -->
Inside Gutenberg:
<img width="109" alt="screen shot 2017-07-05 at 1 41 30 pm" src="https://user-images.githubusercontent.com/622599/27879632-e4bcedbc-6187-11e7-950a-766392467fd0.png">
Inside my theme:
<img width="170" alt="screen shot 2017-07-05 at 1 41 44 pm" src="https://user-images.githubusercontent.com/622599/27879631-e4b4cd44-6187-11e7-9d79-bd0c7014f376.png">
## Possible Solution
<!-- Not obligatory, but suggest a fix/reason for the bug, -->
<!-- or ideas how to implement the addition or change -->
## Related Issues and/or PRs
<!-- List related issues or PRs against other branches: -->
## Todos
- [ ] Tests
- [ ] Documentation
| non_test | drop cap is on a different number of lines on front end vs backend before posting your issue these comments won t show up when you submit the issue try to add as much detail as possible be specific please add the version of gutenberg you are using in the description if you re requesting a new feature explain why you d like it to be added search this repository for the issue and whether it has been fixed or reported already ensure you are using the latest code before logging bugs disable all plugins to ensure it s not a plugin conflict issue gutenberg version issue overview inside gutenberg drop caps cover two lines on the front end they cover three steps to reproduce for bugs apply a drop cap view the post expected behavior i expect the same number of lines to be covered by the drop cap on both the front end and the back end current behavior inside gutenberg img width alt screen shot at pm src inside my theme img width alt screen shot at pm src possible solution related issues and or prs todos tests documentation | 0
71,707 | 7,255,429,985 | IssuesEvent | 2018-02-16 14:52:14 | hazelcast/hazelcast-jet | https://api.github.com/repos/hazelcast/hazelcast-jet | closed | [TEST-FAILURE] JetInstanceTest.when_hazelcastClientCreated_then_doesNotConnectToJetCluster | core test-failure | ```
Error Message
Expected test to throw (an instance of java.lang.IllegalStateException and exception with message a string containing "Unable to connect")
Stacktrace
java.lang.AssertionError: Expected test to throw (an instance of java.lang.IllegalStateException and exception with message a string containing "Unable to connect")
```
https://hazelcast-l337.ci.cloudbees.com/view/Jet/job/Jet-sonar/com.hazelcast.jet$hazelcast-jet-core/1030/testReport/junit/com.hazelcast.jet/JetInstanceTest/when_hazelcastClientCreated_then_doesNotConnectToJetCluster/ | 1.0 | [TEST-FAILURE] JetInstanceTest.when_hazelcastClientCreated_then_doesNotConnectToJetCluster - ```
Error Message
Expected test to throw (an instance of java.lang.IllegalStateException and exception with message a string containing "Unable to connect")
Stacktrace
java.lang.AssertionError: Expected test to throw (an instance of java.lang.IllegalStateException and exception with message a string containing "Unable to connect")
```
https://hazelcast-l337.ci.cloudbees.com/view/Jet/job/Jet-sonar/com.hazelcast.jet$hazelcast-jet-core/1030/testReport/junit/com.hazelcast.jet/JetInstanceTest/when_hazelcastClientCreated_then_doesNotConnectToJetCluster/ | test | jetinstancetest when hazelcastclientcreated then doesnotconnecttojetcluster error message expected test to throw an instance of java lang illegalstateexception and exception with message a string containing unable to connect stacktrace java lang assertionerror expected test to throw an instance of java lang illegalstateexception and exception with message a string containing unable to connect | 1 |
21,915 | 3,926,148,980 | IssuesEvent | 2016-04-22 21:58:24 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | TestHarness should have an environment option | C: TestHarness P: normal T: task | ### Description of the enhancement or error report
We need to be able to run tests based on the existence or possibly value of environment variables. This is necessary for running tests if optional libraries are present.
### Rationale for the enhancement or information for reproducing the error
Grizzly wants to be able to conditionally run tests if RAVEN is present
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
New feature in the TestHarness | 1.0 | TestHarness should have an environment option - ### Description of the enhancement or error report
We need to be able to run tests based on the existence or possibly value of environment variables. This is necessary for running tests if optional libraries are present.
### Rationale for the enhancement or information for reproducing the error
Grizzly wants to be able to conditionally run tests if RAVEN is present
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
New feature in the TestHarness | test | testharness should have an environment option description of the enhancement or error report we need to be able to run tests based on the existence or possibly value of environment variables this is necessary for running tests if optional libraries are present rationale for the enhancement or information for reproducing the error grizzly wants to be able to conditionally run tests if raven is present identified impact i e internal object changes limited interface changes public api change or a list of specific applications impacted new feature in the testharness | 1 |
35,491 | 14,709,996,055 | IssuesEvent | 2021-01-05 03:54:59 | microsoft/BotFramework-Composer | https://api.github.com/repos/microsoft/BotFramework-Composer | closed | Cannot build composer from source | Bot Services Support Type: Bug customer-replied-to customer-reported | <!-- Complete the necessary portions of this template and delete the rest. -->
I tried this : https://docs.microsoft.com/en-us/composer/install-composer#build-composer-from-source
But got problems when run **yarn** at terminal in ...\BotFramework-Composer\Composer

It didn't run anymore...
Then , I tried other way with running **yarn install** && **yarn build**

<!-- Give a clear and concise description of what the bug is. -->
## Version
<!-- What version of the Composer are you using? Paste the build SHA found on the about page (`/about`). -->
## Browser
<!-- What browser are you using? -->
- [ ] Electron distribution
- [ ] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
## OS
<!-- What operating system are you using? -->
- [ ] macOS
- [ ] Windows
- [ ] Ubuntu
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Expected behavior
<!-- Give a clear and concise description of what you expected to happen. -->
## Screenshots
<!-- If applicable, add screenshots/gif/video to help explain your problem. -->
## Additional context
<!-- Add any other context about the problem here. -->
| 1.0 | Cannot build composer from source - <!-- Complete the necessary portions of this template and delete the rest. -->
I tried this : https://docs.microsoft.com/en-us/composer/install-composer#build-composer-from-source
But got problems when run **yarn** at terminal in ...\BotFramework-Composer\Composer

It didn't run anymore...
Then , I tried other way with running **yarn install** && **yarn build**

<!-- Give a clear and concise description of what the bug is. -->
## Version
<!-- What version of the Composer are you using? Paste the build SHA found on the about page (`/about`). -->
## Browser
<!-- What browser are you using? -->
- [ ] Electron distribution
- [ ] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
## OS
<!-- What operating system are you using? -->
- [ ] macOS
- [ ] Windows
- [ ] Ubuntu
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Expected behavior
<!-- Give a clear and concise description of what you expected to happen. -->
## Screenshots
<!-- If applicable, add screenshots/gif/video to help explain your problem. -->
## Additional context
<!-- Add any other context about the problem here. -->
| non_test | cannot build composer from source i tried this but got problems when run yarn at terminal in botframework composer composer it didn t run anymore then i tried other way with running yarn install yarn build version browser electron distribution chrome safari firefox edge os macos windows ubuntu to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior screenshots additional context | 0 |
127,574 | 27,078,312,914 | IssuesEvent | 2023-02-14 12:14:47 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Feature]: Snowflake plugin: use connection pool | Enhancement Backend snowflake BE Coders Pod Test Plan Approved Integrations Pod Connection pool | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Summary
Currently the Snowflake plugin does not use connection pool, but relies on single JDBC based connections. We can use a connection pool like in Postgres plugin so that our plugin connection can become thread safe and we can also re-use connections instead of creating a new one each time.
### Why should this be worked on?
* To make sure that our plugin is thread safe.
* To make sure that we don't end up creating too many connections to the DB. | 1.0 | [Feature]: Snowflake plugin: use connection pool - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Summary
Currently the Snowflake plugin does not use connection pool, but relies on single JDBC based connections. We can use a connection pool like in Postgres plugin so that our plugin connection can become thread safe and we can also re-use connections instead of creating a new one each time.
### Why should this be worked on?
* To make sure that our plugin is thread safe.
* To make sure that we don't end up creating too many connections to the DB. | non_test | snowflake plugin use connection pool is there an existing issue for this i have searched the existing issues summary currently the snowflake plugin does not use connection pool but relies on single jdbc based connections we can use a connection pool like in postgres plugin so that our plugin connection can become thread safe and we can also re use connections instead of creating a new one each time why should this be worked on to make sure that our plugin is thread safe to make sure that we don t end up creating too many connections to the db | 0
278,073 | 24,122,061,633 | IssuesEvent | 2022-09-20 19:40:49 | mozilla-mobile/mobile-test-eng | https://api.github.com/repos/mozilla-mobile/mobile-test-eng | closed | [META] Create a means for auto-filing github issues when flaky tests are detected | intermittent-test iOS android multiplier infra:ui-test proj:data-github META | NEW
1. When a flaky test report is triggered...
1. Pull all open issues against the given project with the search string "<test name>" and ["intermittent" | "flaky" ]. (we'll need to establish a consistent nomenclature for this to work).
2. If issue doesn't already exist, file new issues
(if issue does exist, see UPDATE)
UPDATE
1. update an existing issues with new logs
CLOSE OUT
Create a separate filter to close out old intermittents after 30 days
NOTES:
- this will require a GITHUB_TOKEN w/ issue write permissions against the given project
- we should first verify this against a dummy github project
- need to consider writing out to the DB as a cache to track flakiness, etc. (is this a newly introduced intermittent or not) | 2.0 | [META] Create a means for auto-filing github issues when flaky tests are detected - NEW
1. When a flaky test report is triggered...
1. Pull all open issues against the given project with the search string "<test name>" and ["intermittent" | "flaky" ]. (we'll need to establish a consistent nomenclature for this to work).
2. If issue doesn't already exist, file new issues
(if issue does exist, see UPDATE)
UPDATE
1. update an existing issues with new logs
CLOSE OUT
Create a separate filter to close out old intermittents after 30 days
NOTES:
- this will require a GITHUB_TOKEN w/ issue write permissions against the given project
- we should first verify this against a dummy github project
- need to consider writing out to the DB as a cache to track flakiness, etc. (is this a newly introduced intermittent or not) | test | create a means for auto filing github issues when flaky tests are detected new when a flaky test report is triggered pull all open issues against the given project with the search string and we ll need to establish a consistent nomenclature for this to work if issue doesn t already exist file new issues if issue does exist see update update update an existing issues with new logs close out create a separate filter to close out old intermittents after days notes this will require a github token w issue write permissions against the given project we should first verify this against a dummy github project need to consider writing out to the db as a cache to track flakiness etc is this a newly introduced intermittent or not | 1 |
176,744 | 28,147,616,128 | IssuesEvent | 2023-04-02 17:11:14 | dohyeons/dohyeonsu-portfolio | https://api.github.com/repos/dohyeons/dohyeonsu-portfolio | closed | [Design] Implement a prototype of the Contact & Channel section | design | ## Description
Create the Contact & Channel section, and the components that make it up, as a prototype.
## Checklist
> Checklist of items that need to be implemented
- [x] Create the section box
- [x] Contact box
- [x] Channel box
- [x] ⭐️Apply responsive layout⭐️
| 1.0 | [Design] Implement a prototype of the Contact & Channel section - ## Description
Create the Contact & Channel section, and the components that make it up, as a prototype.
## Checklist
> Checklist of items that need to be implemented
- [x] Create the section box
- [x] Contact box
- [x] Channel box
- [x] ⭐️Apply responsive layout⭐️
| non_test | implement a prototype of the contact channel section description create the contact channel section and the components that make it up as a prototype checklist checklist of items that need to be implemented create the section box contact box channel box ⭐️apply responsive layout⭐️ | 0
358,982 | 25,211,521,386 | IssuesEvent | 2022-11-14 04:37:44 | SigNoz/signoz-website | https://api.github.com/repos/SigNoz/signoz-website | closed | Linking internal pages | documentation | Will this type of linking work?
<img width="841" alt="Screenshot 2022-10-28 at 4 52 39 PM" src="https://user-images.githubusercontent.com/83692067/198575728-69341b32-1786-43ab-92e7-00b31b269ef5.png">
Currently, I am using the entire link like shown below. The below opens in a new page. I think for docs section opening in the same tab makes more sense just like our current behaviour.
<img width="942" alt="Screenshot 2022-10-28 at 4 54 12 PM" src="https://user-images.githubusercontent.com/83692067/198575916-e64924b6-ab07-47b2-94f0-8770e717e90c.png">
| 1.0 | Linking internal pages - Will this type of linking work?
<img width="841" alt="Screenshot 2022-10-28 at 4 52 39 PM" src="https://user-images.githubusercontent.com/83692067/198575728-69341b32-1786-43ab-92e7-00b31b269ef5.png">
Currently, I am using the entire link like shown below. The below opens in a new page. I think for docs section opening in the same tab makes more sense just like our current behaviour.
<img width="942" alt="Screenshot 2022-10-28 at 4 54 12 PM" src="https://user-images.githubusercontent.com/83692067/198575916-e64924b6-ab07-47b2-94f0-8770e717e90c.png">
| non_test | linking internal pages will this type of linking work img width alt screenshot at pm src currently i am using the entire link like shown below the below opens in a new page i think for docs section opening in the same tab makes more sense just like our current behaviour img width alt screenshot at pm src | 0 |
341,588 | 30,593,956,000 | IssuesEvent | 2023-07-21 19:48:52 | ethereum/solidity | https://api.github.com/repos/ethereum/solidity | closed | Add Foundry support to external tests scripts | testing :hammer: selected for development medium effort high impact should have | The [prb-math](https://github.com/paulrberg/prb-math/releases) migrated from Hardhat to Foundry in version `3.0.0` and requires that we update our external test scripts to support Foundry builds.
It may also be a good opportunity to migrate those scripts from shell script to python and better organize the external tests. | 1.0 | Add Foundry support to external tests scripts - The [prb-math](https://github.com/paulrberg/prb-math/releases) migrated from Hardhat to Foundry in version `3.0.0` and requires that we update our external test scripts to support Foundry builds.
It may also be a good opportunity to migrate those scripts from shell script to python and better organize the external tests. | test | add foundry support to external tests scripts the migrated from hardhat to foundry in version and requires that we update our external test scripts to support foundry builds it may also be a good opportunity to migrate those scripts from shell script to python and better organize the external tests | 1 |
228,025 | 18,151,780,231 | IssuesEvent | 2021-09-26 11:45:14 | InnoTutor/Backend | https://api.github.com/repos/InnoTutor/Backend | opened | Write tests for a User profile controller | Tests | Write unit tests for a controller which is responsible for handling requests on the user profile page. | 1.0 | Write tests for a User profile controller - Write unit tests for a controller which is responsible for handling requests on the user profile page. | test | write tests for a user profile controller write unit tests for a controller which is responsible for handling requests on the user profile page | 1 |
343,761 | 24,782,273,048 | IssuesEvent | 2022-10-24 06:42:43 | gboehl/DIMESampler.jl | https://api.github.com/repos/gboehl/DIMESampler.jl | closed | More documentation | documentation | Especially of the function arguments. Should be automatically deployable... | 1.0 | More documentation - Especially of the function arguments. Should be automatically deployable... | non_test | more documentation especially of the function arguments should be automatically deployable | 0 |
41,713 | 21,914,465,890 | IssuesEvent | 2022-05-21 15:41:40 | hajimehoshi/ebiten | https://api.github.com/repos/hajimehoshi/ebiten | closed | FPSModeVsyncOffMaximum capped FPS | bug os:windows performance | Following code on Windows prints:
* `fps: 2117.62` on v2.3.1
* `fps: 240.04` on v2.4.0-alpha.3
```go
package main
import (
"fmt"
"image/color"
"os"
"github.com/hajimehoshi/ebiten/v2"
)
type Game struct{}
func (g *Game) Update() error {
return nil
}
func (g *Game) Draw(screen *ebiten.Image) {
screen.Fill(color.Black)
fmt.Printf("fps: %.2f\n", ebiten.CurrentFPS())
}
func (g *Game) Layout(w, h int) (int, int) {
return w, h
}
func main() {
os.Setenv("EBITEN_GRAPHICS_LIBRARY", "opengl")
ebiten.SetFPSMode(ebiten.FPSModeVsyncOffMaximum)
ebiten.RunGame(&Game{})
}
``` | True | FPSModeVsyncOffMaximum capped FPS - Following code on Windows prints:
* `fps: 2117.62` on v2.3.1
* `fps: 240.04` on v2.4.0-alpha.3
```go
package main
import (
"fmt"
"image/color"
"os"
"github.com/hajimehoshi/ebiten/v2"
)
type Game struct{}
func (g *Game) Update() error {
return nil
}
func (g *Game) Draw(screen *ebiten.Image) {
screen.Fill(color.Black)
fmt.Printf("fps: %.2f\n", ebiten.CurrentFPS())
}
func (g *Game) Layout(w, h int) (int, int) {
return w, h
}
func main() {
os.Setenv("EBITEN_GRAPHICS_LIBRARY", "opengl")
ebiten.SetFPSMode(ebiten.FPSModeVsyncOffMaximum)
ebiten.RunGame(&Game{})
}
``` | non_test | fpsmodevsyncoffmaximum capped fps following code on windows prints fps on fps on alpha go package main import fmt image color os github com hajimehoshi ebiten type game struct func g game update error return nil func g game draw screen ebiten image screen fill color black fmt printf fps n ebiten currentfps func g game layout w h int int int return w h func main os setenv ebiten graphics library opengl ebiten setfpsmode ebiten fpsmodevsyncoffmaximum ebiten rungame game | 0 |
153,519 | 12,149,582,124 | IssuesEvent | 2020-04-24 16:24:07 | naixinzhang/naixinzhang.github.io | https://api.github.com/repos/naixinzhang/naixinzhang.github.io | opened | AB_test_prep | Naixin's blog | 2020/02/24/abtest/ab-test-interveiw-prep/ Gitalk | https://naixinzhang.github.io/2020/02/24/abtest/ab-test-interveiw-prep/
What is A/B testing?
A/B testing (sometimes called split testing) is basically statistical hypothesis testing applied t | 2.0 | AB_test_prep | Naixin's blog - https://naixinzhang.github.io/2020/02/24/abtest/ab-test-interveiw-prep/
What is A/B testing?
A/B testing (sometimes called split testing) is basically statistical hypothesis testing applied t | test | ab test prep naixin s blog what is a b testing a b testing sometimes called split testing is basically statistical hypothesis testing applied t | 1 |
55,309 | 11,422,590,650 | IssuesEvent | 2020-02-03 14:28:20 | eclipse/codewind | https://api.github.com/repos/eclipse/codewind | closed | VSCode reconnection issues for remote connections | area/docs area/vscode-ide kind/bug priority/hot | To recreate
- Create a remote connection (can be docker desktop) from VScode
- Swap your network connection from IBM to IBM Intranet
- Wait a few seconds
- Swap network back to IBM
- connection is now lost in VScode and we fail to reauthenticate
To fix on Mac, bring up the keychain and delete the most recent codewind refresh token, then stop and start the remote connection to force a login. | 1.0 | VSCode reconnection issues for remote connections - To recreate
- Create a remote connection (can be docker desktop) from VScode
- Swap your network connection from IBM to IBM Intranet
- Wait a few seconds
- Swap network back to IBM
- connection is now lost in VScode and we fail to reauthenticate
To fix on Mac, bring up the keychain and delete the most recent codewind refresh token, then stop and start the remote connection to force a login. | non_test | vscode reconnection issues for remote connections to recreate create a remote connection can be docker desktop from vscode swap your network connection from ibm to ibm intranet wait a few seconds swap network back to ibm connection is now lost in vscode and we fail to reauthenticate to fix on mac bring up the keychain and delete the most recent codewind refresh token then stop and start the remote connection to force a login | 0 |
50,304 | 6,077,559,298 | IssuesEvent | 2017-06-16 04:38:58 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Test: System.ComponentModel.Tests.ArrayConverterTests/ConvertTo_WithContext failed with "System.ArgumentNullException" | area-System.ComponentModel os-windows-uwp test-run-uwp-coreclr | Opened on behalf of @Jiayili1
The test `System.ComponentModel.Tests.ArrayConverterTests/ConvertTo_WithContext` has failed.
System.ArgumentNullException : Value cannot be null.\r
Parameter name: format
Stack Trace:
at System.String.FormatHelper(IFormatProvider provider, String format, ParamsArray args)
at System.String.Format(String format, Object arg0)
at System.SR.Format(String resourceFormat, Object p1)
at System.ComponentModel.ArrayConverter.ConvertTo(ITypeDescriptorContext context, CultureInfo culture, Object value, Type destinationType)
at System.ComponentModel.Tests.ConverterTestBase.ConvertTo_WithContext(Object[,] data, TypeConverter converter)
at System.ComponentModel.Tests.ArrayConverterTests.ConvertTo_WithContext()
Build : Master - 20170609.02 (UWP F5 Tests)
Failing configurations:
- Windows.10.Amd64-x64
- Debug
- Release
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20170609.02/workItem/System.ComponentModel.TypeConverter.Tests/analysis/xunit/System.ComponentModel.Tests.ArrayConverterTests~2FConvertTo_WithContext | 1.0 | Test: System.ComponentModel.Tests.ArrayConverterTests/ConvertTo_WithContext failed with "System.ArgumentNullException" - Opened on behalf of @Jiayili1
The test `System.ComponentModel.Tests.ArrayConverterTests/ConvertTo_WithContext` has failed.
System.ArgumentNullException : Value cannot be null.\r
Parameter name: format
Stack Trace:
at System.String.FormatHelper(IFormatProvider provider, String format, ParamsArray args)
at System.String.Format(String format, Object arg0)
at System.SR.Format(String resourceFormat, Object p1)
at System.ComponentModel.ArrayConverter.ConvertTo(ITypeDescriptorContext context, CultureInfo culture, Object value, Type destinationType)
at System.ComponentModel.Tests.ConverterTestBase.ConvertTo_WithContext(Object[,] data, TypeConverter converter)
at System.ComponentModel.Tests.ArrayConverterTests.ConvertTo_WithContext()
Build : Master - 20170609.02 (UWP F5 Tests)
Failing configurations:
- Windows.10.Amd64-x64
- Debug
- Release
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20170609.02/workItem/System.ComponentModel.TypeConverter.Tests/analysis/xunit/System.ComponentModel.Tests.ArrayConverterTests~2FConvertTo_WithContext | test | test system componentmodel tests arrayconvertertests convertto withcontext failed with system argumentnullexception opened on behalf of the test system componentmodel tests arrayconvertertests convertto withcontext has failed system argumentnullexception value cannot be null r parameter name format stack trace at system string formathelper iformatprovider provider string format paramsarray args at system string format string format object at system sr format string resourceformat object at system componentmodel arrayconverter convertto itypedescriptorcontext context cultureinfo culture object value type destinationtype at system componentmodel tests convertertestbase convertto withcontext object data typeconverter converter at system componentmodel tests arrayconvertertests convertto withcontext build master uwp tests failing configurations windows debug release detail | 1 |
75,765 | 7,482,301,326 | IssuesEvent | 2018-04-05 00:29:47 | tempesta-tech/tempesta | https://api.github.com/repos/tempesta-tech/tempesta | closed | Waiting for port free in tests | test | When we run multiple tests, sometimes the next test starts before the ports used in the previous test are released. So we get failures running nginx or other servers.
Reproducing: run_tests.py
`error: [Errno 98] Address already in use`
| 1.0 | Waiting for port free in tests - When we run multiple tests, sometimes the next test starts before the ports used in the previous test are released. So we get failures running nginx or other servers.
Reproducing: run_tests.py
`error: [Errno 98] Address already in use`
| test | waiting for port free in tests when we run multiple tests sometimes we run next test before ports used in previous test releases so we have fails of running nginx or other server reproducing run tests py error address already in use | 1 |
126,226 | 10,413,011,822 | IssuesEvent | 2019-09-13 17:26:19 | futest-test/fu | https://api.github.com/repos/futest-test/fu | opened | Vulnerability - Content Security Policy (CSP) not implemented | FoundByAcunetix360 FuTest |
**URL:** http://php.testsparker.com/
**Name:** Content Security Policy (CSP) not implemented
**Severity:** Information
You can see vulnerability details from the link below.
http://ec2-18-194-173-226.eu-central-1.compute.amazonaws.com/vulnerabilities/detail/04c43bd5a3de42cc82d8aac602a9d07e | 1.0 | Vulnerability - Content Security Policy (CSP) not implemented -
**URL:** http://php.testsparker.com/
**Name:** Content Security Policy (CSP) not implemented
**Severity:** Information
You can see vulnerability details from the link below.
http://ec2-18-194-173-226.eu-central-1.compute.amazonaws.com/vulnerabilities/detail/04c43bd5a3de42cc82d8aac602a9d07e | test | vulnerability content security policy csp not implemented url name content security policy csp not implemented severity information you can see vulnerability details from the link below | 1 |
174,071 | 13,455,540,325 | IssuesEvent | 2020-09-09 06:24:48 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] FollowIndexIT.testFollowIndex and AutoFollowIT.testAutoFollowPatterns failures | :Distributed/CCR >test-failure Team:Distributed | **Build scan**:
* https://gradle-enterprise.elastic.co/s/dehqfesde3j36
* https://gradle-enterprise.elastic.co/s/egmuekpyuk446
* https://gradle-enterprise.elastic.co/s/yefmwetim6hqs
**Repro line**:
> ./gradlew ':x-pack:plugin:ccr:qa:multi-cluster:follow-cluster' --tests "org.elasticsearch.xpack.ccr.FollowIndexIT.testFollowIndex" -Dtests.seed=A377D90C2B0B3BD2 -Dtests.security.manager=true -Dtests.locale=ar-YE -Dtests.timezone=Pacific/Efate -Druntime.java=11 -Dtests.fips.enabled=true
**Reproduces locally?**:
Yes
Note the linked build scans are all for FIPS jobs. But the same failure can be reproduced locally without FIPS enabled, i.e. removing `-Dtests.fips.enabled=true` from the reproduction line.
There is a previous similar issue (#50279). But since it's been a while, I don't know how related it is. So I am opening a new issue.
**Applicable branches**:
`7.x`
**Failure history**:
This is a new failure and just [started](https://build-stats.elastic.co/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-60d,mode:quick,to:now))&_a=(columns:!(_source),index:b646ed00-7efc-11e8-bf69-63c8ef516157,interval:auto,query:(language:lucene,query:'%22testFollowIndex%22%20OR%20%22testAutoFollowPatterns%22%20AND%20%22java.lang.AssertionError%22'),sort:!(process.time-start,desc))) to happen.
**Failure excerpt**:
```
java.lang.AssertionError:
Expected: a value equal to or greater than <1>
     but: <0> was less than <1>
at __randomizedtesting.SeedInfo.seed([A377D90C2B0B3BD2:401B163E12B30258]:0)
...
at org.elasticsearch.xpack.ccr.ESCCRRestTestCase.verifyCcrMonitoring(ESCCRRestTestCase.java:175)
at org.elasticsearch.xpack.ccr.FollowIndexIT.lambda$testFollowIndex$2(FollowIndexIT.java:79)
...
at org.elasticsearch.xpack.ccr.FollowIndexIT.testFollowIndex(FollowIndexIT.java:79)
```
| 1.0 | [CI] FollowIndexIT.testFollowIndex and AutoFollowIT.testAutoFollowPatterns failures - **Build scan**:
* https://gradle-enterprise.elastic.co/s/dehqfesde3j36
* https://gradle-enterprise.elastic.co/s/egmuekpyuk446
* https://gradle-enterprise.elastic.co/s/yefmwetim6hqs
**Repro line**:
> ./gradlew ':x-pack:plugin:ccr:qa:multi-cluster:follow-cluster' --tests "org.elasticsearch.xpack.ccr.FollowIndexIT.testFollowIndex" -Dtests.seed=A377D90C2B0B3BD2 -Dtests.security.manager=true -Dtests.locale=ar-YE -Dtests.timezone=Pacific/Efate -Druntime.java=11 -Dtests.fips.enabled=true
**Reproduces locally?**:
Yes
Note the linked build scans are all for FIPS jobs. But the same failure can be reproduced locally without FIPS enabled, i.e. removing `-Dtests.fips.enabled=true` from the reproduction line.
There is a previous similar issue (#50279). But since it's been a while, I don't know how related it is. So I am opening a new issue.
**Applicable branches**:
`7.x`
**Failure history**:
This is a new failure and just [started](https://build-stats.elastic.co/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-60d,mode:quick,to:now))&_a=(columns:!(_source),index:b646ed00-7efc-11e8-bf69-63c8ef516157,interval:auto,query:(language:lucene,query:'%22testFollowIndex%22%20OR%20%22testAutoFollowPatterns%22%20AND%20%22java.lang.AssertionError%22'),sort:!(process.time-start,desc))) to happen.
**Failure excerpt**:
```
java.lang.AssertionError:
Expected: a value equal to or greater than <1>
     but: <0> was less than <1>
at __randomizedtesting.SeedInfo.seed([A377D90C2B0B3BD2:401B163E12B30258]:0)
...
at org.elasticsearch.xpack.ccr.ESCCRRestTestCase.verifyCcrMonitoring(ESCCRRestTestCase.java:175)
at org.elasticsearch.xpack.ccr.FollowIndexIT.lambda$testFollowIndex$2(FollowIndexIT.java:79)
...
at org.elasticsearch.xpack.ccr.FollowIndexIT.testFollowIndex(FollowIndexIT.java:79)
```
| test | followindexit testfollowindex and autofollowit testautofollowpatterns failures build scan repro line gradlew x pack plugin ccr qa multi cluster follow cluster tests org elasticsearch xpack ccr followindexit testfollowindex dtests seed dtests security manager true dtests locale ar ye dtests timezone pacific efate druntime java dtests fips enabled true reproduces locally yes note the linked build scans are all for fips jobs but the same failure can be reproduced locally without fips enabled i e removing dtests fips enabled true from the reproduction line there is a previous similar issue but since it s been a while i don t know how releated it is so i am openning a new issue applicable branches x failure history this is a new failure and just to happen failure excerpt java lang assertionerror ย ย expected a value equal to or greater than ย ย but was less than atย randomizedtesting seedinfo seed ย ย ย โขโขโข ย ย atย org elasticsearch xpack ccr esccrresttestcase verifyccrmonitoring esccrresttestcase java ย ย ย atย org elasticsearch xpack ccr followindexit lambda testfollowindex followindexit java ย ย ย โขโขโข ย ย atย org elasticsearch xpack ccr followindexit testfollowindex followindexit java | 1 |
45,423 | 5,714,524,608 | IssuesEvent | 2017-04-19 10:37:55 | almighty/almighty-core | https://api.github.com/repos/almighty/almighty-core | closed | Enhancement request - Enable modification of logging level, especially when running in dev mode | enhancement test URGENT | The automated performance tests require that the tests be able to generate multiple tokens, in order to both simulate multiple users, and to enable the tests to run for extended periods of time, up to our planned multi-day "soak" tests. Accordingly, we are running the performance test server in dev mode.
The problem that we are seeing is that dev mode also includes a high level of logging. We want to be able to vary/reduce the level of logging for the server while running in dev mode, so that we can be certain that the logging does not affect the server throughput.
Can a means to alter the logging level be implemented - or, if this is already supported by the server, can this be documented? Thx!
| 1.0 | Enhancement request - Enable modification of logging level, especially when running in dev mode - The automated performance tests require that the tests be able to generate multiple tokens, in order to both simulate multiple users, and to enable the tests to run for extended periods of time, up to our planned multi-day "soak" tests. Accordingly, we are running the performance test server in dev mode.
The problem that we are seeing is that dev mode also includes a high level of logging. We want to be able to vary/reduce the level of logging for the server while running in dev mode, so that we can be certain that the logging does not affect the server throughput.
Can a means to alter the logging level be implemented - or, if this is already supported by the server, can this be documented? Thx!
| test | enhancement request enable modification of logging level especially when running in dev mode the automated performance tests require that the tests be able to generate multiple tokens in order to both simulate multiple users and to enable the tests to run for extended periods of time up to our planned multi day soak tests accordingly we are running the performance test server in dev mode the problem that we are seeing is that dev mode also includes a high level of logging we want to be able to vary reduce the level of logging for the server while running in dev mode so that we can be certain that the logging does not affect the server throughput can a means to alter the logging level be implemented or if this is already supported by the server can this be documented thx | 1 |
232,740 | 17,793,336,454 | IssuesEvent | 2021-08-31 18:54:03 | anarolon/person-and-ghost | https://api.github.com/repos/anarolon/person-and-ghost | reopened | Doc | Complete GDD | documentation | - [x] Title Page
- [x] Game Overview
- [x] Game Concept
- [x] Genre
- [x] Target Audience
- [x] Game Flow Summary
- [x] Look and Feel
- [ ] Gameplay and Mechanics
- [x] Gameplay
- [ ] Mechanics
- [ ] Physics
- [x] Movement in the game
- [ ] Objects
- [x] Actions
- [x] Combat
- [x] Economy
- [ ] Screen Flow
- [x] Game Options
- [x] Replaying and Saving
- [x] Cheats and Easter Eggs
- [ ] Story, Setting and Characters
- [x] Story and Narrative
- [x] Game World
- [x] Characters
- [ ] Levels
- [ ] Levels
- [ ] Training Level
- [x] Interface
- [x] Visual System (HUD, Menus, Camera Model)
- [x] Control System
- [x] Audio, Music, Sound Effect
- [x] Help System
- [x] AI
- [x] Opponent and Enemy AI
- [x] Non-Combat and NPCs
- [x] Support AI
- [x] Technical
- [x] Target Hardware
- [x] Development hardware and software, Game Engine
- [x] Network requirements
- [x] Game Art
### Velocity = 19 | 1.0 | Doc | Complete GDD - - [x] Title Page
- [x] Game Overview
- [x] Game Concept
- [x] Genre
- [x] Target Audience
- [x] Game Flow Summary
- [x] Look and Feel
- [ ] Gameplay and Mechanics
- [x] Gameplay
- [ ] Mechanics
- [ ] Physics
- [x] Movement in the game
- [ ] Objects
- [x] Actions
- [x] Combat
- [x] Economy
- [ ] Screen Flow
- [x] Game Options
- [x] Replaying and Saving
- [x] Cheats and Easter Eggs
- [ ] Story, Setting and Characters
- [x] Story and Narrative
- [x] Game World
- [x] Characters
- [ ] Levels
- [ ] Levels
- [ ] Training Level
- [x] Interface
- [x] Visual System (HUD, Menus, Camera Model)
- [x] Control System
- [x] Audio, Music, Sound Effect
- [x] Help System
- [x] AI
- [x] Opponent and Enemy AI
- [x] Non-Combat and NPCs
- [x] Support AI
- [x] Technical
- [x] Target Hardware
- [x] Development hardware and software, Game Engine
- [x] Network requirements
- [x] Game Art
### Velocity = 19 | non_test | doc complete gdd title page game overview game concept genre target audience game flow summary look and feel gameplay and mechanics gameplay mechanics physics movement in the game objects actions combat economy screen flow game options replaying and saving cheats and easter eggs story setting and characters story and narrative game world characters levels levels training level interface visual system hud menus camera model control system audio music sound effect help system ai opponent and enemy ai non combat and npcs support ai technical target hardware development hardware and software game engine network requirements game art velocity | 0 |
157,288 | 5,996,972,873 | IssuesEvent | 2017-06-03 19:13:39 | ReikaKalseki/Reika_Mods_Issues | https://api.github.com/repos/ReikaKalseki/Reika_Mods_Issues | closed | ReactorCraft handbook Turbine description does not mention lubricant | Low Priority ReactorCraft | The entry for the Turbine in the handbook mentions nothing about requiring lubricant (although it probably should go without saying)
Given the spirit of the mod, I can imagine this was done on purpose. I simply wanted to ensure this was done on purpose. | 1.0 | ReactorCraft handbook Turbine description does not mention lubricant - The entry for the Turbine in the handbook mentions nothing about requiring lubricant (although it probably should go without saying)
Given the spirit of the mod, I can imagine this was done on purpose. I simply wanted to ensure this was done on purpose. | non_test | reactorcraft handbook turbine description does not mention lubricant the entry for the turbine in the handbook mentions nothing about requiring lubricant although it probably should go without saying given the spirit of the mod i can imagine this was done on purpose i simply wanted to ensure this was done on purpose | 0 |
90,147 | 8,229,476,463 | IssuesEvent | 2018-09-07 09:30:32 | Microsoft/AzureStorageExplorer | https://api.github.com/repos/Microsoft/AzureStorageExplorer | opened | Fail to complete the activity of undeleting blobs | :gear: blobs :gear: delete policies testing | **Storage Explorer Version**: 1.5.0
**Platform/OS Version**: Windows 10/ Linux Ubuntu 16.04/ MacOS High Sierra
**Architecture**: ia32
**Build Number**:20180905.3
**Commit**: 66ac14b9
**Regression From**: Previous release 1.4.1(20180822.1)
#### Steps to Reproduce: ####
1. Open one blob container with soft delete enabled.
2. Upload a blob to it -> Try to delete the uploaded blob -> Switch to 'Active and deleted blobs'.
3. Right click the deleted blob then select 'Undelete Selected'.
#### Expected Experience: ####
The deleted blob can be undeleted successfully.
#### Actual Experience: ####
Fail to complete the activity.
 | 1.0 | Fail to complete the activity of undeleting blobs - **Storage Explorer Version**: 1.5.0
**Platform/OS Version**: Windows 10/ Linux Ubuntu 16.04/ MacOS High Sierra
**Architecture**: ia32
**Build Number**:20180905.3
**Commit**: 66ac14b9
**Regression From**: Previous release 1.4.1(20180822.1)
#### Steps to Reproduce: ####
1. Open one blob container with soft delete enabled.
2. Upload a blob to it -> Try to delete the uploaded blob -> Switch to 'Active and deleted blobs'.
3. Right click the deleted blob then select 'Undelete Selected'.
#### Expected Experience: ####
The deleted blob can be undeleted successfully.
#### Actual Experience: ####
Fail to complete the activity.
 | test | fail to complete the activity of undeleting blobs storage explorer version platform os version windows linux ubuntu macos high sierra architecture build number commit regression from previous release steps to reproduce open one blob container with soft delete enabled upload a blob to it try to delete the uploaded blob switch to active and deleted blobs right click the deleted blob then select undelete selected expected experience the deleted blob can be undeleted successfully actual experience fail to complete the activity | 1 |
409,753 | 27,751,179,601 | IssuesEvent | 2023-03-15 20:53:19 | liviuvj/sentiment-analysis-tool | https://api.github.com/repos/liviuvj/sentiment-analysis-tool | closed | Fix the project report | documentation | The corrections provided as *feedback* by the tutor in the comments on *issues* #11 and #12 will be implemented. | 1.0 | Fix the project report - The corrections provided as *feedback* by the tutor in the comments on *issues* #11 and #12 will be implemented. | non_test | fix the project report the corrections provided as feedback by the tutor in the comments on issues and | 0
106,693 | 9,179,388,989 | IssuesEvent | 2019-03-05 02:58:39 | Microsoft/azure-pipelines-agent | https://api.github.com/repos/Microsoft/azure-pipelines-agent | closed | [question] Publish Test Results task: Could not publish build level data. Object reference not set to an instance of an object. | Area: Test bug | I have two pipeline jobs, one for macOS and one for Windows. Both jobs invoke the same template file which contains the following:
```yml
- script: myFolder/gradlew -p ./myFolder/ clean build -PignoreTestFailures=true --refresh-dependencies --continue
displayName: 'Gradle - Build'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/build/test-results/test/*.xml'
mergeTestResults: true
testRunTitle: 'Tests'
displayName: 'Publish - Test Results'
```
The macOS build runs first, no warnings. Then I can see the following warning in the Windows log:
```
##[warning]Could not publish build level data. Object reference not set to an instance of an object.
```
Question: What does it mean? What is the problem?
Note: I am not using any Gradle Tools task.
Thank you! | 1.0 | [question] Publish Test Results task: Could not publish build level data. Object reference not set to an instance of an object. - I have two pipeline jobs, one for macOS and one for Windows. Both jobs invoke the same template file which contains the following:
```yml
- script: myFolder/gradlew -p ./myFolder/ clean build -PignoreTestFailures=true --refresh-dependencies --continue
displayName: 'Gradle - Build'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/build/test-results/test/*.xml'
mergeTestResults: true
testRunTitle: 'Tests'
displayName: 'Publish - Test Results'
```
The macOS build runs first, no warnings. Then I can see the following warning in the Windows log:
```
##[warning]Could not publish build level data. Object reference not set to an instance of an object.
```
Question: What does it mean? What is the problem?
Note: I am not using any Gradle Tools task.
Thank you! | test | publish test results task could not publish build level data object reference not set to an instance of an object i have two pipeline jobs one for macos and one for windows both jobs invoke the same template file which contains the followings yml script myfolder gradlew p myfolder clean build pignoretestfailures true refresh dependencies continue displayname gradle build task publishtestresults inputs testresultsformat junit testresultsfiles build test results test xml mergetestresults true testruntitle tests displayname publish test results the macos build runs first no warnings then i can see the following warning in the windows log could not publish build level data object reference not set to an instance of an object question what does it mean what is the problem note i am not using any gradle tools task thank you | 1 |
290,910 | 25,105,278,340 | IssuesEvent | 2022-11-08 16:11:34 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [TEST] DocumentSubsetBitsetCacheTests.testCacheUnderConcurrentAccess failing | >test-failure :Security/Security Team:Security | https://gradle-enterprise.elastic.co/s/wl7uqqpo2ep7e/tests/sbbvdtbroicas-3wxlzqkg2i6xo
```
java.lang.AssertionError: Query threads did not complete in expected time
at __randomizedtesting.SeedInfo.seed([56E0E4E4803B5BAC:BAEA805D1F939CFE]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.elasticsearch.xpack.core.security.authz.accesscontrol.DocumentSubsetBitsetCacheTests.lambda$testCacheUnderConcurrentAccess$13(DocumentSubsetBitsetCacheTests.java:396)
at org.elasticsearch.xpack.core.security.authz.accesscontrol.DocumentSubsetBitsetCacheTests.runTestOnIndices(DocumentSubsetBitsetCacheTests.java:554)
at org.elasticsearch.xpack.core.security.authz.accesscontrol.DocumentSubsetBitsetCacheTests.testCacheUnderConcurrentAccess(DocumentSubsetBitsetCacheTests.java:372)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
[2020-02-05T17:36:58,178][INFO ][o.e.x.c.s.a.a.DocumentSubsetBitsetCacheTests] [testCacheUnderConcurrentAccess] before test
[2020-02-05T17:36:58,764][INFO ][o.e.x.c.s.a.a.DocumentSubsetBitsetCache] [[pool-6-thread-13]] the Document Level Security BitSet cache is full which may impact performance; consider increasing the value of [xpack.security.dls.bitset.cache.size]
[2020-02-05T17:36:59,723][INFO ][o.e.x.c.s.a.a.DocumentSubsetBitsetCacheTests] [testCacheUnderConcurrentAccess] after test
```
Looks like the timeout chosen (1s) could be too small. | 1.0 | [TEST] DocumentSubsetBitsetCacheTests.testCacheUnderConcurrentAccess failing - https://gradle-enterprise.elastic.co/s/wl7uqqpo2ep7e/tests/sbbvdtbroicas-3wxlzqkg2i6xo
```
java.lang.AssertionError: Query threads did not complete in expected time
at __randomizedtesting.SeedInfo.seed([56E0E4E4803B5BAC:BAEA805D1F939CFE]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.elasticsearch.xpack.core.security.authz.accesscontrol.DocumentSubsetBitsetCacheTests.lambda$testCacheUnderConcurrentAccess$13(DocumentSubsetBitsetCacheTests.java:396)
at org.elasticsearch.xpack.core.security.authz.accesscontrol.DocumentSubsetBitsetCacheTests.runTestOnIndices(DocumentSubsetBitsetCacheTests.java:554)
at org.elasticsearch.xpack.core.security.authz.accesscontrol.DocumentSubsetBitsetCacheTests.testCacheUnderConcurrentAccess(DocumentSubsetBitsetCacheTests.java:372)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
[2020-02-05T17:36:58,178][INFO ][o.e.x.c.s.a.a.DocumentSubsetBitsetCacheTests] [testCacheUnderConcurrentAccess] before test
[2020-02-05T17:36:58,764][INFO ][o.e.x.c.s.a.a.DocumentSubsetBitsetCache] [[pool-6-thread-13]] the Document Level Security BitSet cache is full which may impact performance; consider increasing the value of [xpack.security.dls.bitset.cache.size]
[2020-02-05T17:36:59,723][INFO ][o.e.x.c.s.a.a.DocumentSubsetBitsetCacheTests] [testCacheUnderConcurrentAccess] after test
```
Looks like the timeout chosen (1s) could be too small. | test | ย documentsubsetbitsetcachetests testcacheunderconcurrentaccess failing java lang assertionerror query threads did not complete in expected timeclose stacktrace at randomizedtesting seedinfo seed at org junit assert fail assert java at org junit assert asserttrue assert java at org elasticsearch xpack core security authz accesscontrol documentsubsetbitsetcachetests lambda testcacheunderconcurrentaccess documentsubsetbitsetcachetests java at org elasticsearch xpack core security authz accesscontrol documentsubsetbitsetcachetests runtestonindices documentsubsetbitsetcachetests java at org elasticsearch xpack core security authz accesscontrol documentsubsetbitsetcachetests testcacheunderconcurrentaccess documentsubsetbitsetcachetests java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch 
randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch 
randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at java lang thread run thread java before test the document level security bitset cache is full which may impact performance consider increasing the value of after test looks like the timeout chosen could be too small | 1 |
636,801 | 20,609,502,957 | IssuesEvent | 2022-03-07 06:48:34 | Dans-Plugins/Medieval-Factions | https://api.github.com/repos/Dans-Plugins/Medieval-Factions | reopened | Fix players getting teleported to each other's faction homes if they initiate the home command at the same time. | bug priority might be fixed | **Describe the bug**
A clear and concise description of what the bug is.
Players are teleported to each other's faction homes.
---
**To Reproduce**
Steps to reproduce the behavior:
Two players must initiate the /mf home command at the same time.
---
**Expected behavior**
A clear and concise description of what you expected to happen.
The players should teleport to their respective faction homes. | 1.0 | Fix players getting teleported to each other's faction homes if they initiate the home command at the same time. - **Describe the bug**
A clear and concise description of what the bug is.
Players are teleported to each other's faction homes.
---
**To Reproduce**
Steps to reproduce the behavior:
Two players must initiate the /mf home command at the same time.
---
**Expected behavior**
A clear and concise description of what you expected to happen.
The players should teleport to their respective faction homes. | non_test | fix players getting teleported to each other's faction homes if they initiate the home command at the same time describe the bug a clear and concise description of what the bug is players are teleported to each other's faction homes to reproduce steps to reproduce the behavior two players must initiate the mf home command at the same time expected behavior a clear and concise description of what you expected to happen the players should teleport to their respective faction homes | 0
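The race in the Medieval Factions record above (two players running /mf home at the same moment and swapping destinations) is the classic shared-mutable-state pattern: a single shared "pending teleport" slot that concurrent commands overwrite. A minimal Python sketch of the fix — resolving the destination and binding it to the initiating player at command time. All names here are illustrative, not the plugin's actual (Java) code:

```python
import threading

# Hypothetical faction-home lookup; structure is illustrative only.
FACTION_HOME = {"alice": "home_A", "bob": "home_B"}

def handle_home_command(player, teleports):
    # Fixed pattern: resolve the destination immediately and key it by
    # the requesting player, so a second concurrent command cannot
    # overwrite this player's target before the teleport fires.
    teleports[player] = FACTION_HOME[player]

def run_concurrently():
    teleports = {}
    threads = [threading.Thread(target=handle_home_command,
                                args=(p, teleports))
               for p in ("alice", "bob")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return teleports
```

With per-player binding, each simultaneous command deterministically lands at that player's own faction home, regardless of scheduling order.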
210,575 | 16,108,544,527 | IssuesEvent | 2021-04-27 17:54:14 | folkarps/F3 | https://api.github.com/repos/folkarps/F3 | closed | Weather presets should be changed to use decaying fog | S5:Tested; Awaiting Release T:Component improvement | The current weather presets (written by me, oops) use fog to simulate limited visibility during storms. The problem is, they use fog without a decay value, which means that the sky is obscured with even a small amount of fog. By adding in a small decay value, you can retain the limited visibility while also being able to see the Pretty Volumetric Clouds©.
Possible counterpoint, decaying fog is altitude dependent and might not be possible to set generically for all possible elevations. But I think you can probably work something out with a small decay value. | 1.0 | Weather presets should be changed to use decaying fog - The current weather presets (written by me, oops) use fog to simulate limited visibility during storms. The problem is, they use fog without a decay value, which means that the sky is obscured with even a small amount of fog. By adding in a small decay value, you can retain the limited visibility while also being able to see the Pretty Volumetric Clouds©.
Possible counterpoint, decaying fog is altitude dependent and might not be possible to set generically for all possible elevations. But I think you can probably work something out with a small decay value. | test | weather presets should be changed to use decaying fog the current weather presets written by me oops use fog to simulate limited visibility during storms the problem is they use fog without a decay value which means that the sky is obscured with even a small amount of fog by adding in a small decay value you can retain the limited visibility while also being able to see the pretty volumetric clouds© possible counterpoint decaying fog is altitude dependent and might not be possible to set generically for all possible elevations but i think you can probably work something out with a small decay value | 1
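The F3 record above turns on fog parameters of the form (value, decay, base altitude). As a rough illustration of why a small decay value keeps ground-level visibility limited while letting the sky clear with height, here is a hedged exponential model in Python — the engine's real fog curve is not documented here, so treat the formula and the default parameter values as assumptions:

```python
import math

def fog_density(altitude, fog_value=0.7, fog_decay=0.005, fog_base=0.0):
    """Rough exponential model: full density at fog_base, thinning
    with height when fog_decay > 0; uniform haze when fog_decay == 0
    (which is what obscures the sky in the current presets)."""
    return fog_value * math.exp(-fog_decay * max(0.0, altitude - fog_base))
```

With `fog_decay=0.0` the density is the same at every altitude, so even a small amount of fog blots out the clouds; any positive decay thins the fog toward the sky while leaving visibility near the ground unchanged.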
55,833 | 6,493,532,164 | IssuesEvent | 2017-08-21 17:32:21 | cul-2016/quiz | https://api.github.com/repos/cul-2016/quiz | closed | Housekeeping: Cookie Warning and Privacy Statement | please-test priority-2 T1d T4h | As a student or lecturer using the system, I want to be warned that a cookie is being set on my device before it is set, and have the option to find out about what cookies are being used for. I also want to be able to view a privacy statement to tell me how my data will be used, and how to have my data deleted.
- [x] Create a static webpage to hold the privacy statement (I will provide appropriate text)
- [x] On the create account page (https://www.quodl.co.uk/#/register-student), immediately above the 'Register' button, include a checkbox (easily checkable on a mobile device or tablet) along with the text "I agree with the [privacy statement](/privacy.html), including the [use of cookies](/privacy.html#cookies)."
- [x] On clicking either of the links, create either an in-window pop-up or new window displaying either the privacy information, scrolling to the cookies section where necessary.
- [x] On clicking 'Register', check the status of the consent checkbox. If it is selected, continue as currently. If it is not, give an alert where the user sees a message along the lines of "Tap the box to agree and proceed", and remain on current page.
- [x] On the lecturer and student dashboard page ([https://www.quodl.co.uk/#/dashboard](https://www.quodl.co.uk/#/dashboard)), below any modules that have been added, include a small piece of text "View [privacy statement](/privacy.html)", which as above either creates an in-window pop-up or a new window with the privacy text in.
| 1.0 | Housekeeping: Cookie Warning and Privacy Statement - As a student or lecturer using the system, I want to be warned that a cookie is being set on my device before it is set, and have the option to find out about what cookies are being used for. I also want to be able to view a privacy statement to tell me how my data will be used, and how to have my data deleted.
- [x] Create a static webpage to hold the privacy statement (I will provide appropriate text)
- [x] On the create account page (https://www.quodl.co.uk/#/register-student), immediately above the 'Register' button, include a checkbox (easily checkable on a mobile device or tablet) along with the text "I agree with the [privacy statement](/privacy.html), including the [use of cookies](/privacy.html#cookies)."
- [x] On clicking either of the links, create either an in-window pop-up or new window displaying either the privacy information, scrolling to the cookies section where necessary.
- [x] On clicking 'Register', check the status of the consent checkbox. If it is selected, continue as currently. If it is not, give an alert where the user sees a message along the lines of "Tap the box to agree and proceed", and remain on current page.
- [x] On the lecturer and student dashboard page ([https://www.quodl.co.uk/#/dashboard](https://www.quodl.co.uk/#/dashboard)), below any modules that have been added, include a small piece of text "View [privacy statement](/privacy.html)", which as above either creates an in-window pop-up or a new window with the privacy text in.
| test | housekeeping cookie warning and privacy statement as a student or lecturer using the system i want to be warned that a cookie is being set on my device before it is set and have the option to find out about what cookies are being used for i also want to be able to view a privacy statement to tell me how my data will be used and how to have my data deleted create a static webpage to hold the privacy statement i will provide appropriate text on the create account page immediately above the register button include a checkbox easily checkable on a mobile device or tablet along with the text i agree with the privacy html including the privacy html cookies on clicking either of the links create either an in window pop up or new window displaying either the privacy information scrolling to the cookies section where necessary on clicking register check the status of the consent checkbox if it is selected continue as currently if it is not give an alert where the user sees a message along the lines of tap the box to agree and proceed and remain on current page on the lecturer and student dashboard page below any modules that have been added include a small piece of text view privacy html which as above either creates an in window pop up or a new window with the privacy text in | 1 |
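The quiz record's "On clicking 'Register', check the status of the consent checkbox" step is a simple gate. A hedged Python sketch of that control flow (the real implementation would be client-side JavaScript; the function names here are hypothetical):

```python
def attempt_register(consent_checked, submit):
    """Gate the registration action on the consent checkbox: block
    with the issue's suggested message when unchecked, otherwise
    proceed with the normal submit path."""
    if not consent_checked:
        return "Tap the box to agree and proceed"
    return submit()
```

The same check would run on every submit attempt, so the user stays on the current page until the box is ticked.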
93,674 | 11,798,069,123 | IssuesEvent | 2020-03-18 13:49:33 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | opened | Video player Carbon version: visual design (birthday cake version) | design design: visual | ### Our story
This is to update the default Kaltura video player interface with IDL specs such as icons, containers, colors, etc.
The design from the Brand team and the func specs from the content team are on this [Box folder](https://ibm.box.com/s/0iou7yl21n58whoqhil1n1gbbjdqjlud)
### Notes
- Finalize the design and specs per the initial design and the func specs.
### Acceptance criteria
- [ ] Final design has been approved by DGC leadership
- [ ] Final design has been agreed upon by stakeholders
- [ ] IDL
- [ ] Content team
- [ ] IBM.com reboot team
| 2.0 | Video player Carbon version: visual design (birthday cake version) - ### Our story
This is to update the default Kaltura video player interface with IDL specs such as icons, containers, colors, etc.
The design from the Brand team and the func specs from the content team are on this [Box folder](https://ibm.box.com/s/0iou7yl21n58whoqhil1n1gbbjdqjlud)
### Notes
- Finalize the design and specs per the initial design and the func specs.
### Acceptance criteria
- [ ] Final design has been approved by DGC leadership
- [ ] Final design has been agreed upon by stakeholders
- [ ] IDL
- [ ] Content team
- [ ] IBM.com reboot team
| non_test | video player carbon version visual design birthday cake version our story this is to update the default kaltura video player interface with idl specs such as icons containers colors etc the design from the brand team and the func specs from the content team are on this notes finalize the design and specs per the initial design and the func specs acceptance criteria final design has been approved by dgc leadership final design has been agreed upon by stakeholders idl content team ibm com reboot team | 0 |
68,090 | 14,900,463,606 | IssuesEvent | 2021-01-21 15:26:32 | doc-ai/tensorio-webinar | https://api.github.com/repos/doc-ai/tensorio-webinar | closed | CVE-2018-11771 (Medium) detected in commons-compress-1.12.jar | security vulnerability | ## CVE-2018-11771 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.12.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE and ar, cpio,
jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: tensorio-webinar/android/app/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.12/84caa68576e345eb5e7ae61a0e5a9229eb100d7b/commons-compress-1.12.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.1.1.jar (Root Library)
- sdk-common-27.1.1.jar
- sdklib-27.1.1.jar
- :x: **commons-compress-1.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/doc-ai/tensorio-webinar/commit/941f35bff0d8faa262e6aab872f29d5c55955b92">941f35bff0d8faa262e6aab872f29d5c55955b92</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, the read method of Apache Commons Compress 1.7 to 1.17's ZipArchiveInputStream can fail to return the correct EOF indication after the end of the stream has been reached. When combined with a java.io.InputStreamReader this can lead to an infinite stream, which can be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2018-08-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11771>CVE-2018-11771</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11771">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11771</a></p>
<p>Release Date: 2018-08-16</p>
<p>Fix Resolution: 1.18</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.12","isTransitiveDependency":true,"dependencyTree":"com.android.tools.lint:lint-gradle:27.1.1;com.android.tools:sdk-common:27.1.1;com.android.tools:sdklib:27.1.1;org.apache.commons:commons-compress:1.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.18"}],"vulnerabilityIdentifier":"CVE-2018-11771","vulnerabilityDetails":"When reading a specially crafted ZIP archive, the read method of Apache Commons Compress 1.7 to 1.17\u0027s ZipArchiveInputStream can fail to return the correct EOF indication after the end of the stream has been reached. When combined with a java.io.InputStreamReader this can lead to an infinite stream, which can be used to mount a denial of service attack against services that use Compress\u0027 zip package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11771","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-11771 (Medium) detected in commons-compress-1.12.jar - ## CVE-2018-11771 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.12.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE and ar, cpio,
jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: tensorio-webinar/android/app/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.12/84caa68576e345eb5e7ae61a0e5a9229eb100d7b/commons-compress-1.12.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.1.1.jar (Root Library)
- sdk-common-27.1.1.jar
- sdklib-27.1.1.jar
- :x: **commons-compress-1.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/doc-ai/tensorio-webinar/commit/941f35bff0d8faa262e6aab872f29d5c55955b92">941f35bff0d8faa262e6aab872f29d5c55955b92</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, the read method of Apache Commons Compress 1.7 to 1.17's ZipArchiveInputStream can fail to return the correct EOF indication after the end of the stream has been reached. When combined with a java.io.InputStreamReader this can lead to an infinite stream, which can be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2018-08-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11771>CVE-2018-11771</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11771">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11771</a></p>
<p>Release Date: 2018-08-16</p>
<p>Fix Resolution: 1.18</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.12","isTransitiveDependency":true,"dependencyTree":"com.android.tools.lint:lint-gradle:27.1.1;com.android.tools:sdk-common:27.1.1;com.android.tools:sdklib:27.1.1;org.apache.commons:commons-compress:1.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.18"}],"vulnerabilityIdentifier":"CVE-2018-11771","vulnerabilityDetails":"When reading a specially crafted ZIP archive, the read method of Apache Commons Compress 1.7 to 1.17\u0027s ZipArchiveInputStream can fail to return the correct EOF indication after the end of the stream has been reached. When combined with a java.io.InputStreamReader this can lead to an infinite stream, which can be used to mount a denial of service attack against services that use Compress\u0027 zip package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11771","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in commons compress jar cve medium severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate and ar cpio jar tar zip dump arj path to dependency file tensorio webinar android app build gradle path to vulnerable library home wss scanner gradle caches modules files org apache commons commons compress commons compress jar dependency hierarchy lint gradle jar root library sdk common jar sdklib jar x commons compress jar vulnerable library found in head commit a href found in base branch master vulnerability details when reading a specially crafted 
zip archive the read method of apache commons compress to s ziparchiveinputstream can fail to return the correct eof indication after the end of the stream has been reached when combined with a java io inputstreamreader this can lead to an infinite stream which can be used to mount a denial of service attack against services that use compress zip package publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails when reading a specially crafted zip archive the read method of apache commons compress to ziparchiveinputstream can fail to return the correct eof indication after the end of the stream has been reached when combined with a java io inputstreamreader this can lead to an infinite stream which can be used to mount a denial of service attack against services that use compress zip package vulnerabilityurl | 0 |
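The commons-compress flaw above — a `ZipArchiveInputStream` read that fails to report EOF, turning the consumer into an infinite loop — generalizes to any stream consumer. A hedged Python sketch of both the failure mode and a defensive bounded read loop; the class and function names are illustrative, not the Java API:

```python
import io

class BrokenEOFStream(io.RawIOBase):
    """Mimics the bug class: readinto() returns 0 instead of ever
    signalling end-of-stream, so a naive `while True: read()` loop
    never terminates."""
    def readinto(self, b):
        return 0  # never reports EOF

def bounded_copy(stream, limit=1 << 20):
    """Defensive consumer: stop on EOF, on empty reads, or at a hard
    size cap, so a misbehaving stream cannot mount a DoS."""
    total = bytearray()
    while len(total) < limit:
        chunk = stream.read(4096)
        if not chunk:          # None or b'' -> bail out
            break
        total.extend(chunk)
    return bytes(total)
```

Against a well-behaved stream the loop copies everything and stops at EOF; against the broken stream it terminates immediately instead of spinning forever, which is the mitigation services needed before upgrading to Compress 1.18.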
181,139 | 14,005,220,477 | IssuesEvent | 2020-10-28 18:08:47 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | sql: TestSessionFinishRollsBackTxn failed | C-test-failure O-robot branch-release-20.1 | [(sql).TestSessionFinishRollsBackTxn failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2402544&tab=buildLog) on [release-20.1@a7c36bc7f70035c9e0e796f194c3fb489040c230](https://github.com/cockroachdb/cockroach/commits/a7c36bc7f70035c9e0e796f194c3fb489040c230):
```
I201028 17:58:18.766242 163918 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51868,hostssl,user=root] statement filter running on: SAVEPOINT cockroach_restart, with err=<nil>
I201028 17:58:18.781072 163918 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51868,hostssl,user=root] statement filter running on: INSERT INTO t.public.test(k, v) VALUES (1, 'a'), with err=<nil>
I201028 17:58:18.782469 163918 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51868,hostssl,user=root] statement filter running on: RELEASE SAVEPOINT cockroach_restart, with err=<nil>
I201028 17:58:18.783787 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: SET TRANSACTION PRIORITY LOW, with err=<nil>
I201028 17:58:18.784537 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: SELECT count(1) FROM t.test, with err=<nil>
I201028 17:58:18.785303 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: DELETE FROM t.test, with err=<nil>
I201028 17:58:18.786090 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: COMMIT TRANSACTION, with err=<nil>
I201028 17:58:34.381084 164277 sql/txn_restart_test.go:344 [intExec=get-tables] statement filter running on: SELECT table_id FROM crdb_internal.tables AS OF SYSTEM TIME '-1s' WHERE (schema_name = 'public') AND (drop_time IS NULL), with err=<nil>
I201028 17:58:19.560214 164240 kv/kvserver/replica_consistency.go:246 [n1,consistencyChecker,s1,r4/1:/System{/tsd-tse}] triggering stats recomputation to resolve delta of {ContainsEstimates:1412 LastUpdateNanos:1603907898602187123 IntentAge:0 GCBytesAge:0 LiveBytes:-34495 LiveCount:-679 KeyBytes:-33048 KeyCount:-679 ValBytes:-1447 ValCount:-679 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I201028 17:58:20.388258 163113 gossip/gossip.go:1527 [n1] node has connected to cluster via gossip
I201028 17:58:34.381420 163113 kv/kvserver/stores.go:266 [n1] wrote 0 node addresses to persistent storage
W201028 17:58:27.550089 163116 kv/kvserver/closedts/provider/provider.go:152 [ct-closer] unable to move closed timestamp forward: not live
github.com/cockroachdb/cockroach/pkg/kv/kvserver.init
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/node_liveness.go:57
runtime.doInit
/usr/local/go/src/runtime/proc.go:5222
runtime.doInit
/usr/local/go/src/runtime/proc.go:5217
runtime.doInit
/usr/local/go/src/runtime/proc.go:5217
runtime.doInit
/usr/local/go/src/runtime/proc.go:5217
runtime.main
/usr/local/go/src/runtime/proc.go:190
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
I201028 17:58:34.381653 163292 kv/kvserver/node_liveness.go:821 [n1,liveness-hb] retrying liveness update after kvserver.errRetryLiveness: result is ambiguous (context done during DistSender.Send: context deadline exceeded)
W201028 17:58:34.381688 163292 kv/kvserver/node_liveness.go:563 [n1,liveness-hb] slow heartbeat took 11.3s
W201028 17:58:34.381739 163292 kv/kvserver/node_liveness.go:488 [n1,liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 4.5s
(1) operation "node liveness heartbeat" timed out after 4.5s
Wraps: (2) context deadline exceeded
Error types: (1) *contextutil.TimeoutError (2) context.deadlineExceededError
I201028 17:58:34.382008 163037 util/stop/stopper.go:539 quiescing
I201028 17:58:28.550710 163284 server/status/runtime.go:498 [n1] runtime stats: 410 MiB RSS, 265 goroutines, 49 MiB/249 MiB/128 MiB GO alloc/idle/total, 23 MiB/86 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (197x), 38 MiB/38 MiB (r/w)net
I201028 17:58:34.382379 164439 kv/txn.go:739 [n1] async rollback failed: aborted in distSender: context canceled
I201028 17:58:34.382618 163154 kv/kvserver/queue.go:579 [n1,s1,r2/1:/System/NodeLiveness{-Max}] rate limited in MaybeAdd (merge): node unavailable; try another peer
W201028 17:58:34.383281 164427 kv/txn.go:607 [n1,s1,r23/1:/Table/2{7-8}] failure aborting transaction: node unavailable; try another peer; abort caused by: result is ambiguous (server shutdown)
I201028 17:58:34.383305 164427 kv/kvserver/node_liveness.go:821 [n1,s1,r23/1:/Table/2{7-8}] retrying liveness update after kvserver.errRetryLiveness: result is ambiguous (server shutdown)
W201028 17:58:34.383360 164427 kv/txn.go:607 [n1,s1,r23/1:/Table/2{7-8}] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
W201028 17:58:34.383376 164427 kv/kvserver/node_liveness.go:563 [n1,s1,r23/1:/Table/2{7-8}] slow heartbeat took 5.8s
E201028 17:58:34.383400 164427 kv/kvserver/replica_range_lease.go:339 [n1,s1,r23/1:/Table/2{7-8}] node unavailable; try another peer
I201028 17:58:34.387190 163171 kv/kvserver/queue.go:579 [n1,s1,r22/1:/Table/2{6-7}] rate limited in MaybeAdd (replicate): node unavailable; try another peer
W201028 17:58:34.387603 163285 ts/db.go:194 [n1,ts-poll] error writing time series data: failed to send RPC: sending to all 1 replicas failed; last error: (err: node unavailable; try another peer) <nil>
I201028 17:58:34.387772 164405 sql/txn_restart_test.go:344 [n1,intExec=stmt-diag-poll] statement filter running on: SELECT id, statement_fingerprint FROM system.statement_diagnostics_requests WHERE completed = false, with err=aborted during DistSender.Send: context canceled
W201028 17:58:34.387806 164405 kv/txn.go:607 [n1,intExec=stmt-diag-poll] failure aborting transaction: node unavailable; try another peer; abort caused by: aborted during DistSender.Send: context canceled
W201028 17:58:34.388024 163253 kv/kvserver/store.go:1631 [n1,s1,r1/1:/{Min-System/NodeL…}] could not gossip first range descriptor: node unavailable; try another peer
I201028 17:58:34.388412 163327 kv/kvserver/queue.go:1190 [n1,replicate] purgatory is now empty
W201028 17:58:34.423839 163297 server/node.go:801 [n1,summaries] error recording status summaries: node unavailable; try another peer
--- FAIL: TestSessionFinishRollsBackTxn/CommitWait (15.63s)
conn_executor_test.go:236: Looks like the checking tx was unexpectedly blocked. It took 15.597230262s to commit.
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestSessionFinishRollsBackTxn PKG=./pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestSessionFinishRollsBackTxn.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 1.0 | sql: TestSessionFinishRollsBackTxn failed - [(sql).TestSessionFinishRollsBackTxn failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2402544&tab=buildLog) on [release-20.1@a7c36bc7f70035c9e0e796f194c3fb489040c230](https://github.com/cockroachdb/cockroach/commits/a7c36bc7f70035c9e0e796f194c3fb489040c230):
```
I201028 17:58:18.766242 163918 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51868,hostssl,user=root] statement filter running on: SAVEPOINT cockroach_restart, with err=<nil>
I201028 17:58:18.781072 163918 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51868,hostssl,user=root] statement filter running on: INSERT INTO t.public.test(k, v) VALUES (1, 'a'), with err=<nil>
I201028 17:58:18.782469 163918 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51868,hostssl,user=root] statement filter running on: RELEASE SAVEPOINT cockroach_restart, with err=<nil>
I201028 17:58:18.783787 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: SET TRANSACTION PRIORITY LOW, with err=<nil>
I201028 17:58:18.784537 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: SELECT count(1) FROM t.test, with err=<nil>
I201028 17:58:18.785303 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: DELETE FROM t.test, with err=<nil>
I201028 17:58:18.786090 164029 sql/txn_restart_test.go:344 [n1,client=127.0.0.1:51856,hostssl,user=root] statement filter running on: COMMIT TRANSACTION, with err=<nil>
I201028 17:58:34.381084 164277 sql/txn_restart_test.go:344 [intExec=get-tables] statement filter running on: SELECT table_id FROM crdb_internal.tables AS OF SYSTEM TIME '-1s' WHERE (schema_name = 'public') AND (drop_time IS NULL), with err=<nil>
I201028 17:58:19.560214 164240 kv/kvserver/replica_consistency.go:246 [n1,consistencyChecker,s1,r4/1:/System{/tsd-tse}] triggering stats recomputation to resolve delta of {ContainsEstimates:1412 LastUpdateNanos:1603907898602187123 IntentAge:0 GCBytesAge:0 LiveBytes:-34495 LiveCount:-679 KeyBytes:-33048 KeyCount:-679 ValBytes:-1447 ValCount:-679 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I201028 17:58:20.388258 163113 gossip/gossip.go:1527 [n1] node has connected to cluster via gossip
I201028 17:58:34.381420 163113 kv/kvserver/stores.go:266 [n1] wrote 0 node addresses to persistent storage
W201028 17:58:27.550089 163116 kv/kvserver/closedts/provider/provider.go:152 [ct-closer] unable to move closed timestamp forward: not live
github.com/cockroachdb/cockroach/pkg/kv/kvserver.init
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/node_liveness.go:57
runtime.doInit
/usr/local/go/src/runtime/proc.go:5222
runtime.doInit
/usr/local/go/src/runtime/proc.go:5217
runtime.doInit
/usr/local/go/src/runtime/proc.go:5217
runtime.doInit
/usr/local/go/src/runtime/proc.go:5217
runtime.main
/usr/local/go/src/runtime/proc.go:190
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
I201028 17:58:34.381653 163292 kv/kvserver/node_liveness.go:821 [n1,liveness-hb] retrying liveness update after kvserver.errRetryLiveness: result is ambiguous (context done during DistSender.Send: context deadline exceeded)
W201028 17:58:34.381688 163292 kv/kvserver/node_liveness.go:563 [n1,liveness-hb] slow heartbeat took 11.3s
W201028 17:58:34.381739 163292 kv/kvserver/node_liveness.go:488 [n1,liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 4.5s
(1) operation "node liveness heartbeat" timed out after 4.5s
Wraps: (2) context deadline exceeded
Error types: (1) *contextutil.TimeoutError (2) context.deadlineExceededError
I201028 17:58:34.382008 163037 util/stop/stopper.go:539 quiescing
I201028 17:58:28.550710 163284 server/status/runtime.go:498 [n1] runtime stats: 410 MiB RSS, 265 goroutines, 49 MiB/249 MiB/128 MiB GO alloc/idle/total, 23 MiB/86 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (197x), 38 MiB/38 MiB (r/w)net
I201028 17:58:34.382379 164439 kv/txn.go:739 [n1] async rollback failed: aborted in distSender: context canceled
I201028 17:58:34.382618 163154 kv/kvserver/queue.go:579 [n1,s1,r2/1:/System/NodeLiveness{-Max}] rate limited in MaybeAdd (merge): node unavailable; try another peer
W201028 17:58:34.383281 164427 kv/txn.go:607 [n1,s1,r23/1:/Table/2{7-8}] failure aborting transaction: node unavailable; try another peer; abort caused by: result is ambiguous (server shutdown)
I201028 17:58:34.383305 164427 kv/kvserver/node_liveness.go:821 [n1,s1,r23/1:/Table/2{7-8}] retrying liveness update after kvserver.errRetryLiveness: result is ambiguous (server shutdown)
W201028 17:58:34.383360 164427 kv/txn.go:607 [n1,s1,r23/1:/Table/2{7-8}] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
W201028 17:58:34.383376 164427 kv/kvserver/node_liveness.go:563 [n1,s1,r23/1:/Table/2{7-8}] slow heartbeat took 5.8s
E201028 17:58:34.383400 164427 kv/kvserver/replica_range_lease.go:339 [n1,s1,r23/1:/Table/2{7-8}] node unavailable; try another peer
I201028 17:58:34.387190 163171 kv/kvserver/queue.go:579 [n1,s1,r22/1:/Table/2{6-7}] rate limited in MaybeAdd (replicate): node unavailable; try another peer
W201028 17:58:34.387603 163285 ts/db.go:194 [n1,ts-poll] error writing time series data: failed to send RPC: sending to all 1 replicas failed; last error: (err: node unavailable; try another peer) <nil>
I201028 17:58:34.387772 164405 sql/txn_restart_test.go:344 [n1,intExec=stmt-diag-poll] statement filter running on: SELECT id, statement_fingerprint FROM system.statement_diagnostics_requests WHERE completed = false, with err=aborted during DistSender.Send: context canceled
W201028 17:58:34.387806 164405 kv/txn.go:607 [n1,intExec=stmt-diag-poll] failure aborting transaction: node unavailable; try another peer; abort caused by: aborted during DistSender.Send: context canceled
W201028 17:58:34.388024 163253 kv/kvserver/store.go:1631 [n1,s1,r1/1:/{Min-System/NodeL…}] could not gossip first range descriptor: node unavailable; try another peer
I201028 17:58:34.388412 163327 kv/kvserver/queue.go:1190 [n1,replicate] purgatory is now empty
W201028 17:58:34.423839 163297 server/node.go:801 [n1,summaries] error recording status summaries: node unavailable; try another peer
--- FAIL: TestSessionFinishRollsBackTxn/CommitWait (15.63s)
conn_executor_test.go:236: Looks like the checking tx was unexpectedly blocked. It took 15.597230262s to commit.
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestSessionFinishRollsBackTxn PKG=./pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestSessionFinishRollsBackTxn.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | sql testsessionfinishrollsbacktxn failed on sql txn restart test go statement filter running on savepoint cockroach restart with err sql txn restart test go statement filter running on insert into t public test k v values a with err sql txn restart test go statement filter running on release savepoint cockroach restart with err sql txn restart test go statement filter running on set transaction priority low with err sql txn restart test go statement filter running on select count from t test with err sql txn restart test go statement filter running on delete from t test with err sql txn restart test go statement filter running on commit transaction with err sql txn restart test go statement filter running on select table id from crdb internal tables as of system time where schema name public and drop time is null with err kv kvserver replica consistency go triggering stats recomputation to resolve delta of containsestimates lastupdatenanos intentage gcbytesage livebytes livecount keybytes keycount valbytes valcount intentbytes intentcount sysbytes syscount gossip gossip go node has connected to cluster via gossip kv kvserver stores go wrote node addresses to persistent storage kv kvserver closedts provider provider go unable to move closed timestamp forward not live github com cockroachdb cockroach pkg kv kvserver init go src github com cockroachdb cockroach pkg kv kvserver node liveness go runtime doinit usr local go src runtime proc go runtime doinit usr local go src runtime proc go runtime doinit usr local go src runtime proc go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s kv kvserver node liveness go retrying liveness update after kvserver errretryliveness result is ambiguous context done during distsender send context deadline exceeded kv kvserver node liveness go slow heartbeat took kv kvserver node liveness go failed node liveness heartbeat operation node 
liveness heartbeat timed out after operation node liveness heartbeat timed out after wraps context deadline exceeded error types contextutil timeouterror context deadlineexceedederror util stop stopper go quiescing server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net kv txn go async rollback failed aborted in distsender context canceled kv kvserver queue go rate limited in maybeadd merge node unavailable try another peer kv txn go failure aborting transaction node unavailable try another peer abort caused by result is ambiguous server shutdown kv kvserver node liveness go retrying liveness update after kvserver errretryliveness result is ambiguous server shutdown kv txn go failure aborting transaction node unavailable try another peer abort caused by node unavailable try another peer kv kvserver node liveness go slow heartbeat took kv kvserver replica range lease go node unavailable try another peer kv kvserver queue go rate limited in maybeadd replicate node unavailable try another peer ts db go error writing time series data failed to send rpc sending to all replicas failed last error err node unavailable try another peer sql txn restart test go statement filter running on select id statement fingerprint from system statement diagnostics requests where completed false with err aborted during distsender send context canceled kv txn go failure aborting transaction node unavailable try another peer abort caused by aborted during distsender send context canceled kv kvserver store go could not gossip first range descriptor node unavailable try another peer kv kvserver queue go purgatory is now empty server node go error recording status summaries node unavailable try another peer fail testsessionfinishrollsbacktxn commitwait conn executor test go looks like the checking tx was unexpectedly blocked it took to commit more parameters goflags json make stressrace tests 
testsessionfinishrollsbacktxn pkg pkg sql testtimeout stressflags timeout powered by | 1 |
171,482 | 13,235,090,201 | IssuesEvent | 2020-08-18 17:25:26 | spring-projects/spring-framework | https://api.github.com/repos/spring-projects/spring-framework | closed | Exception occurs when testing with TestNG 7.3.0 | in: test status: waiting-for-feedback status: waiting-for-triage | After upgrading TestNG from 7.1.0 to 7.3.0, tests fail with:
```
java.lang.IllegalStateException: org.springframework.web.context.support.GenericWebApplicationContext@5567c481 has been closed already
```
I have tests that create multiple application contexts.
Is TestNG 7.3.0 supported? | 1.0 | Exception occurs when testing with TestNG 7.3.0 - After upgrading TestNG from 7.1.0 to 7.3.0, tests fail with:
```
java.lang.IllegalStateException: org.springframework.web.context.support.GenericWebApplicationContext@5567c481 has been closed already
```
I have tests that create multiple application contexts.
Is TestNG 7.3.0 supported? | test | exception occurs when testing with testng after upgrading testng from to tests fail with java lang illegalstateexception org springframework web context support genericwebapplicationcontext has been closed already i have tests that create multiple application contexts is testng supported | 1 |
319,774 | 27,400,484,154 | IssuesEvent | 2023-03-01 00:00:29 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | opened | Fix statistical.test_std | Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4296783117/jobs/7488948449" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
| 1.0 | Fix statistical.test_std - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4296783117/jobs/7488948449" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
| test | fix statistical test std tensorflow img src torch img src numpy img src jax img src not found not found | 1 |
132,980 | 18,278,721,480 | IssuesEvent | 2021-10-04 22:33:31 | ghc-dev/Diana-Mosley | https://api.github.com/repos/ghc-dev/Diana-Mosley | closed | CVE-2020-14330 (Medium) detected in ansible-2.9.9.tar.gz - autoclosed | security vulnerability | ## CVE-2020-14330 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Diana-Mosley/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Diana-Mosley/commit/a46fff3cca70dc22767bd82a3b2283dbab82bec0">a46fff3cca70dc22767bd82a3b2283dbab82bec0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality.
<p>Publish Date: 2020-09-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330>CVE-2020-14330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 2.10.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14330","vulnerabilityDetails":"An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-14330 (Medium) detected in ansible-2.9.9.tar.gz - autoclosed - ## CVE-2020-14330 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Diana-Mosley/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Diana-Mosley/commit/a46fff3cca70dc22767bd82a3b2283dbab82bec0">a46fff3cca70dc22767bd82a3b2283dbab82bec0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality.
<p>Publish Date: 2020-09-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330>CVE-2020-14330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 2.10.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14330","vulnerabilityDetails":"An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in ansible tar gz autoclosed cve medium severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file diana mosley requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch master vulnerability details an improper output neutralization for logs flaw was found in ansible when using the uri module where sensitive data is exposed to content and json output this flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module the highest threat from this vulnerability is to data confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack 
complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree ansible isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails an improper output neutralization for logs flaw was found in ansible when using the uri module where sensitive data is exposed to content and json output this flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module the highest threat from this vulnerability is to data confidentiality vulnerabilityurl | 0 |
297,701 | 25,757,330,894 | IssuesEvent | 2022-12-08 17:24:05 | 10up/ElasticPress | https://api.github.com/repos/10up/ElasticPress | closed | Improve PHP Unit Tests Coverage | enhancement needs tests | <!-- Thank you for suggesting an idea to make things better. Please fill in as much of the template below as you can. -->
**Is your enhancement related to a problem? Please describe.**
<!-- Please describe the problem you are trying to solve. -->
This bug will be used to improve the overall coverage of PHP Unit Tests | 1.0 | Improve PHP Unit Tests Coverage - <!-- Thank you for suggesting an idea to make things better. Please fill in as much of the template below as you can. -->
**Is your enhancement related to a problem? Please describe.**
<!-- Please describe the problem you are trying to solve. -->
This bug will be used to improve the overall coverage of PHP Unit Tests | test | improve php unit tests coverage is your enhancement related to a problem please describe this bug will be used to improve the overall coverage of php unit tests | 1 |
131,906 | 10,721,977,650 | IssuesEvent | 2019-10-27 08:16:29 | MajkiIT/polish-ads-filter | https://api.github.com/repos/MajkiIT/polish-ads-filter | closed | vod.tvp.pl | błąd reguły gotowe/testowanie | Title: Official Polish Filters for AdBlock, uBlock Origin & AdGuard
Aż dziw, że tego nikt nie zauważył:
Włączony:

Wyłączony:

Brak suwaka wideo i przycisku pełnego ekranu
| 1.0 | vod.tvp.pl - Title: Official Polish Filters for AdBlock, uBlock Origin & AdGuard
Aż dziw, że tego nikt nie zauważył:
Włączony:

Wyłączony:

Brak suwaka wideo i przycisku pełnego ekranu
| test | vod tvp pl title official polish filters for adblock ublock origin adguard aż dziw że tego nikt nie zauważył włączony wyłączony brak suwaka wideo i przycisku pełnego ekranu | 1
285,699 | 31,155,123,877 | IssuesEvent | 2023-08-16 12:40:56 | Trinadh465/linux-4.1.15_CVE-2018-5873 | https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2018-5873 | opened | CVE-2015-8785 (Medium) detected in linuxlinux-4.1.52 | Mend: dependency security vulnerability | ## CVE-2015-8785 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The fuse_fill_write_pages function in fs/fuse/file.c in the Linux kernel before 4.4 allows local users to cause a denial of service (infinite loop) via a writev system call that triggers a zero length for the first segment of an iov.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8785>CVE-2015-8785</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8785">https://nvd.nist.gov/vuln/detail/CVE-2015-8785</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: 4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-8785 (Medium) detected in linuxlinux-4.1.52 - ## CVE-2015-8785 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The fuse_fill_write_pages function in fs/fuse/file.c in the Linux kernel before 4.4 allows local users to cause a denial of service (infinite loop) via a writev system call that triggers a zero length for the first segment of an iov.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8785>CVE-2015-8785</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8785">https://nvd.nist.gov/vuln/detail/CVE-2015-8785</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: 4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files vulnerability details the fuse fill write pages function in fs fuse file c in the linux kernel before allows local users to cause a denial of service infinite loop via a writev system call that triggers a zero length for the first segment of an iov publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
13,460 | 3,728,784,301 | IssuesEvent | 2016-03-07 02:44:30 | autosportlabs/RaceCapture-Pro_firmware | https://api.github.com/repos/autosportlabs/RaceCapture-Pro_firmware | closed | Enhance Virtual Channel methods to include getChannel ability | Documentation Enhancement | Currently it's cumbersome to get data from channels - you have to make local variables at the time of update. It would be cleaner code if getChannel would read Virtual Channel data. | 1.0 | Enhance Virtual Channel methods to include getChannel ability - Currently it's cumbersome to get data from channels - you have to make local variables at the time of update. It would be cleaner code if getChannel would read Virtual Channel data. | non_test | enhance virtual channel methods to include getchannel ability currently it s cumbersome to get data from channels you have to make local variables at the time of update it would be cleaner code if getchannel would read virtual channel data | 0 |
65,738 | 6,973,944,793 | IssuesEvent | 2017-12-11 22:22:50 | elastic/logstash | https://api.github.com/repos/elastic/logstash | opened | org.logstash.config.ir.graph.GraphTest > complexConsistencyTest FAILED | test failure | CI Build: https://logstash-ci.elastic.co/job/elastic+logstash+6.1+multijob-unix-compatibility/os=oraclelinux/15/console
Failure:
```
20:53:46 org.logstash.config.ir.graph.GraphTest > complexConsistencyTest FAILED
20:53:46 org.junit.ComparisonFailure: expected:<[6de45043af6004f983d0b17aa80b6f98bf68dfd2591d3aa2086df9e65041dd4a]> but was:<[76d98686a05dd3786b26787b7138113152797e71e709692523aed2f5013d6a85]>
20:53:46 at org.junit.Assert.assertEquals(Assert.java:115)
20:53:46 at org.junit.Assert.assertEquals(Assert.java:144)
20:53:46 at org.logstash.config.ir.graph.GraphTest.complexConsistencyTest(GraphTest.java:89)
``` | 1.0 | org.logstash.config.ir.graph.GraphTest > complexConsistencyTest FAILED - CI Build: https://logstash-ci.elastic.co/job/elastic+logstash+6.1+multijob-unix-compatibility/os=oraclelinux/15/console
Failure:
```
20:53:46 org.logstash.config.ir.graph.GraphTest > complexConsistencyTest FAILED
20:53:46 org.junit.ComparisonFailure: expected:<[6de45043af6004f983d0b17aa80b6f98bf68dfd2591d3aa2086df9e65041dd4a]> but was:<[76d98686a05dd3786b26787b7138113152797e71e709692523aed2f5013d6a85]>
20:53:46 at org.junit.Assert.assertEquals(Assert.java:115)
20:53:46 at org.junit.Assert.assertEquals(Assert.java:144)
20:53:46 at org.logstash.config.ir.graph.GraphTest.complexConsistencyTest(GraphTest.java:89)
``` | test | org logstash config ir graph graphtest complexconsistencytest failed ci build failure org logstash config ir graph graphtest complexconsistencytest failed org junit comparisonfailure expected but was at org junit assert assertequals assert java at org junit assert assertequals assert java at org logstash config ir graph graphtest complexconsistencytest graphtest java | 1 |
107,089 | 11,517,264,843 | IssuesEvent | 2020-02-14 07:54:14 | sailuh/topicflow | https://api.github.com/repos/sailuh/topicflow | opened | add example input dataset to run.py | documentation | There is currently a heavy convention on both folder organization, columns and file names. While this will be available in github.com/sailuh/kaona, example input data to run.py should be included here for future reference. | 1.0 | add example input dataset to run.py - There is currently a heavy convention on both folder organization, columns and file names. While this will be available in github.com/sailuh/kaona, example input data to run.py should be included here for future reference. | non_test | add example input dataset to run py there is currently a heavy convention on both folder organization columns and file names while this will be available in github com sailuh kaona example input data to run py should be included here for future reference | 0 |
12,563 | 20,238,162,875 | IssuesEvent | 2022-02-14 05:59:23 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | autodiscoverFilter with commas | type:bug priority-3-normal breaking status:requirements | This seems to be breaking our setup where we define the filter similar to this:
```
export RENOVATE_AUTODISCOVER_FILTER="project/{a,b,c,d,*-app}"
```
This gets mapped to the following value in 31.2.2
```
"autodiscoverFilter": "project/{a,b,c,d,*-app}"
```
But it breaks to the following after that.
```
"autodiscoverFilter": [
"project/{a",
"b",
"c",
"d",
"*-app}"
]
```
_Originally posted by @cascornelissen in https://github.com/renovatebot/renovate/issues/13100#issuecomment-996645166_ | 1.0 | autodiscoverFilter with commas - This seems to be breaking our setup where we define the filter similar to this:
```
export RENOVATE_AUTODISCOVER_FILTER="project/{a,b,c,d,*-app}"
```
This gets mapped to the following value in 31.2.2
```
"autodiscoverFilter": "project/{a,b,c,d,*-app}"
```
But it breaks to the following after that.
```
"autodiscoverFilter": [
"project/{a",
"b",
"c",
"d",
"*-app}"
]
```
_Originally posted by @cascornelissen in https://github.com/renovatebot/renovate/issues/13100#issuecomment-996645166_ | non_test | autodiscoverfilter with commas this seems to be breaking our setup where we define the filter similar to this export renovate autodiscover filter project a b c d app this gets mapped to the following value in autodiscoverfilter project a b c d app but it breaks to the following after that autodiscoverfilter project a b c d app originally posted by cascornelissen in | 0 |
33,653 | 2,770,707,488 | IssuesEvent | 2015-05-01 16:29:54 | digitalbazaar/loginhub | https://api.github.com/repos/digitalbazaar/loginhub | opened | Build and run instructions | enhancement medium priority | The README.md needs more information on:
- [ ] What the software does
- [ ] The functionality of the software
- [ ] How you run the software in a dev environment
- [ ] The REST API for the software | 1.0 | Build and run instructions - The README.md needs more information on:
- [ ] What the software does
- [ ] The functionality of the software
- [ ] How you run the software in a dev environment
- [ ] The REST API for the software | non_test | build and run instructions the readme md needs more information on what the software does the functionality of the software how you run the software in a dev environment the rest api for the software | 0 |
218,090 | 16,943,403,103 | IssuesEvent | 2021-06-28 00:40:00 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | [Remoto] Pessoa Engenheira de Software Pleno @ VAGAS.com | AWS CLT Java Kubernetes Pleno Python Redis Remoto Rest Ruby Stale TDD Testes automatizados | ## Nossa empresa
Nossos colaboradores estão trabalhando em regime de home office e permanecerão assim por tempo indeterminado. Por isso, os processos seletivos são 100% remotos.
## Descrição da vaga
A equipe de Backend tem como missão entregar aplicações com a melhor performance possível proporcionando uma boa experiência para os usuários.
O seu desafio dentro do time será participar do desenvolvimento de sistemas com as melhores práticas, tendo como foco: escalabilidade e manutenibilidade.
Quer fazer parte de um time engajado, que busca aprendizado contínuo e curte resolver problemas? Então, inscreva-se!
\#vemserVAGAS
## Local
Remoto ou Escritório, São Paulo - Berrini
## Requisitos
#### Como será seu dia a dia
- Apoiará os times de desenvolvimento no desenho da arquitetura de software / produto
- Irá modularizar nossos produtos em microsserviços e APIs
- Desenvolverá aplicações de alta performance, garantindo que os componentes desenvolvidos sejam escaláveis, performáticos e resilientes
- Garantirá a qualidade do código, incluindo boas práticas de desenvolvimento, como TDD, Code Reviews etc
- Fará Integração Contínua e Deploy Contínuo
- Participará de equipes multifuncionais, interagindo com profissionais de outras áreas da empresa
- Participará de todo o ciclo de desenvolvimento do produto, passando, inclusive, pelas fases de concepção e análise de mercado
- Fará análise de riscos e impactos no desenvolvimento do código (Profiling, Debug)
- Construirá aplicações para atender o mercado de recrutamento e seleção
- Compartilhará conhecimento e apoiará no desenvolvimento dos seus pares
#### O que buscamos em você
- Fluência na linguagem de programação orientada a objetos (Ruby ou Java ou C# ou Python)
- Experiência com Integração Contínua e Deploy Contínuo
- Ter desenvolvido APIs REST
- Experiência em arquitetura orientada a eventos e em testes automatizados
- Experiência em arquitetura de software
- Ter trabalhado com Design Patterns e boas práticas de programação (SOLID, DRY, YAGNI, KISS)
- Ser proativa(o) e ter bom relacionamento interpessoal
- Conhecimento em arquitetura de microsserviços
- Experiência com documentação de software
- Ter atuado com metodologias ágeis
- Ter visão sistêmica
#### Plus
- Conhecimento em programação funcional
- Conhecimento de Kubernetes (EKS/ECS)
- Conhecimento em cloud (AWS)
- Conhecimento em bancos de dados não relacionais, principalmente ElasticSearch e Redis
- Conhecimento em streaming de dados (Kafka, Kinesis, Nifi)
## Benefícios
- Assistência médica
- Assistência odontológica
- Auxílio-academia
- Auxílio-creche
- Auxílio-desenvolvimento
- Auxílio-estacionamento
- Seguro de vida
- Vale-refeição
- Vale-transporte
- Auxílio-fretado
- Café da manhã
- Cesta de natal
- Consignado
- Ginástica laboral
- Massoterapia
- Home office
## Contratação
CLT
## Como se candidatar
Se inscreva através do nosso site:
- https://trabalheconosco.vagas.com.br/vagas/oportunidade/pessoa-engenheira-de-software-pleno-backend/2160836
## Labels
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Pleno
| 1.0 | [Remoto] Pessoa Engenheira de Software Pleno @ VAGAS.com - ## Nossa empresa
Nossos colaboradores estão trabalhando em regime de home office e permanecerão assim por tempo indeterminado. Por isso, os processos seletivos são 100% remotos.
## Descrição da vaga
A equipe de Backend tem como missão entregar aplicações com a melhor performance possível proporcionando uma boa experiência para os usuários.
O seu desafio dentro do time será participar do desenvolvimento de sistemas com as melhores práticas, tendo como foco: escalabilidade e manutenibilidade.
Quer fazer parte de um time engajado, que busca aprendizado contínuo e curte resolver problemas? Então, inscreva-se!
\#vemserVAGAS
## Local
Remoto ou Escritório, São Paulo - Berrini
## Requisitos
#### Como será seu dia a dia
- Apoiará os times de desenvolvimento no desenho da arquitetura de software / produto
- Irá modularizar nossos produtos em microsserviços e APIs
- Desenvolverá aplicações de alta performance, garantindo que os componentes desenvolvidos sejam escaláveis, performáticos e resilientes
- Garantirá a qualidade do código, incluindo boas práticas de desenvolvimento, como TDD, Code Reviews etc
- Fará Integração Contínua e Deploy Contínuo
- Participará de equipes multifuncionais, interagindo com profissionais de outras áreas da empresa
- Participará de todo o ciclo de desenvolvimento do produto, passando, inclusive, pelas fases de concepção e análise de mercado
- Fará análise de riscos e impactos no desenvolvimento do código (Profiling, Debug)
- Construirá aplicações para atender o mercado de recrutamento e seleção
- Compartilhará conhecimento e apoiará no desenvolvimento dos seus pares
#### O que buscamos em você
- Fluência na linguagem de programação orientada a objetos (Ruby ou Java ou C# ou Python)
- Experiência com Integração Contínua e Deploy Contínuo
- Ter desenvolvido APIs REST
- Experiência em arquitetura orientada a eventos e em testes automatizados
- Experiência em arquitetura de software
- Ter trabalhado com Design Patterns e boas práticas de programação (SOLID, DRY, YAGNI, KISS)
- Ser proativa(o) e ter bom relacionamento interpessoal
- Conhecimento em arquitetura de microsserviços
- Experiência com documentação de software
- Ter atuado com metodologias ágeis
- Ter visão sistêmica
#### Plus
- Conhecimento em programação funcional
- Conhecimento de Kubernetes (EKS/ECS)
- Conhecimento em cloud (AWS)
- Conhecimento em bancos de dados não relacionais, principalmente ElasticSearch e Redis
- Conhecimento em streaming de dados (Kafka, Kinesis, Nifi)
## Benefícios
- Assistência médica
- Assistência odontológica
- Auxílio-academia
- Auxílio-creche
- Auxílio-desenvolvimento
- Auxílio-estacionamento
- Seguro de vida
- Vale-refeição
- Vale-transporte
- Auxílio-fretado
- Café da manhã
- Cesta de natal
- Consignado
- Ginástica laboral
- Massoterapia
- Home office
## Contratação
CLT
## Como se candidatar
Se inscreva através do nosso site:
- https://trabalheconosco.vagas.com.br/vagas/oportunidade/pessoa-engenheira-de-software-pleno-backend/2160836
## Labels
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Pleno
| test | pessoa engenheira de software pleno vagas com nossa empresa nossos colaboradores estão trabalhando em regime de home office e permanecerão assim por tempo indeterminado por isso os processos seletivos são remotos descrição da vaga a equipe de backend tem como missão entregar aplicações com a melhor performance possível proporcionando uma boa experiência para os usuários o seu desafio dentro do time será participar do desenvolvimento de sistemas com as melhores práticas tendo como foco escalabilidade e manutenibilidade quer fazer parte de um time engajado que busca aprendizado contínuo e curte resolver problemas então inscreva se vemservagas local remoto ou escritório são paulo berrini requisitos como será seu dia a dia apoiará os times de desenvolvimento no desenho da arquitetura de software produto irá modularizar nossos produtos em microsserviços e apis desenvolverá aplicações de alta performance garantindo que os componentes desenvolvidos sejam escaláveis performáticos e resilientes garantirá a qualidade do código incluindo boas práticas de desenvolvimento como tdd code reviews etc fará integração contínua e deploy contínuo participará de equipes multifuncionais interagindo com profissionais de outras áreas da empresa participará de todo o ciclo de desenvolvimento do produto passando inclusive pelas fases de concepção e análise de mercado fará análise de riscos e impactos no desenvolvimento do código profiling debug construirá aplicações para atender o mercado de recrutamento e seleção compartilhará conhecimento e apoiará no desenvolvimento dos seus pares o que buscamos em você fluência na linguagem de programação orientada a objetos ruby ou java ou c ou python experiência com integração contínua e deploy contínuo ter desenvolvido apis rest experiência em arquitetura orientada a eventos e em testes automatizados experiência em arquitetura de software ter trabalhado com design patterns e boas práticas de programação solid dry yagni kiss ser proativa o e ter bom relacionamento interpessoal conhecimento em arquitetura de microsserviços experiência com documentação de software ter atuado com metodologias ágeis ter visão sistêmica plus conhecimento em programação funcional conhecimento de kubernetes eks ecs conhecimento em cloud aws conhecimento em bancos de dados não relacionais principalmente elasticsearch e redis conhecimento em streaming de dados kafka kinesis nifi benefícios assistência médica assistência odontológica auxílio academia auxílio creche auxílio desenvolvimento auxílio estacionamento seguro de vida vale refeição vale transporte auxílio fretado café da manhã cesta de natal consignado ginástica laboral massoterapia home office contratação clt como se candidatar se inscreva através do nosso site labels alocação remoto regime clt nível pleno | 1 |
280,655 | 8,684,753,678 | IssuesEvent | 2018-12-03 04:07:13 | sailplan/anchor-watch | https://api.github.com/repos/sailplan/anchor-watch | opened | A user can adjust the position of the anchor by dragging the map | enhancement priority:SHOULD | After the anchor has been dropped, you should be able to edit the anchor, reposition the map, and re-drop it. | 1.0 | A user can adjust the position of the anchor by dragging the map - After the anchor has been dropped, you should be able to edit the anchor, reposition the map, and re-drop it. | non_test | a user can adjust the position of the anchor by dragging the map after the anchor has been dropped you should be able to edit the anchor reposition the map and re drop it | 0 |
516,556 | 14,983,786,710 | IssuesEvent | 2021-01-28 17:40:42 | airshipit/airshipctl | https://api.github.com/repos/airshipit/airshipctl | closed | Missing plugin images are handled as document errors | 1-Core bug priority/medium | **Describe the bug**
If a plugin image is missing, airshipctl will not halt and display an error; it will continue to attempt to process documents, causing misleading Kustomize errors.
```
[airshipctl] 2020/11/06 16:23:23 opendev.org/airship/airshipctl@/pkg/cluster/clustermap/map.go:61: cluster is not defined in cluster map &{{ClusterMap airshipit.org/v1alpha1} {main-map 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[airshipit.org/deploy-k8s:false] map[] [] [] []} map[ephemeral-cluster:0xc0008d5b30 target-cluster:0xc0008d5b60]}
Unable to find image 'quay.io/airshipit/templater:dev' locally
docker: Error response from daemon: manifest for quay.io/airshipit/templater:dev not found: manifest unknown: manifest unknown.
See 'docker run --help'.
accumulating resources: accumulateFile "accumulating resources from '../catalogues': '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/ephemeral/catalogues' must resolve to a file", accumulateDirector: "recursed accumulation of path '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/ephemeral/catalogues': accumulating resources: accumulateFile \"accumulating resources from '../../target/catalogues': '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/target/catalogues' must resolve to a file\", accumulateDirector: \"recursed accumulation of path '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/target/catalogues': accumulating resources: accumulateFile \\\"accumulating resources from '../../../../type/airship-core/shared/catalogues': '/home/ubuntu/opendev.org/airship/treasuremap/manifests/type/airship-core/shared/catalogues' must resolve to a file\\\", accumulateDirector: \\\"recursed accumulation of path '/home/ubuntu/opendev.org/airship/treasuremap/manifests/type/airship-core/shared/catalogues': accumulating resources: accumulateFile \\\\\\\"accumulating resources from '../../../../../../airshipctl/manifests/function/airshipctl-base-catalogues': '/home/ubuntu/opendev.org/airship/airshipctl/manifests/function/airshipctl-base-catalogues' must resolve to a file\\\\\\\", accumulateDirector: \\\\\\\"recursed accumulation of path '/home/ubuntu/opendev.org/airship/airshipctl/manifests/function/airshipctl-base-catalogues': couldn't execute function: exit status 125\\\\\\\"\\\"\""
```
**Steps To Reproduce**
1. Remove local Docker images
2. Run `airshipctl phase run bootstrap`
**Expected behavior**
Airshipctl should produce a relevant, actionable error for the user. Something like "Plugin image `quay.io/airshipit/templater:dev` not found". Airshipctl should not show Kustomize errors when plugin images are missing.
| 1.0 | Missing plugin images are handled as document errors - **Describe the bug**
If a plugin image is missing, airshipctl will not halt and display an error; it will continue to attempt to process documents, causing misleading Kustomize errors.
```
[airshipctl] 2020/11/06 16:23:23 opendev.org/airship/airshipctl@/pkg/cluster/clustermap/map.go:61: cluster is not defined in cluster map &{{ClusterMap airshipit.org/v1alpha1} {main-map 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[airshipit.org/deploy-k8s:false] map[] [] [] []} map[ephemeral-cluster:0xc0008d5b30 target-cluster:0xc0008d5b60]}
Unable to find image 'quay.io/airshipit/templater:dev' locally
docker: Error response from daemon: manifest for quay.io/airshipit/templater:dev not found: manifest unknown: manifest unknown.
See 'docker run --help'.
accumulating resources: accumulateFile "accumulating resources from '../catalogues': '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/ephemeral/catalogues' must resolve to a file", accumulateDirector: "recursed accumulation of path '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/ephemeral/catalogues': accumulating resources: accumulateFile \"accumulating resources from '../../target/catalogues': '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/target/catalogues' must resolve to a file\", accumulateDirector: \"recursed accumulation of path '/home/ubuntu/opendev.org/airship/treasuremap/manifests/site/test-site/target/catalogues': accumulating resources: accumulateFile \\\"accumulating resources from '../../../../type/airship-core/shared/catalogues': '/home/ubuntu/opendev.org/airship/treasuremap/manifests/type/airship-core/shared/catalogues' must resolve to a file\\\", accumulateDirector: \\\"recursed accumulation of path '/home/ubuntu/opendev.org/airship/treasuremap/manifests/type/airship-core/shared/catalogues': accumulating resources: accumulateFile \\\\\\\"accumulating resources from '../../../../../../airshipctl/manifests/function/airshipctl-base-catalogues': '/home/ubuntu/opendev.org/airship/airshipctl/manifests/function/airshipctl-base-catalogues' must resolve to a file\\\\\\\", accumulateDirector: \\\\\\\"recursed accumulation of path '/home/ubuntu/opendev.org/airship/airshipctl/manifests/function/airshipctl-base-catalogues': couldn't execute function: exit status 125\\\\\\\"\\\"\""
```
**Steps To Reproduce**
1. Remove local Docker images
2. Run `airshipctl phase run bootstrap`
**Expected behavior**
Airshipctl should produce a relevant, actionable error for the user. Something like "Plugin image `quay.io/airshipit/templater:dev` not found". Airshipctl should not show Kustomize errors when plugin images are missing.
| non_test | missing plugin images are handled as document errors describe the bug if a plugin image is missing airshipctl will not halt and display an error it will continue to attempt to process documents causing misleading kustomize errors opendev org airship airshipctl pkg cluster clustermap map go cluster is not defined in cluster map clustermap airshipit org main map utc map map map unable to find image quay io airshipit templater dev locally docker error response from daemon manifest for quay io airshipit templater dev not found manifest unknown manifest unknown see docker run help accumulating resources accumulatefile accumulating resources from catalogues home ubuntu opendev org airship treasuremap manifests site test site ephemeral catalogues must resolve to a file accumulatedirector recursed accumulation of path home ubuntu opendev org airship treasuremap manifests site test site ephemeral catalogues accumulating resources accumulatefile accumulating resources from target catalogues home ubuntu opendev org airship treasuremap manifests site test site target catalogues must resolve to a file accumulatedirector recursed accumulation of path home ubuntu opendev org airship treasuremap manifests site test site target catalogues accumulating resources accumulatefile accumulating resources from type airship core shared catalogues home ubuntu opendev org airship treasuremap manifests type airship core shared catalogues must resolve to a file accumulatedirector recursed accumulation of path home ubuntu opendev org airship treasuremap manifests type airship core shared catalogues accumulating resources accumulatefile accumulating resources from airshipctl manifests function airshipctl base catalogues home ubuntu opendev org airship airshipctl manifests function airshipctl base catalogues must resolve to a file accumulatedirector recursed accumulation of path home ubuntu opendev org airship airshipctl manifests function airshipctl base catalogues couldn t execute 
function exit status steps to reproduce remove local docker images run airshipctl phase run bootstrap expected behavior airshipctl should produce a relevant actionable error for the user something like plugin image quay io airshipit templater dev not found airshipctl should not show kustomize errors when plugin images are missing | 0 |
435,549 | 30,506,943,730 | IssuesEvent | 2023-07-18 17:36:54 | woocommerce/woocommerce-blocks | https://api.github.com/repos/woocommerce/woocommerce-blocks | opened | Optimise documentation of `Snackbar notices` | type: documentation | Currently, there are the following two filters for `Snackbar notices`:
- `showApplyCouponNotice`
- `showRemoveCouponNotice`
While both filters are mentioned on https://github.com/woocommerce/woocommerce-blocks/blob/trunk/docs/third-party-developers/extensibility/checkout-block/available-filters.md, there's no designated section for `Snackbar notices`.
This issue aims to create a designated section for the `Snackbar notices` filters. | 1.0 | Optimise documentation of `Snackbar notices` - Currently, there are the following two filters for `Snackbar notices`:
- `showApplyCouponNotice`
- `showRemoveCouponNotice`
While both filters are mentioned on https://github.com/woocommerce/woocommerce-blocks/blob/trunk/docs/third-party-developers/extensibility/checkout-block/available-filters.md, there's no designated section for `Snackbar notices`.
This issue aims to create a designated section for the `Snackbar notices` filters. | non_test | optimise documentation of snackbar notices currently there are the following two filters for snackbar notices showapplycouponnotice showremovecouponnotice while both filters are mentioned on there s no designated section for snackbar notices this issue aims to create a designated section for the snackbar notices filters | 0 |
201,032 | 15,170,824,850 | IssuesEvent | 2021-02-13 00:26:28 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | [REMOTE] Back-end Developer Sênior @Agriness | CLT JavaScript Python Remoto SQL Stale Testes automatizados | <!--
==================================================
Caso a vaga for remoto durante a pandemia informar no texto "Remoto durante o covid"
==================================================
-->
<!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA BACK-END!
Não faça distinção de gênero no título da vaga.
Use: "Back-End Developer" ao invés de
"Desenvolvedor Back-End" \o/
Exemplo: `[São Paulo] Back-End Developer @ NOME DA EMPRESA`
==================================================
-->
<!--
==================================================
Caso a vaga for remoto durante a pandemia deixar a linha abaixo
==================================================
-->
> Vaga Remota durante a pandemia
## Nossa empresa
Somos uma empresa de gestão da produção animal que tem como propósito impulsionar a prosperidade no campo. Trabalhamos muito para tornar o agronegócio mais produtivo e rentável, não somente no Brasil, mas no mundo. Estamos em constante crescimento e expansão. Nosso desafio: escalar a Agriness para o mundo!
Para isso, precisamos de um time que faça acontecer!
## Descrição da vaga
Estamos em busca de um(a) Desenvolvedor(a) Backend para atuar em uma de nossas squads. Aqui você irá trabalhar no time de Engenharia de Software, lado a lado de Product Managers, Designers, Analistas e QA's para planejar e implementar novas funcionalidades e melhorias tanto na plataforma quanto em nossos produtos. Atuando em um Squad você participa na implementação de soluções escaláveis que atendam aos critérios de qualidade da empresa e que entregam valor aos nossos clientes.
Sendo mais específico, o Desenvolvedor irá:
- Interagir e influenciar outros desenvolvedores e integrantes da squad para definição e implementação das soluções planejadas.
- Disseminar boas práticas a todos integrantes da squad, buscando a otimização de processos evitando atividades manuais e repetitivas.
- Interagir com os PMs e Designers para alinhamento sobre a implementação de novas soluções, funcionalidades ou melhorias no produto S4.
- Implementar código seguindo os padrões de qualidade definidos pela área, através de uso de testes automatizados, processos de integração contínua e revisão de código.
- Fazer entregas de forma contínua, buscando manter a cadência necessária para atendimento dos acordos firmados, interagindo também com outros squads no alinhamento de entregas evitando geração de dependências.
- Buscar a implementação de soluções multicamadas em uma arquitetura de microservices, utilizando para isso tecnologias já adotadas pela empresa, tais como Python e Javascript.
## Local
Remoto ou Escritório em Florianópolis/SC
## Requisitos
**Obrigatórios:**
Skills comportamentais
- Capacidade de aprender regras de negócio rapidamente.
- Capacidade de estruturar projetos técnicos.
Skills técnicas
- Ensino superior em áreas de tecnologia.
- Domínio da linguagem Python.
- Conhecimento na definição e construção de soluções para Cloud.
- Conhecimento no uso de testes automatizados.
- Conhecimento na construção de API's e micro-serviços.
- Conhecimento em temas como: clean code, design patterns, message queuing, containers, continuous integrations e continuous delivery.
- Conhecimento em bancos de dados relacionais e linguagem SQL.
**Desejáveis:**
- Conhecimento em tecnologias de frontend.
- Inglês Avançado
## Benefícios
- Oportunidade de trabalhar em uma empresa que te desafia, motiva, incentiva e dá liberdade para questionar e contribuir com novas ideias.
- Muita oportunidade de desenvolvimento!
- Ginástica Laboral duas vezes por semana.
- No dress code.
- Previdência Privada.
- Folga no dia do aniversário.
- Parcerias de descontos com convênios.
- Pacote de benefícios tradicionais: VR/VA, Auxílio Combustível, Plano de Saúde.
- Possibilidade de trabalhar remoto pós-pandemia.
## Contratação
CLT
## Como se candidatar
Por favor envie um e-mail para pertencer@agriness.com com seu CV anexado - enviar no assunto: Vaga Back-end
## Labels
<!-- retire os labels que não fazem sentido à vaga -->
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Sênior
1.0 | [REMOTE] Back-end Developer Sênior @Agriness - <!--
==================================================
Caso a vaga for remoto durante a pandemia informar no texto "Remoto durante o covid"
==================================================
-->
<!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA BACK-END!
Não faça distinção de gênero no título da vaga.
Use: "Back-End Developer" ao invés de
"Desenvolvedor Back-End" \o/
Exemplo: `[São Paulo] Back-End Developer @ NOME DA EMPRESA`
==================================================
-->
<!--
==================================================
Caso a vaga for remoto durante a pandemia deixar a linha abaixo
==================================================
-->
> Vaga Remota durante a pandemia
## Nossa empresa
Somos uma empresa de gestão da produção animal que tem como propósito impulsionar a prosperidade no campo. Trabalhamos muito para tornar o agronegócio mais produtivo e rentável, não somente no Brasil, mas no mundo. Estamos em constante crescimento e expansão. Nosso desafio: escalar a Agriness para o mundo!
Para isso, precisamos de um time que faça acontecer!
## Descrição da vaga
Estamos em busca de um(a) Desenvolvedor(a) Backend para atuar em uma de nossas squads. Aqui você irá trabalhar no time de Engenharia de Software, lado a lado de Product Managers, Designers, Analistas e QA's para planejar e implementar novas funcionalidades e melhorias tanto na plataforma quanto em nossos produtos. Atuando em um Squad você participa na implementação de soluções escaláveis que atendam aos critérios de qualidade da empresa e que entregam valor aos nossos clientes.
Sendo mais específico, o Desenvolvedor irá:
- Interagir e influenciar outros desenvolvedores e integrantes da squad para definição e implementação das soluções planejadas.
- Disseminar boas práticas a todos integrantes da squad, buscando a otimização de processos evitando atividades manuais e repetitivas.
- Interagir com os PMs e Designers para alinhamento sobre a implementação de novas soluções, funcionalidades ou melhorias no produto S4.
- Implementar código seguindo os padrões de qualidade definidos pela área, através de uso de testes automatizados, processos de integração contínua e revisão de código.
- Fazer entregas de forma contínua, buscando manter a cadência necessária para atendimento dos acordos firmados, interagindo também com outros squads no alinhamento de entregas evitando geração de dependências.
- Buscar a implementação de soluções multicamadas em uma arquitetura de microservices, utilizando para isso tecnologias já adotadas pela empresa, tais como Python e Javascript.
## Local
Remoto ou Escritório em Florianópolis/SC
## Requisitos
**Obrigatórios:**
Skills comportamentais
- Capacidade de aprender regras de negócio rapidamente.
- Capacidade de estruturar projetos técnicos.
Skills técnicas
- Ensino superior em áreas de tecnologia.
- Domínio da linguagem Python.
- Conhecimento na definição e construção de soluções para Cloud.
- Conhecimento no uso de testes automatizados.
- Conhecimento na construção de API's e micro-serviços.
- Conhecimento em temas como: clean code, design patterns, message queuing, containers, continuous integrations e continuous delivery.
- Conhecimento em bancos de dados relacionais e linguagem SQL.
**Desejáveis:**
- Conhecimento em tecnologias de frontend.
- Inglês Avançado
## Benefícios
- Oportunidade de trabalhar em uma empresa que te desafia, motiva, incentiva e dá liberdade para questionar e contribuir com novas ideias.
- Muita oportunidade de desenvolvimento!
- Ginástica Laboral duas vezes por semana.
- No dress code.
- Previdência Privada.
- Folga no dia do aniversário.
- Parcerias de descontos com convênios.
- Pacote de benefícios tradicionais: VR/VA, Auxílio Combustível, Plano de Saúde.
- Possibilidade de trabalhar remoto pós-pandemia.
## Contratação
CLT
## Como se candidatar
Por favor envie um e-mail para pertencer@agriness.com com seu CV anexado - enviar no assunto: Vaga Back-end
## Labels
<!-- retire os labels que não fazem sentido à vaga -->
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Sênior
| test | back end developer sรชnior agriness caso a vaga for remoto durante a pandemia informar no texto remoto durante o covid por favor sรณ poste se a vaga for para back end nรฃo faรงa distinรงรฃo de gรชnero no tรญtulo da vaga use back end developer ao invรฉs de desenvolvedor back end o exemplo back end developer nome da empresa caso a vaga for remoto durante a pandemia deixar a linha abaixo vaga remota durante a pandemia nossa empresa somos uma empresa de gestรฃo da produรงรฃo animal que tem como propรณsito impulsionar a prosperidade no campo trabalhamos muito para tornar o agronegรณcio mais produtivo e rentรกvel nรฃo somente no brasil mas no mundo estamos em constante crescimento e expansรฃo nosso desafio escalar a agriness para o mundo para isso precisamos de um time que faรงa acontecer descriรงรฃo da vaga estamos em busca de um a desenvolvedor a backend para atuar em uma de nossas squads aqui vocรช irรก trabalhar no time de engenharia de software lado a lado de product managers designers analistas e qaโs para planejar e implementar novas funcionalidades e melhorias tanto na plataforma quanto em nossos produtos atuando em um squad vocรช participa na implementaรงรฃo de soluรงรตes escalรกveis que atendam aos critรฉrios de qualidade da empresa e que entregam valor aos nossos clientes sendo mais especรญfico o desenvolvedor irรก interagir e influenciar outros desenvolvedores e integrantes da squad para definiรงรฃo e implementaรงรฃo das soluรงรตes planejadas disseminar boas prรกticas a todos integrantes da squad buscando a otimizaรงรฃo de processos evitando atividades manuais e repetitivas interagir com os pms e designers para alinhamento sobre a implementaรงรฃo de novas soluรงรตes funcionalidades ou melhorias no produto implementar cรณdigo seguindo os padrรตes de qualidade definidos pela รกrea atravรฉs de uso de testes automatizados processos de integraรงรฃo contรญnua e revisรฃo de cรณdigo fazer entregas de forma contรญnua buscando manter a cadรชncia necessรกria 
para atendimento dos acordos firmados interagindo tambรฉm com outros squads no alinhamento de entregas evitando geraรงรฃo de dependรชncias buscar a implementaรงรฃo de soluรงรตes multicamadas em uma arquitetura de microservices utilizando para isso tecnologias jรก adotadas pela empresa tais como python e javascript local remoto ou escritรณrio em florianรณpolis sc requisitos obrigatรณrios skills comportamentais capacidade de aprender regras de negรณcio rapidamente capacidade de estruturar projetos tรฉcnicos skills tรฉcnicas ensino superior em รกreas de tecnologia domรญnio da linguagem python conhecimento na definiรงรฃo e construรงรฃo de soluรงรตes para cloud conhecimento no uso de testes automatizados conhecimento na construรงรฃo de apiโs e micro serviรงos conhecimento em temas como clean code design patterns message queuing containers continuous integrations e continuous delivery conhecimento em bancos de dados relacionais e linguagem sql desejรกveis conhecimento em tecnologias de frontend inglรชs avanรงado benefรญcios oportunidade de trabalhar em uma empresa que te desafia motiva incentiva e dรก liberdade para questionar e contribuir com novas ideias muita oportunidade de desenvolvimento ginรกstica laboral duas vezes por semana no dress code previdรชncia privada folga no dia do aniversรกrio parcerias de descontos com convรชnios pacote de benefรญcios tradicionais vr va auxรญlio combustรญvel plano de saรบde possibilidade de trabalhar remoto pรณs pandemia contrataรงรฃo clt como se candidatar por favor envie um e mail para pertencer agriness com com seu cv anexado enviar no assunto vaga back end labels alocaรงรฃo remoto regime clt nรญvel sรชnior | 1 |
820,005 | 30,757,528,200 | IssuesEvent | 2023-07-29 08:47:00 | code4romania/asistent-medical-comunitar | https://api.github.com/repos/code4romania/asistent-medical-comunitar | closed | [Beneficiari/Gospodฤrii] change ''Vezi household'' modal screen title | medium-priority | **Describe the bug**
Wrong modal screen title. When you open an individual household you can notice that the modal screen title is ''Vezi household''
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Listฤ beneficiari section
2. Go to Gospodฤrii tab
3. Click on a household
4. Notice the modal screen title

**Expected behavior**
Change modal screen title from ''Vezi household'' to ''Vezi gospodฤrie''
| 1.0 | [Beneficiari/Gospodฤrii] change ''Vezi household'' modal screen title - **Describe the bug**
Wrong modal screen title. When you open an individual household you can notice that the modal screen title is ''Vezi household''
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Listฤ beneficiari section
2. Go to Gospodฤrii tab
3. Click on a household
4. Notice the modal screen title

**Expected behavior**
Change modal screen title from ''Vezi household'' to ''Vezi gospodฤrie''
| non_test | change vezi household modal screen title describe the bug wrong modal screen title when you open an individual household you can notice that the modal screen title is vezi household to reproduce steps to reproduce the behavior go to listฤ beneficiari section go to gospodฤrii tab click on a household notice the modal screen title expected behavior change modal screen title from vezi household to vezi gospodฤrie | 0 |
151,267 | 23,790,245,361 | IssuesEvent | 2022-09-02 13:56:22 | audacity/audacity | https://api.github.com/repos/audacity/audacity | closed | Progress dialogs for effects (and generators) show erroneous durations | bug P2 Design / UX regression Effects | **Describe the bug**
Progress dialogs for effects (and generators) show erroneous durations
a) initially for a few seconds they show an ever-increasing , and large, forecast Remaining Time duration (with no progress bar)
b) then the progress bar is shown immediately at half way across
c) the forecast Remaining time and the Elapsed time both start with the no. of secs for steps a & b
d) the forecast Remaining time and the Elapsed time increment in step together until the effect or generator finishes
This is a regression on 3.1.3
**To Reproduce**
Steps to reproduce the behavior:
1. add a new Stereo track
2. Generate > Chirp (duration 1 hour)
3. Observe: initially for a few seconds the forecast Remaining Time duration increment to silly levels (with no progress bar)
4. Observe: :after a few seconds the progress bar appears locked at half way across
5. Observe: then the forecast Remaining time and the Elapsed time both start with the no. of secs for steps 2-3
6. Effect > Amplify
7. Observe the same behaviors as in steps 3-5
**Actual behavior**
Progress dialogs for effects (and generators) show erroneous durations - see screenshots
**Expected behavior**
The progress dialogs to behave properly as they do in 3.1.3
**Screenshots**
### Step 3 - generating chirp - Remaining time shows 50 minutes forecast after 3 seconds

### Steps 4 and 5 generating chirp (the generator actually takes c. 8 seconds)

### Similar for Amplify - the effect actually took 17 seconds


**Additional information (please complete the following information):**
- OS: Windows 10 - but assume all OS
- Version [e.g. Audacity 2.5.1]
**Additional context**
I have a vague memory of a similar set of behaviors happening before and they got fixed - and now have returned
@Tantacrul @LWinterberg I have flagged this as a UX issue as the ridiculously large initial remaining time forecast will alarm users and the lack of proper remaining time shown as the effect/generator progresses is also worrisome for the user.
_I stumbled across the in real life today as I was using 3.2.0 latest alpha to work on a two hour project and I fund it worrsome as a user (I was amplifying the show)._
| 1.0 | Progress dialogs for effects (and generators) show erroneous durations - **Describe the bug**
Progress dialogs for effects (and generators) show erroneous durations
a) initially for a few seconds they show an ever-increasing , and large, forecast Remaining Time duration (with no progress bar)
b) then the progress bar is shown immediately at half way across
c) the forecast Remaining time and the Elapsed time both start with the no. of secs for steps a & b
d) the forecast Remaining time and the Elapsed time increment in step together until the effect or generator finishes
This is a regression on 3.1.3
**To Reproduce**
Steps to reproduce the behavior:
1. add a new Stereo track
2. Generate > Chirp (duration 1 hour)
3. Observe: initially for a few seconds the forecast Remaining Time duration increment to silly levels (with no progress bar)
4. Observe: :after a few seconds the progress bar appears locked at half way across
5. Observe: then the forecast Remaining time and the Elapsed time both start with the no. of secs for steps 2-3
6. Effect > Amplify
7. Observe the same behaviors as in steps 3-5
**Actual behavior**
Progress dialogs for effects (and generators) show erroneous durations - see screenshots
**Expected behavior**
The progress dialogs to behave properly as they do in 3.1.3
**Screenshots**
### Step 3 - generating chirp - Remaining time shows 50 minutes forecast after 3 seconds

### Steps 4 and 5 generating chirp (the generator actually takes c. 8 seconds)

### Similar for Amplify - the effect actually took 17 seconds


**Additional information (please complete the following information):**
- OS: Windows 10 - but assume all OS
- Version [e.g. Audacity 2.5.1]
**Additional context**
I have a vague memory of a similar set of behaviors happening before and they got fixed - and now have returned
@Tantacrul @LWinterberg I have flagged this as a UX issue as the ridiculously large initial remaining time forecast will alarm users and the lack of proper remaining time shown as the effect/generator progresses is also worrisome for the user.
_I stumbled across the in real life today as I was using 3.2.0 latest alpha to work on a two hour project and I fund it worrsome as a user (I was amplifying the show)._
| non_test | progress dialogs for effects and generators show erroneous durations describe the bug progress dialogs for effects and generators show erroneous durations a initially for a few seconds they show an ever increasing and large forecast remaining time duration with no progress bar b then the progress bar is shown immediately at half way across c the forecast remaining time and the elapsed time both start with the no of secs for steps a b d the forecast remaining time and the elapsed time increment in step together until the effect or generator finishes this is a regression on to reproduce steps to reproduce the behavior add a new stereo track generate chirp duration hour observe initially for a few seconds the forecast remaining time duration increment to silly levels with no progress bar observe after a few seconds the progress bar appears locked at half way across observe then the forecast remaining time and the elapsed time both start with the no of secs for steps effect amplify observe the same behaviors as in steps actual behavior progress dialogs for effects and generators show erroneous durations see screenshots expected behavior the progress dialogs to behave properly as they do in screenshots step generating chirp remaining time shows minutes forecast after seconds steps and generating chirp the generator actually takes c seconds similar for amplify the effect actually took seconds additional information please complete the following information os windows but assume all os version additional context i have a vague memory of a similar set of behaviors happening before and they got fixed and now have returned tantacrul lwinterberg i have flagged this as a ux issue as the ridiculously large initial remaining time forecast will alarm users and the lack of proper remaining time shown as the effect generator progresses is also worrisome for the user i stumbled across the in real life today as i was using latest alpha to work on a two hour project and i 
fund it worrsome as a user i was amplifying the show | 0 |
159,942 | 12,497,952,513 | IssuesEvent | 2020-06-01 17:24:13 | ForgottenGlory/Living-Skyrim-2 | https://api.github.com/repos/ForgottenGlory/Living-Skyrim-2 | closed | Floating Campfire | bug need testers | **If you are reporting a crash to desktop, please attach your NET Script Framework crash log. This can be found in MO2's Overwrite folder.
If possible, please also attach a copy of your most recent save before the issue occurred.**
**LS Version**
2.0 beta 1
**Describe the bug**
Floating Campfire when choosing "I am camping in the woods" option from alternate start.
**To Reproduce**
Select the option "I am camping in the woods" option from alternate start.
**Expected behavior**
Nothing should be floating in the area.
**Screenshots**



**Additional context**
| 1.0 | Floating Campfire - **If you are reporting a crash to desktop, please attach your NET Script Framework crash log. This can be found in MO2's Overwrite folder.
If possible, please also attach a copy of your most recent save before the issue occurred.**
**LS Version**
2.0 beta 1
**Describe the bug**
Floating Campfire when choosing "I am camping in the woods" option from alternate start.
**To Reproduce**
Select the option "I am camping in the woods" option from alternate start.
**Expected behavior**
Nothing should be floating in the area.
**Screenshots**



**Additional context**
| test | floating campfire if you are reporting a crash to desktop please attach your net script framework crash log this can be found in s overwrite folder if possible please also attach a copy of your most recent save before the issue occurred ls version beta describe the bug floating campfire when choosing i am camping in the woods option from alternate start to reproduce select the option i am camping in the woods option from alternate start expected behavior nothing should be floating in the area screenshots additional context | 1 |
679,633 | 23,240,603,114 | IssuesEvent | 2022-08-03 15:16:23 | DDMAL/cantus | https://api.github.com/repos/DDMAL/cantus | closed | folio numbers get "stuck" on big scrolls | High Priority | If you scroll from one page to another that has no folio number (e.g. the cover of Salzinnes), the field for folio number stays the same as where you started scrolling, and everything else is (correctly) empty. If you then scroll back down to 1r, the mapping resumes correctly from there.
BUT if you scroll to the end and back up page by page, it will keep saying the same thing!

honestly not that common a situation but a weird one. | 1.0 | folio numbers get "stuck" on big scrolls - If you scroll from one page to another that has no folio number (e.g. the cover of Salzinnes), the field for folio number stays the same as where you started scrolling, and everything else is (correctly) empty. If you then scroll back down to 1r, the mapping resumes correctly from there.
BUT if you scroll to the end and back up page by page, it will keep saying the same thing!

honestly not that common a situation but a weird one. | non_test | folio numbers get stuck on big scrolls if you scroll from one page to another that has no folio number e g the cover of salzinnes the field for folio number stays the same as where you started scrolling and everything else is correctly empty if you then scroll back down to the mapping resumes correctly from there but if you scroll to the end and back up page by page it will keep saying the same thing honestly not that common a situation but a weird one | 0 |
344,792 | 10,349,640,167 | IssuesEvent | 2019-09-04 23:18:12 | oslc-op/jira-migration-landfill | https://api.github.com/repos/oslc-op/jira-migration-landfill | closed | Query spec does not define behaviour if both oslc.where and oslc.searchTerms is specified | Core: Query Priority: High Xtra: Jira | [https://open-services.net/bin/view/Main/OSLCCoreSpecQuery](https://open-services.net/bin/view/Main/OSLCCoreSpecQuery) defines the behaviour if both oslc.where and oslc.searchTerms is specified. That description could be referenced earlier in the document to answer a readerโs obvious question when the terms are first mentioned.
In practice, oslc.searchTerms is likely to be a free-text search performed by something like Lucene, whereas oslc.where will either be a SPARQL query or some form of SQL query. Many implementations might choose to support query but not free text search, or query and free text search separately, but mutually exclusive. How does a client discover this?
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-103 (opened by @oslc-bot; previously assigned to @jamsden)_
| 1.0 | Query spec does not define behaviour if both oslc.where and oslc.searchTerms is specified - [https://open-services.net/bin/view/Main/OSLCCoreSpecQuery](https://open-services.net/bin/view/Main/OSLCCoreSpecQuery) defines the behaviour if both oslc.where and oslc.searchTerms is specified. That description could be referenced earlier in the document to answer a readerโs obvious question when the terms are first mentioned.
In practice, oslc.searchTerms is likely to be a free-text search performed by something like Lucene, whereas oslc.where will either be a SPARQL query or some form of SQL query. Many implementations might choose to support query but not free text search, or query and free text search separately, but mutually exclusive. How does a client discover this?
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-103 (opened by @oslc-bot; previously assigned to @jamsden)_
| non_test | query spec does not define behaviour if both oslc where and oslc searchterms is specified defines the behaviour if both oslc where and oslc searchterms is specified that description could be referenced earlier in the document to answer a readerโs obvious question when the terms are first mentioned in practice oslc searchterms is likely to be a free text search performed by something like lucene whereas oslc where will either be a sparql query or some form of sql query many implementations might choose to support query but not free text search or query and free text search separately but mutually exclusive how does a client discover this migrated from opened by oslc bot previously assigned to jamsden | 0 |
54,516 | 6,393,636,982 | IssuesEvent | 2017-08-04 08:05:35 | LiskHQ/lisk-nano | https://api.github.com/repos/LiskHQ/lisk-nano | closed | Add unit tests for sign/verify message | easy test | ### Expected behaviour
There should be unit tests for sign/verify message
### Actual behaviour
There are no unit tests for sign/verify message
| 1.0 | Add unit tests for sign/verify message - ### Expected behaviour
There should be unit tests for sign/verify message
### Actual behaviour
There are no unit tests for sign/verify message
| test | add unit tests for sign verify message expected behaviour there should be unit tests for sign verify message actual behaviour there are no unit tests for sign verify message | 1 |
279,671 | 24,246,014,552 | IssuesEvent | 2022-09-27 10:36:29 | cheminfo/nmrium | https://api.github.com/repos/cheminfo/nmrium | closed | Add drag / drop testcases | Testcases | @wadjih-bencheikh18 could you check how to create a test case (or if it exists already) of a drag / drop of a file or files.
You could try to drag / drop one file and the spectrum should be displayed and there should be one line in the panel 'spectra' on the right.
You could also drag / drop many files and check that many spectra are loaded.
You could use the examples in the project or the attached file (but it would add more files in the project so this is not optimal).
[ethylvinylether.zip](https://github.com/cheminfo/nmrium/files/9524941/ethylvinylether.zip)
| 1.0 | Add drag / drop testcases - @wadjih-bencheikh18 could you check how to create a test case (or if it exists already) of a drag / drop of a file or files.
You could try to drag / drop one file and the spectrum should be displayed and there should be one line in the panel 'spectra' on the right.
You could also drag / drop many files and check that many spectra are loaded.
You could use the examples in the project or the attached file (but it would add more files in the project so this is not optimal).
[ethylvinylether.zip](https://github.com/cheminfo/nmrium/files/9524941/ethylvinylether.zip)
| test | add drag drop testcases wadjih could you check how to create a test case or if it exists already of a drag drop of a file or files you could try to drag drop one file and the spectrum should be displayed and there should be one line in the panel spectra on the right you could also drag drop many files and check that many spectra are loaded you could use the examples in the project or the attached file but it would add more files in the project so this is not optimal | 1 |
178,508 | 13,782,231,625 | IssuesEvent | 2020-10-08 17:19:52 | TheRenegadeCoder/sample-programs | https://api.github.com/repos/TheRenegadeCoder/sample-programs | closed | Add Groovy Testing | enhancement hacktoberfest tests | If possible, please provide the link(s) to a Docker Hub image which would support the language you are requesting:
- https://hub.docker.com/_/groovy
If possible, please provide a style guide which details file naming conventions for the language you would like supported:
- N/A
| 1.0 | Add Groovy Testing - If possible, please provide the link(s) to a Docker Hub image which would support the language you are requesting:
- https://hub.docker.com/_/groovy
If possible, please provide a style guide which details file naming conventions for the language you would like supported:
- N/A
| test | add groovy testing if possible please provide the link s to a docker hub image which would support the language you are requesting if possible please provide a style guide which details file naming conventions for the language you would like supported n a | 1 |
36,856 | 18,016,920,662 | IssuesEvent | 2021-09-16 14:49:14 | cBioPortal/cbioportal | https://api.github.com/repos/cBioPortal/cbioportal | opened | login popup window loads slowly | performance | When you click on login it takes ~5 seconds for the login window to load | True | login popup window loads slowly - When you click on login it takes ~5 seconds for the login window to load | non_test | login popup window loads slowly when you click on login it takes seconds for the login window to load | 0 |
13,022 | 3,298,646,658 | IssuesEvent | 2015-11-02 15:26:12 | mapnik/mapnik | https://api.github.com/repos/mapnik/mapnik | opened | Ensure mapnik::value implements =,!=,<,<=,>,>= for all possible pre-mutations of value types | bug tests | this is fails with current master :
```c++
mapnik::value v1 = 1.01;
mapnik::value v2 = true;
REQUIRE( v1 > v2 )
```
due to lack of specialisation! It's very easy to make a mistake so we need to either come up with a different approach or at lease ensure all required specialisations are present | 1.0 | Ensure mapnik::value implements =,!=,<,<=,>,>= for all possible pre-mutations of value types - this is fails with current master :
```c++
mapnik::value v1 = 1.01;
mapnik::value v2 = true;
REQUIRE( v1 > v2 )
```
due to lack of specialisation! It's very easy to make a mistake so we need to either come up with a different approach or at lease ensure all required specialisations are present | test | ensure mapnik value implements for all possible pre mutations of value types this is fails with current master c mapnik value mapnik value true require due to lack of specialisation it s very easy to make a mistake so we need to either come up with a different approach or at lease ensure all required specialisations are present | 1 |
688,159 | 23,550,436,780 | IssuesEvent | 2022-08-21 18:54:07 | dnd-side-project/dnd-7th-2-backend | https://api.github.com/repos/dnd-side-project/dnd-7th-2-backend | closed | [Feature] ํ์ด์ด๋ฒ ์ด์ค๋ฅผ ์ด์ฉํ FCM Push ์๋ฒ ๊ตฌ์ถ | Type: Feature Priority: Medium Status: On Hold | ## ์ฌ์ ์ค๋น
* [x] ๊ฐ๋
๋ฐ ์๋ฃ ์กฐ์ฌ
* [x] ํ์ด์ด๋ฒ ์ด์ค ๊ณ์ ๊ด๋ จ ์ฒ๋ฆฌ
* [x] ๋ผ์ด๋ธ๋ฌ๋ฆฌ ํ์ต
## ๊ตฌํ
* [x] Practice ํ๋ก์ ํธ๋ก ํ
์คํธ
* [x] Push Notification FCM API ์๋ ํฌ์ธํธ ์ถ๊ฐ
* [x] ํ
์คํธ
* [ ] Android ์ฐ๋ ํ
์คํธ
| 1.0 | [Feature] ํ์ด์ด๋ฒ ์ด์ค๋ฅผ ์ด์ฉํ FCM Push ์๋ฒ ๊ตฌ์ถ - ## ์ฌ์ ์ค๋น
* [x] ๊ฐ๋
๋ฐ ์๋ฃ ์กฐ์ฌ
* [x] ํ์ด์ด๋ฒ ์ด์ค ๊ณ์ ๊ด๋ จ ์ฒ๋ฆฌ
* [x] ๋ผ์ด๋ธ๋ฌ๋ฆฌ ํ์ต
## ๊ตฌํ
* [x] Practice ํ๋ก์ ํธ๋ก ํ
์คํธ
* [x] Push Notification FCM API ์๋ ํฌ์ธํธ ์ถ๊ฐ
* [x] ํ
์คํธ
* [ ] Android ์ฐ๋ ํ
์คํธ
| non_test | ํ์ด์ด๋ฒ ์ด์ค๋ฅผ ์ด์ฉํ fcm push ์๋ฒ ๊ตฌ์ถ ์ฌ์ ์ค๋น ๊ฐ๋
๋ฐ ์๋ฃ ์กฐ์ฌ ํ์ด์ด๋ฒ ์ด์ค ๊ณ์ ๊ด๋ จ ์ฒ๋ฆฌ ๋ผ์ด๋ธ๋ฌ๋ฆฌ ํ์ต ๊ตฌํ practice ํ๋ก์ ํธ๋ก ํ
์คํธ push notification fcm api ์๋ ํฌ์ธํธ ์ถ๊ฐ ํ
์คํธ android ์ฐ๋ ํ
์คํธ | 0 |
450,713 | 31,987,845,336 | IssuesEvent | 2023-09-21 01:50:28 | broadinstitute/long-read-pipelines | https://api.github.com/repos/broadinstitute/long-read-pipelines | opened | Documentation for Epi and CNV pipelines | documentation | - [ ] `ONT/Epigenomics/ONTMethylation.wdl`
- [ ] `TechAgnostic/VariantCalling/LRCNVs.wdl`
I'll also document the touched dockers. | 1.0 | Documentation for Epi and CNV pipelines - - [ ] `ONT/Epigenomics/ONTMethylation.wdl`
- [ ] `TechAgnostic/VariantCalling/LRCNVs.wdl`
I'll also document the touched dockers. | non_test | documentation for epi and cnv pipelines ont epigenomics ontmethylation wdl techagnostic variantcalling lrcnvs wdl i ll also document the touched dockers | 0 |
17,530 | 3,620,218,948 | IssuesEvent | 2016-02-08 19:04:33 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | e2e flake: e2e-gce timed out because of "Services" test | area/test kind/flake priority/P1 team/cluster | 00:10:00 Build was aborted
00:10:00 [BeforeEach] Services
....
00:10:00 STEP: hitting the UDP service's LoadBalancer
00:10:00 Feb 3 23:54:11.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:16.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:20.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:24.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:28.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:32.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:32.235: INFO: Successfully reached udp://104.197.234.65:81
00:10:00 STEP: changing TCP service mutability-test back to type=ClusterIP
00:10:00 Feb 3 23:54:32.242: INFO: Waiting up to 20m0s for service "mutability-test" to have no LoadBalancer
00:10:00
00:10:00 ---------------------------------------------------------
00:10:00 Received interrupt. Running AfterSuite...
https://pantheon.corp.google.com/storage/browser/kubernetes-jenkins/logs/kubernetes-e2e-gce/10928/?project=kubernetes-jenkins
http://kubekins.dls.corp.google.com:8080/view/Critical%20Builds/job/kubernetes-e2e-gce/10928/consoleFull
@kubernetes/goog-cluster | 1.0 | e2e flake: e2e-gce timed out because of "Services" test - 00:10:00 Build was aborted
00:10:00 [BeforeEach] Services
....
00:10:00 STEP: hitting the UDP service's LoadBalancer
00:10:00 Feb 3 23:54:11.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:16.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:20.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:24.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:28.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:32.232: INFO: Testing UDP reachability of udp://104.197.234.65:81
00:10:00 Feb 3 23:54:32.235: INFO: Successfully reached udp://104.197.234.65:81
00:10:00 STEP: changing TCP service mutability-test back to type=ClusterIP
00:10:00 Feb 3 23:54:32.242: INFO: Waiting up to 20m0s for service "mutability-test" to have no LoadBalancer
00:10:00
00:10:00 ---------------------------------------------------------
00:10:00 Received interrupt. Running AfterSuite...
https://pantheon.corp.google.com/storage/browser/kubernetes-jenkins/logs/kubernetes-e2e-gce/10928/?project=kubernetes-jenkins
http://kubekins.dls.corp.google.com:8080/view/Critical%20Builds/job/kubernetes-e2e-gce/10928/consoleFull
@kubernetes/goog-cluster | test | flake gce timed out because of services test build was aborted services step hitting the udp service s loadbalancer feb info testing udp reachability of udp feb info testing udp reachability of udp feb info testing udp reachability of udp feb info testing udp reachability of udp feb info testing udp reachability of udp feb info testing udp reachability of udp feb info successfully reached udp step changing tcp service mutability test back to type clusterip feb info waiting up to for service mutability test to have no loadbalancer received interrupt running aftersuite kubernetes goog cluster | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.