Schema of the sample rows below (dtype and observed range or class count per column):

| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 classes |
| text_combine | string | length 96 – 211k |
| label | string | 2 classes |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
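A minimal sketch of loading and sanity-checking a dump with this schema; the file name `issues.csv` is hypothetical, and only the column names above are assumed:

```python
import pandas as pd

# Load the issue-event dump; "issues.csv" is a placeholder file name.
df = pd.read_csv("issues.csv")

# Every sample row is an IssuesEvent, so `type` carries no signal (1 class).
assert df["type"].nunique() == 1

# The two label columns should agree: label == "process" <=> binary_label == 1.
assert ((df["label"] == "process") == (df["binary_label"] == 1)).all()

# Class balance for the process / non_process task.
print(df["label"].value_counts(normalize=True))
```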
**Row 8,924** · id 12,032,525,281 · IssuesEvent · 2020-04-13 12:21:17
Repo: ESMValGroup/ESMValCore (https://api.github.com/repos/ESMValGroup/ESMValCore)
Action: closed
Title: Testing new preprocessor
Labels: enhancement preprocessor
Body:
We should add tests for the new backend.
A lot of work has already been done, most recently @jvegasbsc added to the tests here:
https://github.com/ESMValGroup/ESMValTool/tree/REFACTORING_fixes/tests
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: process
Text (normalized):
testing new preprocessor we should add tests for the new backend a lot of work has already been done most recently jvegasbsc added to the tests here
Binary label: 1
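The `text` column of the row above reads like `text_combine` after URL and markup removal, punctuation stripping, and lowercasing. A rough reconstruction of that normalization, assuming a simple regex pipeline (this is a guess at the preprocessing, not the dataset's actual code):

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the `text` column: drop URLs, markup, and punctuation,
    then lowercase and collapse whitespace."""
    s = re.sub(r"https?://\S+", " ", text_combine)   # remove URLs
    s = re.sub(r"<[^>]+>", " ", s)                   # remove HTML tags
    s = re.sub(r"[^a-zA-Z ]", " ", s)                # keep letters only
    return " ".join(s.lower().split())               # lowercase, collapse spaces

print(normalize("Testing new preprocessor - We should add tests for the new backend."))
# -> "testing new preprocessor we should add tests for the new backend"
```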
**Row 714,921** · id 24,580,890,901 · IssuesEvent · 2022-10-13 15:31:50
Repo: WFP-VAM/prism-app (https://api.github.com/repos/WFP-VAM/prism-app)
Action: closed
Title: Failed GetCapabilities request from WFP GeoServer prevents app from loading
Labels: bug priority:high
Body:
Something has gone wrong with the WMS GetCapabilities response from WFP's GeoServer. But instead of just failing and allowing the application to load, the loading circle is continuously spinning and the app is unusable.
This is an active issue on this surge deployment:
https://prism-moz.surge.sh/?
This request is failing:
https://geonode.wfp.org/geoserver/prism/wms?request=GetCapabilities
<img width="1440" alt="Screen Shot 2022-10-12 at 21 09 53" src="https://user-images.githubusercontent.com/3343536/195497327-1ebc7bed-8773-4e1b-a80e-7afb8319eb89.png">
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: non_process
Text (normalized):
failed getcapabilities request from wfp geoserver prevents app from loading something has gone wrong with the wms getcapabilities response from wfp s geoserver but instead of just failing and allowing the application to load the loading circle is continuously spinning and the app is unusable this is an active issue on this surge deployment this request is failing img width alt screen shot at src
Binary label: 0
**Row 21,068** · id 28,017,151,570 · IssuesEvent · 2023-03-28 00:14:56
Repo: nephio-project/sig-release (https://api.github.com/repos/nephio-project/sig-release)
Action: opened
Title: Create Nephio docker registries and make sure builds pushing images to them
Labels: area/process-mgmt sig/release
Body:
We need to make sure we have the docker registry available to push and pull images. We need to create the registry and ensure build pipelines can push/pull from them.
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: process
Text (normalized):
create nephio docker registries and make sure builds pushing images to them we need to make sure we have the docker registry available to push and pull images we need to create the resgistry and ensure build pipelines can push pull from them
Binary label: 1
**Row 6,052** · id 8,872,423,843 · IssuesEvent · 2019-01-11 15:23:32
Repo: kiwicom/orbit-components (https://api.github.com/repos/kiwicom/orbit-components)
Action: closed
Title: <Stepper> component
Labels: Enhancement Processing
Body:
## Description
Stepper allows users to easily change the amount of something by increments. It's great to use for passengers or baggage.
## Visual style

Zeplin: https://zpl.io/aRE0XMn
### Interactions
- Just button, same as default
## Functional specs
- It's based on Stepper from search (passenger popover), functional behavior should allow doing those same things.
- Could have similar props to InputStepper (min, max, steps, ...)
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: process
Text (normalized):
component description stepper allows users to easily change the amount of something by increments is great to use for passengers or baggage visual style zeplin interactions just button same as default functional specs it s based on stepper from search passenger popover functional behavior should allow doing those same things could have similar props to inputstepper min max steps
Binary label: 1
**Row 135,499** · id 19,584,483,941 · IssuesEvent · 2022-01-05 03:52:53
Repo: JaydenDev/Catalyst (https://api.github.com/repos/JaydenDev/Catalyst)
Action: closed
Title: Scroll cutoff in Preferences
Labels: bug help wanted good first issue design frontend high priority
Body:
If you drag the window to make it smaller and then open settings, you can scroll down, but the content is cut off where it says "bookmarks".
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: non_process
Text (normalized):
scroll cutoff in preferences if you drag the window and make it smaller then open settings then you can scroll down but then it is cut off where it says bookmarks
Binary label: 0
**Row 20,873** · id 27,659,299,361 · IssuesEvent · 2023-03-12 10:41:28
Repo: firebase/firebase-cpp-sdk (https://api.github.com/repos/firebase/firebase-cpp-sdk)
Action: closed
Title: [C++] Nightly Integration Testing Report
Labels: type: process nightly-testing
Body:
Note: This report excludes firestore. Please also check **[the report for firestore](https://github.com/firebase/firebase-cpp-sdk/issues/1178)**
***
<hidden value="integration-test-status-comment"></hidden>
### ❌ [build against repo] Integration test FAILED
Requested by @DellaBitta on commit cb719bd3b53128ac6e2d7b42c2bbef0f4df2e785
Last updated: Sat Mar 11 02:47 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4391533166)**
| Failures | Configs |
|----------|---------|
| database | [BUILD] [ERROR] [MacOS] [1/2 ssl_lib: arm64] [boringssl]<br/> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit cb719bd3b53128ac6e2d7b42c2bbef0f4df2e785
Last updated: Sat Mar 11 05:26 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4392233579)**
<hidden value="integration-test-status-comment"></hidden>
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: process
Text (normalized):
nightly integration testing report note this report excludes firestore please also check ❌ nbsp integration test failed requested by dellabitta on commit last updated sat mar pst failures configs database add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated sat mar pst
Binary label: 1
**Row 94,204** · id 15,962,352,168 · IssuesEvent · 2021-04-16 01:07:22
Repo: RG4421/crayons (https://api.github.com/repos/RG4421/crayons)
Action: opened
Title: CVE-2021-23341 (High) detected in prismjs-1.17.1.tgz, prismjs-1.20.0.tgz
Labels: security vulnerability
Body:
## CVE-2021-23341 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>prismjs-1.17.1.tgz</b>, <b>prismjs-1.20.0.tgz</b></p></summary>
<p>
<details><summary><b>prismjs-1.17.1.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz</a></p>
<p>Path to dependency file: crayons/package.json</p>
<p>Path to vulnerable library: crayons/node_modules/refractor/node_modules/prismjs/package.json</p>
<p>
Dependency Hierarchy:
- addon-storysource-5.3.14.tgz (Root Library)
- react-syntax-highlighter-11.0.2.tgz
- refractor-2.10.1.tgz
- :x: **prismjs-1.17.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>prismjs-1.20.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz</a></p>
<p>Path to dependency file: crayons/package.json</p>
<p>Path to vulnerable library: crayons/node_modules/prismjs/package.json</p>
<p>
Dependency Hierarchy:
- addon-storysource-5.3.14.tgz (Root Library)
- react-syntax-highlighter-11.0.2.tgz
- :x: **prismjs-1.20.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package prismjs before 1.23.0 is vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341>CVE-2021-23341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution: 1.23.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.17.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@storybook/addon-storysource:5.3.14;react-syntax-highlighter:11.0.2;refractor:2.10.1;prismjs:1.17.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.23.0"},{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.20.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@storybook/addon-storysource:5.3.14;react-syntax-highlighter:11.0.2;prismjs:1.20.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.23.0"}],"baseBranches":["next"],"vulnerabilityIdentifier":"CVE-2021-23341","vulnerabilityDetails":"The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
Index: True
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: non_process
Text (normalized):
cve high detected in prismjs tgz prismjs tgz cve high severity vulnerability vulnerable libraries prismjs tgz prismjs tgz prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href path to dependency file crayons package json path to vulnerable library crayons node modules refractor node modules prismjs package json dependency hierarchy addon storysource tgz root library react syntax highlighter tgz refractor tgz x prismjs tgz vulnerable library prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href path to dependency file crayons package json path to vulnerable library crayons node modules prismjs package json dependency hierarchy addon storysource tgz root library react syntax highlighter tgz x prismjs tgz vulnerable library found in base branch next vulnerability details the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree storybook addon storysource react syntax highlighter refractor prismjs isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename prismjs packageversion packagefilepaths istransitivedependency true dependencytree storybook addon storysource react syntax highlighter prismjs isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components vulnerabilityurl
Binary label: 0
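The vulnerability class in the record above, ReDoS, comes from regex backtracking that grows exponentially on crafted input. A self-contained illustration of the effect in Python (the actual vulnerable Prism patterns are not reproduced here):

```python
import re
import time

# Classic catastrophic-backtracking pattern: nested quantifiers.
# This illustrates ReDoS in general, not the prismjs regexes.
pattern = re.compile(r"^(a+)+$")

for n in (18, 20, 22, 24):
    attack = "a" * n + "b"        # almost-matching input forces backtracking
    start = time.perf_counter()
    pattern.match(attack)         # returns None, but only after ~2^n steps
    print(n, f"{time.perf_counter() - start:.3f}s")
```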
**Row 295,948** · id 9,102,655,504 · IssuesEvent · 2019-02-20 14:16:42
Repo: NickBurneConsulting-GivePanel/givepanel (https://api.github.com/repos/NickBurneConsulting-GivePanel/givepanel)
Action: opened
Title: Total raised column on Fundraiser Table
Labels: Priority
Body:
Should not include Gift Aid.
This will make it easier when onboarding non-UK countries.
How Gift Aid works on the Fundraiser details panel is fine.
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: non_process
Text (normalized):
total raised column on fundraiser table should not include gift aid this will make it easier when onboarding non uk countries how gift aid works on the fundraiser details panel is fine
Binary label: 0
**Row 109,421** · id 16,845,820,036 · IssuesEvent · 2021-06-19 13:08:46
Repo: mukul-seagate11/cortx-s3server (https://api.github.com/repos/mukul-seagate11/cortx-s3server)
Action: closed
Title: CVE-2020-24616 (High) detected in jackson-databind-2.6.6.jar
Labels: security vulnerability
Body:
## CVE-2020-24616 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: cortx-s3server/auth-utils/jclient/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.6/jackson-databind-2.6.6.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-s3-1.11.37.jar (Root Library)
- aws-java-sdk-core-1.11.37.jar
- :x: **jackson-databind-2.6.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/mukul-seagate11/cortx-s3server/commits/03f1533c44ecd1d636be384cd5d10a8eb1e25f47">03f1533c44ecd1d636be384cd5d10a8eb1e25f47</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.6 mishandles the interaction between serialization gadgets and typing, related to br.com.anteros.dbcp.AnterosDBCPDataSource (aka Anteros-DBCP).
<p>Publish Date: 2020-08-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24616>CVE-2020-24616</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616</a></p>
<p>Release Date: 2020-08-25</p>
<p>Fix Resolution: 2.9.10.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
Index: True
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: non_process
Text (normalized):
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file cortx auth utils jclient pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy aws java sdk jar root library aws java sdk core jar x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to br com anteros dbcp anterosdbcpdatasource aka anteros dbcp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
Binary label: 0
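Operationally, reports like the one above reduce to comparing a resolved artifact version against the fix resolution. A small sketch with the `packaging` library; the dependency list is hypothetical and only mirrors the hierarchy in the report:

```python
from packaging.version import Version

FIX = Version("2.9.10.6")  # fix resolution from the report

# Hypothetical resolved dependencies: (artifact, version)
deps = [("jackson-databind", "2.6.6"), ("aws-java-sdk-core", "1.11.37")]

for name, ver in deps:
    if name == "jackson-databind" and Version(ver) < FIX:
        print(f"{name} {ver} is below the fixed version {FIX}: upgrade")
```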
**Row 13,325** · id 15,788,046,099 · IssuesEvent · 2021-04-01 20:10:32
Repo: modm-io/modm (https://api.github.com/repos/modm-io/modm)
Action: closed
Title: Tags and Releases are missing.
Labels: process 📊
Body:
[Some](https://github.com/modm-io/modm/commit/c442b5ce1fc4630a37f2b3bf7e126ead8091f816) commits are marked as releases, although no GitHub [Releases](https://github.com/modm-io/modm/releases) and no [tags](https://github.com/modm-io/modm/tags) are defined.
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: process
Text (normalized):
tags and releases are missing commits are marked as releases although no github and not are defined
Binary label: 1
**Row 17,167** · id 22,743,401,261 · IssuesEvent · 2022-07-07 06:56:44
Repo: camunda/zeebe (https://api.github.com/repos/camunda/zeebe)
Action: opened
Title: Refactor StreamProcessor / Engine
Labels: kind/toil team/distributed team/process-automation
Body:
**Description**
Part of #9600
**Todo:**
- [ ] Find a good name for the StreamProcessor
- [ ] Rename the current StreamProcessor, e.g. StreamingPlatform (see #9602)
- [ ] Introduce a new StreamProcessor interface, which covers the reality (this needs to be adjusted during progressing with #9600 )
- [ ] init Method
- [ ] replay Method
- [ ] process Method (at first with no Result returned; only after parts of #9724 are done)
- [ ] onError Method
- [ ] Implement LifecycleAware?
- [ ] Create a new StreamProcessor implementation called Engine
- [ ] Move some code out from the StreamingPlatform, ReplayStateMachine and ProcessingStateMachine to the Engine, see POC #9602 and related branch
- [ ] :warning: OnError implementation can be completed after the writer change is done and we can return a Result, see #9724
- [ ] **Bonus:** If we create the Engine outside of the StreamingPlatform and give it to the builder, we can inject the dependencies outside and can simplify the StreamProcessor tests
A clear and concise description of what this issue is about.
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: process
Text (normalized):
refactor streamprocessor engine description part of todo find a good name for the streamprocessor rename the current streamprocessor e g streamingplatform see introduce a new streamprocessor interface which covers the reality this needs to be adjusted during progressing with init method replay method process method in the begin with no result returning only after parts of are done onerror method implement lifecycleaware create a new streamprocessor implementation called engine move some code out form the streamingplatform replaystatemachine and processingstatemachine to the engine see poc and related branch warning onerror implementation can be completed after the writer change is done and we can return an result see bonus if we create the engine outside of the streamingplatform and give it to the builder we can inject the dependencies outside and can simplify the streamprocessor tests a clear and concise description of what this issue is about
Binary label: 1
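The checklist in the record above amounts to extracting a small interface with init/replay/process/onError hooks and making the Engine one implementation of it. A rough shape of that idea, sketched in Python with hypothetical signatures (Zeebe itself is Java; nothing here is its actual API):

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class StreamProcessor(ABC):
    """Interface shape named in the issue: the streaming platform calls
    these hooks; the Engine becomes one concrete implementation."""

    @abstractmethod
    def init(self, context: Any) -> None: ...

    @abstractmethod
    def replay(self, record: Any) -> None: ...

    @abstractmethod
    def process(self, record: Any) -> Optional[Any]:
        """Initially returns no Result; a Result type comes later (#9724)."""

    @abstractmethod
    def on_error(self, error: Exception, record: Any) -> None: ...

class Engine(StreamProcessor):
    def init(self, context): pass
    def replay(self, record): pass
    def process(self, record): return None
    def on_error(self, error, record): pass
```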
**Row 386,359** · id 11,437,523,221 · IssuesEvent · 2020-02-05 00:15:29
Repo: Alluxio/alluxio (https://api.github.com/repos/Alluxio/alluxio)
Action: closed
Title: Remove unnecessary OSS logging due to deleteFileIfExists
Labels: area-ufs priority-low type-bug
Body:
**Alluxio Version:**
2.1.1
**Describe the bug**
Excessive unnecessary logging on checking object existence due to `deleteFileIfExists` using OSS
```
2020-02-04 22:02:03,816 WARN OSSUnderFileSystem - Failed to get Object io-test/.alluxio_ufs_blocks.alluxio.0x1D91AC0E01AB0165.tmp/234377707520, return null
com.aliyun.oss.OSSException: Not Found
[ErrorCode]: NoSuchKey
[RequestId]: 5E39E9DBB766D33436D131A4
[HostId]: null
at com.aliyun.oss.common.utils.ExceptionFactory.createOSSException(ExceptionFactory.java:105)
at com.aliyun.oss.internal.OSSErrorResponseHandler.handle(OSSErrorResponseHandler.java:56)
at com.aliyun.oss.common.comm.ServiceClient.handleResponse(ServiceClient.java:257)
at com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:140)
at com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:70)
at com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:83)
at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:145)
at com.aliyun.oss.internal.OSSObjectOperation.getObjectMetadata(OSSObjectOperation.java:496)
at com.aliyun.oss.OSSClient.getObjectMetadata(OSSClient.java:599)
at com.aliyun.oss.OSSClient.getObjectMetadata(OSSClient.java:589)
at alluxio.underfs.oss.OSSUnderFileSystem.getObjectStatus(OSSUnderFileSystem.java:243)
at alluxio.underfs.ObjectUnderFileSystem.isFile(ObjectUnderFileSystem.java:588)
at alluxio.underfs.UnderFileSystemWithLogging$30.call(UnderFileSystemWithLogging.java:543)
at alluxio.underfs.UnderFileSystemWithLogging$30.call(UnderFileSystemWithLogging.java:540)
at alluxio.underfs.UnderFileSystemWithLogging.call(UnderFileSystemWithLogging.java:949)
at alluxio.underfs.UnderFileSystemWithLogging.isFile(UnderFileSystemWithLogging.java:540)
at alluxio.concurrent.ManagedBlockingUfsForwarder$25.execute(ManagedBlockingUfsForwarder.java:344)
at alluxio.concurrent.ManagedBlockingUfsForwarder$25.execute(ManagedBlockingUfsForwarder.java:341)
at alluxio.concurrent.ManagedBlockingUfsForwarder$ManagedBlockingUfsMethod.block(ManagedBlockingUfsForwarder.java:596)
at alluxio.concurrent.jsr.ForkJoinPool.managedBlock(ForkJoinPool.java:1013)
at alluxio.concurrent.ForkJoinPoolHelper.safeManagedBlock(ForkJoinPoolHelper.java:41)
at alluxio.concurrent.ManagedBlockingUfsForwarder$ManagedBlockingUfsMethod.get(ManagedBlockingUfsForwarder.java:582)
at alluxio.concurrent.ManagedBlockingUfsForwarder.isFile(ManagedBlockingUfsForwarder.java:346)
at alluxio.util.UnderFileSystemUtils.deleteFileIfExists(UnderFileSystemUtils.java:76)
at alluxio.master.file.DefaultFileSystemMaster$PersistenceChecker.handleSuccess(DefaultFileSystemMaster.java:4254)
at alluxio.master.file.DefaultFileSystemMaster$PersistenceChecker.lambda$heartbeat$0(DefaultFileSystemMaster.java:4330)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
**To Reproduce**
when writing data to Alluxio async propagated to OSS
**Expected behavior**
This warning is unnecessary
**Urgency**
low
**Additional context**
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: non_process
Text (normalized):
remove unnecessary oss logging due to deletefileifexists alluxio version describe the bug excessive unnecessary logging on checking object existence due to deletefileifexists using oss warn ossunderfilesystem failed to get object io test alluxio ufs blocks alluxio tmp return null com aliyun oss ossexception not found nosuchkey null at com aliyun oss common utils exceptionfactory createossexception exceptionfactory java at com aliyun oss internal osserrorresponsehandler handle osserrorresponsehandler java at com aliyun oss common comm serviceclient handleresponse serviceclient java at com aliyun oss common comm serviceclient sendrequestimpl serviceclient java at com aliyun oss common comm serviceclient sendrequest serviceclient java at com aliyun oss internal ossoperation send ossoperation java at com aliyun oss internal ossoperation dooperation ossoperation java at com aliyun oss internal ossobjectoperation getobjectmetadata ossobjectoperation java at com aliyun oss ossclient getobjectmetadata ossclient java at com aliyun oss ossclient getobjectmetadata ossclient java at alluxio underfs oss ossunderfilesystem getobjectstatus ossunderfilesystem java at alluxio underfs objectunderfilesystem isfile objectunderfilesystem java at alluxio underfs underfilesystemwithlogging call underfilesystemwithlogging java at alluxio underfs underfilesystemwithlogging call underfilesystemwithlogging java at alluxio underfs underfilesystemwithlogging call underfilesystemwithlogging java at alluxio underfs underfilesystemwithlogging isfile underfilesystemwithlogging java at alluxio concurrent managedblockingufsforwarder execute managedblockingufsforwarder java at alluxio concurrent managedblockingufsforwarder execute managedblockingufsforwarder java at alluxio concurrent managedblockingufsforwarder managedblockingufsmethod block managedblockingufsforwarder java at alluxio concurrent jsr forkjoinpool managedblock forkjoinpool java at alluxio concurrent forkjoinpoolhelper safemanagedblock forkjoinpoolhelper java at alluxio concurrent managedblockingufsforwarder managedblockingufsmethod get managedblockingufsforwarder java at alluxio concurrent managedblockingufsforwarder isfile managedblockingufsforwarder java at alluxio util underfilesystemutils deletefileifexists underfilesystemutils java at alluxio master file defaultfilesystemmaster persistencechecker handlesuccess defaultfilesystemmaster java at alluxio master file defaultfilesystemmaster persistencechecker lambda heartbeat defaultfilesystemmaster java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java to reproduce when writing data to alluxio async propagated to oss expected behavior this warning is unnecessary urgency low additional context
Binary label: 0
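The record above boils down to an existence probe implemented as a GET that is expected to fail, with the miss logged at WARN. A generic sketch of the usual remedy, demoting the expected miss to DEBUG; all names here are illustrative, not Alluxio's API:

```python
import logging

log = logging.getLogger("ufs")

class NotFoundError(Exception):
    pass

def get_object_status(key: str):
    raise NotFoundError(key)  # stand-in for the OSS NoSuchKey response

def is_file(key: str) -> bool:
    try:
        get_object_status(key)
        return True
    except NotFoundError:
        # Expected outcome for an existence probe: log quietly, don't WARN.
        log.debug("object %s does not exist", key)
        return False

print(is_file("io-test/some-block.tmp"))  # False, with no WARN noise
```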
**Row 17,623** · id 23,442,845,593 · IssuesEvent · 2022-08-15 16:31:03
Repo: prisma/prisma (https://api.github.com/repos/prisma/prisma)
Action: closed
Title: Unable to delete Obj1 if Obj2 references it with Action: SetNull
Labels: bug/0-unknown kind/bug process/candidate topic: broken query topic: relations team/client topic: referential actions topic: referentialIntegrity
Body:
### Bug description
When using the `Action: SetNull` referential action for a model relationship the deletion fails and throws an error about violating a foreign key constraint:
```
Invalid `prisma.hub.delete()` invocation:\n\n\n Foreign key constraint failed on the field: `BatteryLevel_hubId_fkey (index)`
```
When I set the field with the foreign key to null ahead of the deletion, it works:
```ts
await prisma.batteryLevel.updateMany({ where: { hubId: id }, data: { hubId: null } })
```
### How to reproduce
1. Use the `Action: SetNull` Cascade option for a model relationship
2. Create a field of both objects
3. Try to delete the parent object
### Expected behavior
The foreign key field should be set to null and the deletion should succeed
### Prisma information
```prisma
model Hub {
id Int @id @default(autoincrement())
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
name String
batteryLevels BatteryLevel[]
}
model BatteryLevel {
id Int @id @default(autoincrement())
volts Float
percent Float
hubId Int?
hub Hub? @relation(fields: [hubId], references: [id], onDelete: SetNull)
createdAt DateTime @default(now())
}
```
### Environment & setup
- OS: Windows 10
- Database: PostgreSQL
- Node.js version: v16.14.2
### Prisma Version
```
prisma : 3.15.2
@prisma/client : 3.15.2
Current platform : windows
Query Engine (Node-API) : libquery-engine 461d6a05159055555eb7dfb337c9fb271cbd4d7e (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli 461d6a05159055555eb7dfb337c9fb271cbd4d7e (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core 461d6a05159055555eb7dfb337c9fb271cbd4d7e (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt 461d6a05159055555eb7dfb337c9fb271cbd4d7e (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : 461d6a05159055555eb7dfb337c9fb271cbd4d7e
Studio : 0.462.0
```
Index: 1.0
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: process
Text (normalized):
unable to delete if references it with action setnull bug description when using the action setnull referential action for a model relationship the deletion fails and throws an error about violating a foreign key constraint invalid prisma hub delete invocation n n n foreign key constraint failed on the field batterylevel hubid fkey index when i update the set the field with the foreign key to be null ahead of the deletion it works ts await prisma batterylevel updatemany where hubid id data hubid null how to reproduce use the action setnull cascade option for a model relationship create a field of both objects try to delete the parent object expected behavior the foreign key field should be set to null and the deletion should succeed prisma information prisma model hub id int id default autoincrement createdat datetime default now updatedat datetime default now updatedat name string batterylevels batterylevel model batterylevel id int id default autoincrement volts float percent float hubid int hub hub relation fields references ondelete setnull createdat datetime default now environment setup os windows database postgresql node js version prisma version prisma prisma client current platform windows query engine node api libquery engine at node modules prisma engines query engine windows dll node migration engine migration engine cli at node modules prisma engines migration engine windows exe introspection engine introspection core at node modules prisma engines introspection engine windows exe format binary prisma fmt at node modules prisma engines prisma fmt windows exe default engines hash studio
Binary label: 1
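At the database level, `onDelete: SetNull` corresponds to a foreign key declared with ON DELETE SET NULL, under which the parent delete in the report should succeed. A minimal sqlite3 sketch of that expected behavior (schema simplified from the report; Prisma itself is not involved):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
con.execute("CREATE TABLE hub (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE battery_level (
    id INTEGER PRIMARY KEY,
    hub_id INTEGER REFERENCES hub(id) ON DELETE SET NULL)""")
con.execute("INSERT INTO hub (id, name) VALUES (1, 'h1')")
con.execute("INSERT INTO battery_level (id, hub_id) VALUES (10, 1)")

con.execute("DELETE FROM hub WHERE id = 1")  # succeeds; child FK is nulled
print(con.execute("SELECT hub_id FROM battery_level").fetchone())  # (None,)
```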
**Row 11,520** · id 14,401,109,274 · IssuesEvent · 2020-12-03 13:18:32
Repo: DevExpress/testcafe-hammerhead (https://api.github.com/repos/DevExpress/testcafe-hammerhead)
Action: closed
Title: Scripts in iframe's srcdoc not executing
Labels: AREA: client SYSTEM: iframe processing TYPE: bug
Body:
### What is your Scenario?
An iframe with srcdoc containing a script.
### What is the Current behavior?
The script does not execute under TestCafe.

### What is the Expected behavior?
The script executes.

### What is your public web site URL?
<!-- Share a public accessible link to your web site or provide a simple app which we can run. -->
Your website URL (or attach your complete example):
```html
<!DOCTYPE html>
<html>
<body>
<iframe srcdoc="<script>document.write(123)</script>"></iframe>
</body>
</html>
```
<details>
<summary>Your complete app code (or attach your test files):</summary>
<!-- Paste your app code here: -->
```js
fixture `1`;
test.page('./page.html')('srcdoc', async t => {
await t.debug();
})
```
</details>
### Your Environment details:
* node.js version: v14.8.0
* browser name and version: Chrome 86.0.4240.198
* platform and version: Windows 10
* TestCafe version: 1.10.0-rc.4
|
1.0
|
Scripts in iframe's srcdoc not executing - ### What is your Scenario?
An iframe with srcdoc containing a script.
### What is the Current behavior?
The script does not execute under TestCafe.

### What is the Expected behavior?
The script executes.

### What is your public web site URL?
<!-- Share a public accessible link to your web site or provide a simple app which we can run. -->
Your website URL (or attach your complete example):
```html
<!DOCTYPE html>
<html>
<body>
<iframe srcdoc="<script>document.write(123)</script>"></iframe>
</body>
</html>
```
<details>
<summary>Your complete app code (or attach your test files):</summary>
<!-- Paste your app code here: -->
```js
fixture `1`;
test.page('./page.html')('srcdoc', async t => {
await t.debug();
})
```
</details>
### Your Environment details:
* node.js version: v14.8.0
* browser name and version: Chrome 86.0.4240.198
* platform and version: Windows 10
* TestCafe version: 1.10.0-rc.4
|
Label: process
Text (normalized):
scripts in iframe s srcdoc not executing what is your scenario an iframe with srcdoc containing a script what is the current behavior the script does not execute under testcafe what is the expected behavior the script executes what is your public web site url your website url or attach your complete example html document write your complete app code or attach your test files js fixture test page page html srcdoc async t await t debug your environment details node js version browser name and version chrome platform and version windows testcafe version rc
Binary label: 1
**Row 23,120** · id 11,853,088,060 · IssuesEvent · 2020-03-24 21:13:56
Repo: flutter/flutter (https://api.github.com/repos/flutter/flutter)
Action: reopened
Title: Frame rate is locked to 60FPS on devices with frame rate optimization
Labels: e: device-specific engine perf: speed severe: performance severe: rendering
Body:
All Flutter apps seem to be locked at 60FPS on One Plus 7 Pro that has 90Hz refresh rate. Every other app including Native apps and OpenGL games runs at 90FPS. Because of this Flutter apps feel slow compared to the rest of the operating system even though there are no performance issues or janks.
It seems apps have to somehow notify the OS that they can do higher frame rates, and One Plus OxygenOS seems to do this check and switch between 60 and 90 FPS modes. This [XDA article](https://www.xda-developers.com/oneplus-7-pro-true-90hz-display-mode/) explains how this works and how you can force all apps to use 90FPS. After applying the change mentioned in the article, Flutter apps run at 90FPS.
```
[✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Mac OS X 10.14.5 18F132, locale en-IN)
• Flutter version 1.5.4-hotfix.2 at /Users/ajinasokan/flutter
• Framework revision 7a4c33425d (8 weeks ago), 2019-04-29 11:05:24 -0700
• Engine revision 52c7a1e849
• Dart version 2.3.0 (build 2.3.0-dev.0.5 a1668566e5)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/ajinasokan/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = /Users/ajinasokan/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.2.1, Build version 10E1001
• ios-deploy 1.9.4
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 34.0.2
• Dart plugin version 183.5901
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.32.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 2.25.1
[✓] Connected device (1 available)
• GM1911 • bc329e0f • android-arm64 • Android 9 (API 28)
• No issues found!
```
Index: True
text_combine: Title + " - " + Body, verbatim duplicate (omitted)
Label: non_process
Text (normalized):
frame rate is locked to on devices with frame rate optimization all flutter apps seem to be locked at on one plus pro that has refresh rate every other app including native apps and opengl games runs at because of this flutter apps feel slow compared to the rest of the operating system even though there are no performance issues or janks it seems apps have to somehow notify the os that it can do higher framerates and one plus oxygenos seems to do this check and switches between and fps modes this explains how this works and how you can force all apps to use after applying the change mentioned in the article flutter apps run at flutter channel stable hotfix on mac os x locale en in • flutter version hotfix at users ajinasokan flutter • framework revision weeks ago • engine revision • dart version build dev android toolchain develop for android devices android sdk version • android sdk at users ajinasokan library android sdk • android ndk location not configured optional useful for native profiling support • platform android build tools • android home users ajinasokan library android sdk • java binary at applications android studio app contents jre jdk contents home bin java • java version openjdk runtime environment build release • all android licenses accepted ios toolchain develop for ios devices xcode • xcode at applications xcode app contents developer • xcode build version • ios deploy • cocoapods version android studio version • android studio at applications android studio app contents • flutter plugin version • dart plugin version • java version openjdk runtime environment build release vs code version • vs code at applications visual studio code app contents • flutter extension version connected device available • • • android • android api • no issues found
| 0
|
19,584
| 25,920,138,252
|
IssuesEvent
|
2022-12-15 21:05:27
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/filterprocessor] Broken `TestLoadingConfigOTTL` tests
|
bug priority:p1 processor/filter
|
Part of https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/17037
See https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/3706734880/jobs/6282715360
```
make[2]: Entering directory '/home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor'
go test -race -timeout 300s --tags="" ./...
--- FAIL: TestLoadingConfigOTTL (0.04s)
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_span (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_span
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_spanevent (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_spanevent
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_metric (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:33: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:33: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_metric
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_datapoint (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_datapoint
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_log (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_log
```
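The diffs above are just the guidance text that newer OTTL appends after the parse error, so the fix is to refresh the expected strings, or to pin the assertions to the stable prefix. A minimal sketch of the prefix idea, written in Python for illustration only (the actual tests are Go, and this helper is hypothetical):
```python
def assert_error_prefix(err_msg: str, expected_prefix: str) -> None:
    # Hypothetical helper: compare only the stable leading part of the error,
    # so appended hints ("unable to parse OTTL statement, ...") don't break tests.
    assert err_msg.startswith(expected_prefix), (
        f"expected prefix {expected_prefix!r}, got {err_msg!r}"
    )


assert_error_prefix(
    '1:24: unexpected token "[" (expected <opcomparison> Value); '
    "unable to parse OTTL statement, ensure the statement's syntax is correct",
    '1:24: unexpected token "[" (expected <opcomparison> Value)',
)
```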
|
1.0
|
[processor/filterprocessor] Broken `TestLoadingConfigOTTL` tests - Part of https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/17037
See https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/3706734880/jobs/6282715360
```
make[2]: Entering directory '/home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor'
go test -race -timeout 300s --tags="" ./...
--- FAIL: TestLoadingConfigOTTL (0.04s)
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_span (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_span
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_spanevent (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_spanevent
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_metric (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:33: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:33: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_metric
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_datapoint (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_datapoint
--- FAIL: TestLoadingConfigOTTL/filter/bad_syntax_log (0.00s)
config_test.go:938:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:938
Error: Error message not equal:
expected: "1:24: unexpected token \"[\" (expected <opcomparison> Value)"
actual : "1:24: unexpected token \"[\" (expected <opcomparison> Value); unable to parse OTTL statement, ensure the statement's syntax is correct; common mistakes include missing parentheses, missing double quotes and incorrect function name case"
Test: TestLoadingConfigOTTL/filter/bad_syntax_log
```
|
process
|
broken testloadingconfigottl tests part of see make entering directory home runner work opentelemetry collector contrib opentelemetry collector contrib processor filterprocessor go test race timeout tags fail testloadingconfigottl fail testloadingconfigottl filter bad syntax span config test go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib processor filterprocessor config test go error error message not equal expected unexpected token expected value actual unexpected token expected value unable to parse ottl statement ensure the statement s syntax is correct common mistakes include missing parentheses missing double quotes and incorrect function name case test testloadingconfigottl filter bad syntax span fail testloadingconfigottl filter bad syntax spanevent config test go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib processor filterprocessor config test go error error message not equal expected unexpected token expected value actual unexpected token expected value unable to parse ottl statement ensure the statement s syntax is correct common mistakes include missing parentheses missing double quotes and incorrect function name case test testloadingconfigottl filter bad syntax spanevent fail testloadingconfigottl filter bad syntax metric config test go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib processor filterprocessor config test go error error message not equal expected unexpected token expected value actual unexpected token expected value unable to parse ottl statement ensure the statement s syntax is correct common mistakes include missing parentheses missing double quotes and incorrect function name case test testloadingconfigottl filter bad syntax metric fail testloadingconfigottl filter bad syntax datapoint config test go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib processor filterprocessor config test go error error message not equal expected unexpected token expected value actual unexpected token expected value unable to parse ottl statement ensure the statement s syntax is correct common mistakes include missing parentheses missing double quotes and incorrect function name case test testloadingconfigottl filter bad syntax datapoint fail testloadingconfigottl filter bad syntax log config test go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib processor filterprocessor config test go error error message not equal expected unexpected token expected value actual unexpected token expected value unable to parse ottl statement ensure the statement s syntax is correct common mistakes include missing parentheses missing double quotes and incorrect function name case test testloadingconfigottl filter bad syntax log
| 1
|
317,715
| 27,260,266,020
|
IssuesEvent
|
2023-02-22 14:27:00
|
flexcompute/tidy3d
|
https://api.github.com/repos/flexcompute/tidy3d
|
closed
|
Better `assert_log_level`
|
testing
|
* Don't like that you pass the log level as an int and not a string, i.e. `assert_log_level(caplog, 30)` instead of `assert_log_level(caplog, 'warning')`
* It's potentially unnecessary to have to pass `caplog` to `assert_log_level`. I think we might be able to just use a fixture, e.g. `def test_something(assert_logs_warning)` or `def test_something(assert_log_level('warning'))`; a sketch of the fixture idea follows below.
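A minimal sketch of that fixture idea, assuming pytest's built-in `caplog`; the helper's name and shape are illustrative, not tidy3d's actual API:
```python
import logging

import pytest


@pytest.fixture
def assert_log_level(caplog):
    """Hypothetical fixture: accept a level *name* ('warning') instead of 30."""

    def _assert(level_name: str) -> None:
        expected = logging.getLevelName(level_name.upper())  # 'WARNING' -> 30
        observed = [record.levelno for record in caplog.records]
        assert expected in observed, f"no log record at level {level_name!r}"

    return _assert


def test_something(assert_log_level):
    logging.getLogger(__name__).warning("boom")
    assert_log_level("warning")
```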
|
1.0
|
Better `assert_log_level` - * Don't like that you pass the log level as an int and not a string, i.e. `assert_log_level(caplog, 30)` instead of `assert_log_level(caplog, 'warning')`
* It's potentially unnecessary to have to pass `caplog` to `assert_log_level`. I think we might be able to just use a fixture, e.g. `def test_something(assert_logs_warning)` or `def test_something(assert_log_level('warning'))`
|
non_process
|
better assert log level dont like that that you pass the log level as an int and not a string ie assert log level caplog instead of assert log level caplog warning it s potentially unnecessary to have to pass caplog to assert log level i think we might be able to just use a fixture def test somthing assert logs warning or def test something assert log level warning
| 0
|
13,599
| 8,293,055,846
|
IssuesEvent
|
2018-09-20 04:31:03
|
Microsoft/visualfsharp
|
https://api.github.com/repos/Microsoft/visualfsharp
|
closed
|
IL Generation: struct and struct records methods are slower
|
Feature Improvement Ready Tenet-Performance
|
Diffing the code generated for structs and struct records, I noticed that an extra copy is made in methods that use copy-and-update.
#### Repro steps
- Create a struct type
- Create a struct record
- implement the same method that copy and update the type
- compare IL both
``` F#
[<Struct>]
type Structure =
val Line: int
val OriginalLine: int
val StartOfLineAbsoluteOffset: int
new(l,ol,s) = { Line =l; OriginalLine = ol; StartOfLineAbsoluteOffset = s }
member x.NextLine =
Structure(x.Line+1, x.OriginalLine+1,x.StartOfLineAbsoluteOffset)
[<Struct>]
type Rec = {
Line: int
OriginalLine: int
StartOfLineAbsoluteOffset: int
} with
member x.NextLine =
{ x with Line = x.Line + 1; OriginalLine = x.OriginalLine+1}
```
#### Expected behavior
IL should be the same
#### Actual behavior
There are a few extra instructions for struct records:
``` IL
.maxstack 5
.locals init (valuetype Program/Rec V_0)
IL_0000: ldarg.0
IL_0001: ldobj Program/Rec
IL_0006: stloc.0
```
the rest is identical.
#### Known workarounds
Use struct types.
#### Related information
Is this needed? `Structure.NextLine` seems fine without it.
These lines seem to be generated by a call to the `mkAddrGet` function (TastOps.fs) for byref value types.
|
True
|
IL Generation: struct and struct records methods are slower - Diffing the code generated for structs and struct records, I noticed that an extra copy is made in methods that use copy-and-update.
#### Repro steps
- Create a struct type
- Create a struct record
- implement the same method that copy and update the type
- compare IL both
``` F#
[<Struct>]
type Structure =
val Line: int
val OriginalLine: int
val StartOfLineAbsoluteOffset: int
new(l,ol,s) = { Line =l; OriginalLine = ol; StartOfLineAbsoluteOffset = s }
member x.NextLine =
Structure(x.Line+1, x.OriginalLine+1,x.StartOfLineAbsoluteOffset)
[<Struct>]
type Rec = {
Line: int
OriginalLine: int
StartOfLineAbsoluteOffset: int
} with
member x.NextLine =
{ x with Line = x.Line + 1; OriginalLine = x.OriginalLine+1}
```
#### Expected behavior
IL should be the same
#### Actual behavior
There are a few extra instructions for struct records:
``` IL
.maxstack 5
.locals init (valuetype Program/Rec V_0)
IL_0000: ldarg.0
IL_0001: ldobj Program/Rec
IL_0006: stloc.0
```
the rest is identical.
#### Known workarounds
Use struct types.
#### Related information
Is this needed? `Structure.NextLine` seems fine without it.
These lines seem to be generated by a call to the `mkAddrGet` function (TastOps.fs) for byref value types.
|
non_process
|
il generation struct and struct records methods are slower doing a diff between structs and struct records generated code i noticed that an extra copy is done in methods using copy and update repro steps create a struct type create a struct record implement the same method that copy and update the type compare il both f type structure val line int val originalline int val startoflineabsoluteoffset int new l ol s line l originalline ol startoflineabsoluteoffset s member x nextline structure x line x originalline x startoflineabsoluteoffset type rec line int originalline int startoflineabsoluteoffset int with member x nextline x with line x line originalline x originalline expected behavior il should be the same actual behavior there is a few extra instructions for struct records il maxstack locals init valuetype program rec v il ldarg il ldobj program rec il stloc the rest is exactly similar known workarounds use struct types related information is this needed the structure nextline seems ok without it these lines seems generated by a call to mkaddrget function tastops fs for byref value types
| 0
|
31,045
| 11,865,707,942
|
IssuesEvent
|
2020-03-26 01:17:48
|
freedomofpress/securedrop-workstation
|
https://api.github.com/repos/freedomofpress/securedrop-workstation
|
closed
|
Remove non securedrop-client entries from sd-svs app menus
|
security
|
followup from https://github.com/freedomofpress/securedrop-workstation/issues/198:
> we have an icon for "securedrop-client" in the menu, but it's buried among the many other entries. To resolve this issue, let's update the app menu for sd-svs to contain only the relevant entry
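For context, Qubes dom0 ships a `qvm-appmenus` tool with a whitelist mode that fits this; a minimal sketch of driving it from dom0, where the exact flag syntax, whitelist path, and `.desktop` entry name are assumptions rather than anything verified against the SecureDrop salt states:
```python
import subprocess

# Hypothetical sketch, run in dom0: restrict sd-svs's app menu to the
# SecureDrop Client entry. The whitelist file is assumed to hold one
# .desktop file name per line; the entry name below is a placeholder.
with open("/tmp/sd-svs-whitelist", "w") as fh:
    fh.write("securedrop-client.desktop\n")

subprocess.run(
    ["qvm-appmenus", "--set-whitelist", "/tmp/sd-svs-whitelist", "sd-svs"],
    check=True,
)
```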
|
True
|
Remove non securedrop-client entries from sd-svs app menus - followup from https://github.com/freedomofpress/securedrop-workstation/issues/198:
> we have an icon for "securedrop-client" in the menu, but it's buried among the many other entries. To resolve this issue, let's update the app menu for sd-svs to contain only the relevant entry
|
non_process
|
remove non securedrop client entries from sd svs app menus followup from we have an icon for securedrop client in the menu but it s buried among the many other entries to resolve this issue let s update the app menu for sd svs to contain only the relevant entry
| 0
|
12,433
| 3,075,274,400
|
IssuesEvent
|
2015-08-20 12:49:42
|
mesosphere/marathon
|
https://api.github.com/repos/mesosphere/marathon
|
opened
|
Advanced JSON editing mode in app modal
|
design enhancement gui
|
As a user, I want to create an app using [all fields](http://mesosphere.github.io/marathon/docs/rest-api.html#post-v2-apps) in its definition from the GUI.
Currently the app modal does not expose all the fields available in the API. This means that a user cannot, for example, specify `forcePullImage=true` on a docker application.
The modal would become very unwieldy if all fields were available to be edited. We should therefore provide an advanced mode which allows the user to deal directly with the JSON app definition.
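Until the modal grows such an advanced mode, the full field set is reachable only through the REST API; a minimal sketch of posting an app that uses a field the form doesn't expose (the Marathon host below is a placeholder):
```python
import requests

# App definition exercising `forcePullImage`, which the app modal can't set.
app = {
    "id": "/docs/nginx",
    "cpus": 0.1,
    "mem": 64,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:latest", "forcePullImage": True},
    },
}

resp = requests.post("http://marathon:8080/v2/apps", json=app)
resp.raise_for_status()
```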
ACs
- a tab allows the user to switch between the form and the JSON editor
- switching between the two tabs preserves the latest edits in either mode
- the editor need only check for JSON validity
|
1.0
|
Advanced JSON editing mode in app modal - As a user, I want to create an app using [all fields](http://mesosphere.github.io/marathon/docs/rest-api.html#post-v2-apps) in its definition from the GUI.
Currently the app modal does not expose all the fields available in the API. This means that a user cannot, for example, specify `forcePullImage=true` on a docker application.
The modal would become very unwieldy if all fields were available to be edited. We should therefore provide an advanced mode which allows the user to deal directly with the JSON app definition.
ACs
- a tab allows the user to switch between the form and the JSON editor
- switching between the two tabs preserves the latest edits in either mode
- the editor need only check for JSON validity
|
non_process
|
advanced json editing mode in app modal as a user i want to create an app using in its definition from the gui currently the app modal does not expose all the fields available in the api this means that a user cannot for example specify forcepullimage true on a docker application the modal would become very unwieldy if all fields were available to be edited we should therefore provide an advanced mode which allows the user to deal directly with the json app definition acs a tab allows the user to switch between the form and the json editor switching between the two tabs preserves the latest edits in either mode the editor need only check for json validity
| 0
|
149,627
| 11,907,219,252
|
IssuesEvent
|
2020-03-30 21:50:00
|
pantsbuild/pants
|
https://api.github.com/repos/pantsbuild/pants
|
closed
|
TestArtifactCache.test_local_backed_remote_cache is flaky
|
flaky-test test-skipped
|
Looks like:
```
tests/python/pants_test/cache:artifact_cache stdout:
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /b/f/w
plugins: cov-2.8.1, timeout-1.3.4
collected 13 items
pants_test/cache/test_artifact_cache.py ...F....ssss. [100%]
=================================== FAILURES ===================================
_______________ TestArtifactCache.test_local_backed_remote_cache _______________
self = <pants_test.cache.test_artifact_cache.TestArtifactCache testMethod=test_local_backed_remote_cache>
def test_local_backed_remote_cache(self):
"""make sure that the combined cache finds what it should and that it backfills."""
with self.setup_server() as server:
with self.setup_local_cache() as local:
tmp = TempLocalArtifactCache(local.artifact_root, local.artifact_extraction_root, 0)
remote = RESTfulArtifactCache(
local.artifact_root, BestUrlSelector([server.url]), tmp
)
combined = RESTfulArtifactCache(
local.artifact_root, BestUrlSelector([server.url]), local
)
key = CacheKey("muppet_key", "fake_hash")
with self.setup_test_file(local.artifact_root) as path:
# No cache has key.
self.assertFalse(local.has(key))
self.assertFalse(remote.has(key))
self.assertFalse(combined.has(key))
# No cache returns key.
self.assertFalse(bool(local.use_cached_files(key)))
self.assertFalse(bool(remote.use_cached_files(key)))
self.assertFalse(bool(combined.use_cached_files(key)))
# Attempting to use key that no cache had should not change anything.
self.assertFalse(local.has(key))
self.assertFalse(remote.has(key))
self.assertFalse(combined.has(key))
# Add to only remote cache.
remote.insert(key, [path])
# After insertion to remote, remote and only remote should have key
self.assertFalse(local.has(key))
> self.assertTrue(remote.has(key))
E AssertionError: False is not true
pants_test/cache/test_artifact_cache.py:193: AssertionError
---------------------------- Captured stderr setup -----------------------------
Process Process-2:
Traceback (most recent call last):
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/b/f/w/pants_test/cache/cache_server.py", line 154, in _cache_server_process
httpd.serve_forever()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/socketserver.py", line 236, in serve_forever
ready = selector.select(poll_interval)
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
File "/b/f/w/pants/base/exception_sink.py", line 104, in handle_sigterm
raise self.SignalHandledNonLocalExit(signum, "SIGTERM")
pants.base.exception_sink.SignalHandler.SignalHandledNonLocalExit
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "PUT /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
------------------------------ Captured log call -------------------------------
ERROR pants.cache.artifact_cache:artifact_cache.py:94 Error while writing to artifact cache: Failed to PUT http://localhost:48035/muppet_key/fake_hash.tgz. Error: HTTPConnectionPool(host='localhost', port=48035): Max retries exceeded with url: /muppet_key/fake_hash.tgz (Caused by ProtocolError('Connection aborted.', BrokenPipeError(32, 'Broken pipe')))
--------------------------- Captured stderr teardown ---------------------------
Process Process-3:
Traceback (most recent call last):
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/b/f/w/pants_test/cache/cache_server.py", line 154, in _cache_server_process
httpd.serve_forever()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/socketserver.py", line 236, in serve_forever
ready = selector.select(poll_interval)
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
File "/b/f/w/pants/base/exception_sink.py", line 104, in handle_sigterm
raise self.SignalHandledNonLocalExit(signum, "SIGTERM")
pants.base.exception_sink.SignalHandler.SignalHandledNonLocalExit
=============================== warnings summary ===============================
pex_root/installed_wheels/b310034b9aeb62e68d8f271e153def66197e9d0d/toml-0.10.0-py2.py3-none-any.whl/toml/decoder.py:47
/b/f/w/pex_root/installed_wheels/b310034b9aeb62e68d8f271e153def66197e9d0d/toml-0.10.0-py2.py3-none-any.whl/toml/decoder.py:47: DeprecationWarning: invalid escape sequence \.
TIME_RE = re.compile("([0-9]{2}):([0-9]{2}):([0-9]{2})(\.([0-9]{3,6}))?")
-- Docs: https://docs.pytest.org/en/latest/warnings.html
============== 1 failed, 8 passed, 4 skipped, 1 warning in 4.40s ===============
Process Process-6:
Traceback (most recent call last):
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/b/f/w/pants_test/cache/cache_server.py", line 154, in _cache_server_process
httpd.serve_forever()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/socketserver.py", line 236, in serve_forever
ready = selector.select(poll_interval)
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
File "/b/f/w/pants/base/exception_sink.py", line 104, in handle_sigterm
raise self.SignalHandledNonLocalExit(signum, "SIGTERM")
pants.base.exception_sink.SignalHandler.SignalHandledNonLocalExit
```
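For what it's worth, the captured log shows the PUT to the stub cache server dying with a broken pipe, so `remote.has(key)` is legitimately false by the time the assertion runs. A common generic mitigation for timing-sensitive checks like this is to poll instead of asserting once; a minimal sketch (the `eventually` helper is hypothetical, not part of pants, and it wouldn't help if the insert itself keeps failing):
```python
import time


def eventually(predicate, timeout=5.0, interval=0.1):
    # Hypothetical helper: poll `predicate` until it holds or `timeout` lapses.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()


# In the failing test, instead of:
#     self.assertTrue(remote.has(key))
# one could write:
#     self.assertTrue(eventually(lambda: remote.has(key)))
```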
|
2.0
|
TestArtifactCache.test_local_backed_remote_cache is flaky - Looks like:
```
tests/python/pants_test/cache:artifact_cache stdout:
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /b/f/w
plugins: cov-2.8.1, timeout-1.3.4
collected 13 items
pants_test/cache/test_artifact_cache.py ...F....ssss. [100%]
=================================== FAILURES ===================================
_______________ TestArtifactCache.test_local_backed_remote_cache _______________
self = <pants_test.cache.test_artifact_cache.TestArtifactCache testMethod=test_local_backed_remote_cache>
def test_local_backed_remote_cache(self):
"""make sure that the combined cache finds what it should and that it backfills."""
with self.setup_server() as server:
with self.setup_local_cache() as local:
tmp = TempLocalArtifactCache(local.artifact_root, local.artifact_extraction_root, 0)
remote = RESTfulArtifactCache(
local.artifact_root, BestUrlSelector([server.url]), tmp
)
combined = RESTfulArtifactCache(
local.artifact_root, BestUrlSelector([server.url]), local
)
key = CacheKey("muppet_key", "fake_hash")
with self.setup_test_file(local.artifact_root) as path:
# No cache has key.
self.assertFalse(local.has(key))
self.assertFalse(remote.has(key))
self.assertFalse(combined.has(key))
# No cache returns key.
self.assertFalse(bool(local.use_cached_files(key)))
self.assertFalse(bool(remote.use_cached_files(key)))
self.assertFalse(bool(combined.use_cached_files(key)))
# Attempting to use key that no cache had should not change anything.
self.assertFalse(local.has(key))
self.assertFalse(remote.has(key))
self.assertFalse(combined.has(key))
# Add to only remote cache.
remote.insert(key, [path])
# After insertion to remote, remote and only remote should have key
self.assertFalse(local.has(key))
> self.assertTrue(remote.has(key))
E AssertionError: False is not true
pants_test/cache/test_artifact_cache.py:193: AssertionError
---------------------------- Captured stderr setup -----------------------------
Process Process-2:
Traceback (most recent call last):
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/b/f/w/pants_test/cache/cache_server.py", line 154, in _cache_server_process
httpd.serve_forever()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/socketserver.py", line 236, in serve_forever
ready = selector.select(poll_interval)
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
File "/b/f/w/pants/base/exception_sink.py", line 104, in handle_sigterm
raise self.SignalHandledNonLocalExit(signum, "SIGTERM")
pants.base.exception_sink.SignalHandler.SignalHandledNonLocalExit
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "GET /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "PUT /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz HTTP/1.1" 307 -
127.0.0.1 - - [30/Mar/2020 18:32:11] code 404, message File not found
127.0.0.1 - - [30/Mar/2020 18:32:11] "HEAD /muppet_key/fake_hash.tgz/__redir__ HTTP/1.1" 404 -
------------------------------ Captured log call -------------------------------
ERROR pants.cache.artifact_cache:artifact_cache.py:94 Error while writing to artifact cache: Failed to PUT http://localhost:48035/muppet_key/fake_hash.tgz. Error: HTTPConnectionPool(host='localhost', port=48035): Max retries exceeded with url: /muppet_key/fake_hash.tgz (Caused by ProtocolError('Connection aborted.', BrokenPipeError(32, 'Broken pipe')))
--------------------------- Captured stderr teardown ---------------------------
Process Process-3:
Traceback (most recent call last):
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/b/f/w/pants_test/cache/cache_server.py", line 154, in _cache_server_process
httpd.serve_forever()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/socketserver.py", line 236, in serve_forever
ready = selector.select(poll_interval)
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
File "/b/f/w/pants/base/exception_sink.py", line 104, in handle_sigterm
raise self.SignalHandledNonLocalExit(signum, "SIGTERM")
pants.base.exception_sink.SignalHandler.SignalHandledNonLocalExit
=============================== warnings summary ===============================
pex_root/installed_wheels/b310034b9aeb62e68d8f271e153def66197e9d0d/toml-0.10.0-py2.py3-none-any.whl/toml/decoder.py:47
/b/f/w/pex_root/installed_wheels/b310034b9aeb62e68d8f271e153def66197e9d0d/toml-0.10.0-py2.py3-none-any.whl/toml/decoder.py:47: DeprecationWarning: invalid escape sequence \.
TIME_RE = re.compile("([0-9]{2}):([0-9]{2}):([0-9]{2})(\.([0-9]{3,6}))?")
-- Docs: https://docs.pytest.org/en/latest/warnings.html
============== 1 failed, 8 passed, 4 skipped, 1 warning in 4.40s ===============
Process Process-6:
Traceback (most recent call last):
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/b/f/w/pants_test/cache/cache_server.py", line 154, in _cache_server_process
httpd.serve_forever()
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/socketserver.py", line 236, in serve_forever
ready = selector.select(poll_interval)
File "/pyenv-docker-build/versions/3.6.8/lib/python3.6/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
File "/b/f/w/pants/base/exception_sink.py", line 104, in handle_sigterm
raise self.SignalHandledNonLocalExit(signum, "SIGTERM")
pants.base.exception_sink.SignalHandler.SignalHandledNonLocalExit
```
|
non_process
|
testartifactcache test local backed remote cache is flaky looks like tests python pants test cache artifact cache stdout test session starts platform linux python pytest py pluggy rootdir b f w plugins cov timeout collected items pants test cache test artifact cache py f ssss failures testartifactcache test local backed remote cache self def test local backed remote cache self make sure that the combined cache finds what it should and that it backfills with self setup server as server with self setup local cache as local tmp templocalartifactcache local artifact root local artifact extraction root remote restfulartifactcache local artifact root besturlselector tmp combined restfulartifactcache local artifact root besturlselector local key cachekey muppet key fake hash with self setup test file local artifact root as path no cache has key self assertfalse local has key self assertfalse remote has key self assertfalse combined has key no cache returns key self assertfalse bool local use cached files key self assertfalse bool remote use cached files key self assertfalse bool combined use cached files key attempting to use key that no cache had should not change anything self assertfalse local has key self assertfalse remote has key self assertfalse combined has key add to only remote cache remote insert key after insertion to remote remote and only remote should have key self assertfalse local has key self asserttrue remote has key e assertionerror false is not true pants test cache test artifact cache py assertionerror captured stderr setup process process traceback most recent call last file pyenv docker build versions lib multiprocessing process py line in bootstrap self run file pyenv docker build versions lib multiprocessing process py line in run self target self args self kwargs file b f w pants test cache cache server py line in cache server process httpd serve forever file pyenv docker build versions lib socketserver py line in serve forever ready selector select poll interval file pyenv docker build versions lib selectors py line in select fd event list self poll poll timeout file b f w pants base exception sink py line in handle sigterm raise self signalhandlednonlocalexit signum sigterm pants base exception sink signalhandler signalhandlednonlocalexit captured stderr call head muppet key fake hash tgz http code message file not found head muppet key fake hash tgz redir http head muppet key fake hash tgz http code message file not found head muppet key fake hash tgz redir http get muppet key fake hash tgz http code message file not found get muppet key fake hash tgz redir http get muppet key fake hash tgz http code message file not found get muppet key fake hash tgz redir http head muppet key fake hash tgz http code message file not found head muppet key fake hash tgz redir http head muppet key fake hash tgz http code message file not found head muppet key fake hash tgz redir http head muppet key fake hash tgz http code message file not found head muppet key fake hash tgz redir http put muppet key fake hash tgz http head muppet key fake hash tgz http code message file not found head muppet key fake hash tgz redir http captured log call error pants cache artifact cache artifact cache py error while writing to artifact cache failed to put error httpconnectionpool host localhost port max retries exceeded with url muppet key fake hash tgz caused by protocolerror connection aborted brokenpipeerror broken pipe captured stderr teardown process process traceback most recent call last file 
pyenv docker build versions lib multiprocessing process py line in bootstrap self run file pyenv docker build versions lib multiprocessing process py line in run self target self args self kwargs file b f w pants test cache cache server py line in cache server process httpd serve forever file pyenv docker build versions lib socketserver py line in serve forever ready selector select poll interval file pyenv docker build versions lib selectors py line in select fd event list self poll poll timeout file b f w pants base exception sink py line in handle sigterm raise self signalhandlednonlocalexit signum sigterm pants base exception sink signalhandler signalhandlednonlocalexit warnings summary pex root installed wheels toml none any whl toml decoder py b f w pex root installed wheels toml none any whl toml decoder py deprecationwarning invalid escape sequence time re re compile docs failed passed skipped warning in process process traceback most recent call last file pyenv docker build versions lib multiprocessing process py line in bootstrap self run file pyenv docker build versions lib multiprocessing process py line in run self target self args self kwargs file b f w pants test cache cache server py line in cache server process httpd serve forever file pyenv docker build versions lib socketserver py line in serve forever ready selector select poll interval file pyenv docker build versions lib selectors py line in select fd event list self poll poll timeout file b f w pants base exception sink py line in handle sigterm raise self signalhandlednonlocalexit signum sigterm pants base exception sink signalhandler signalhandlednonlocalexit
| 0
|
757,862
| 26,532,814,758
|
IssuesEvent
|
2023-01-19 13:44:12
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Crash on wallet creation
|
crash priority/P2 QA/Yes release-notes/include feature/web3/wallet OS/Android
|
There is an intermittent crash on wallet creation with the following stack trace:
```
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$BraveWalletServiceGetSelectedCoinParams.<init>(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.mojo.bindings.Interface$AbstractProxy$HandlerImpl.getMessageReceiver(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.chrome.browser.app.domain.KeyringModel.update(KeyringModel.java:147)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.chrome.browser.app.domain.KeyringModel.keyringCreated(KeyringModel.java:299)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.brave_wallet.mojom.KeyringServiceObserver_Internal$Stub.accept(KeyringServiceObserver_Internal.java:277)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.RouterImpl.handleIncomingMessage(RouterImpl.java:238)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.RouterImpl$HandleIncomingMessageThunk.accept(RouterImpl.java:33)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector.readAndDispatchMessage(Connector.java:210)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector.readOutstandingMessages(Connector.java:176)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector.onWatcherResult(Connector.java:152)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector$WatcherCallback.onResult(Connector.java:142)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.system.impl.WatcherImpl.onHandleReady(WatcherImpl.java:56)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.MessageQueue.nativePollOnce(Native Method)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.MessageQueue.next(MessageQueue.java:335)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.Looper.loopOnce(Looper.java:186)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.Looper.loop(Looper.java:313)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.app.ActivityThread.main(ActivityThread.java:8751)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at java.lang.reflect.Method.invoke(Native Method)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:571)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1135)
```
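The trace points at `KeyringModel.update` calling through a `BraveWalletService` proxy that is still null when the `keyringCreated` observer fires. The generic fix is a guard before the call; a sketch of that pattern, in Python purely for illustration (the real code is Java, and every name here is a placeholder):
```python
class KeyringModel:
    def __init__(self):
        self.brave_wallet_service = None  # proxy bound later, maybe after callbacks
        self.update_pending = False

    def keyring_created(self, keyring_id):
        self.update()

    def update(self):
        service = self.brave_wallet_service
        if service is None:
            # Proxy not connected yet: record that an update is due instead
            # of dereferencing null (the NullPointerException in the trace).
            self.update_pending = True
            return
        service.get_selected_coin(self.on_selected_coin)

    def on_selected_coin(self, coin):
        pass  # placeholder for the real handler
```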
I could reproduce it on the 1.49.x release version, and it happens about 1 out of 5 times for me. STR:
1. Make a fresh Brave install
2. Settings->Wallet
3. Create new wallet
Observe a crash on creating wallet wizard.
A video of the crash:
https://user-images.githubusercontent.com/12011303/213286255-4062c73b-91e2-4c8f-8694-5dd2d4454b5e.mp4
|
1.0
|
Crash on wallet creation - There is an intermittent crash on wallet creation with the following stack trace:
```
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$BraveWalletServiceGetSelectedCoinParams.<init>(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
<OR> 2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.mojo.bindings.Interface$AbstractProxy$HandlerImpl.getMessageReceiver(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W java.lang.NullPointerException: Attempt to invoke virtual method 'void org.chromium.brave_wallet.mojom.BraveWalletService_Internal$Proxy.getSelectedCoin(org.chromium.brave_wallet.mojom.BraveWalletService$GetSelectedCoin_Response)' on a null object reference
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.chrome.browser.app.domain.KeyringModel.update(KeyringModel.java:147)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.chrome.browser.app.domain.KeyringModel.keyringCreated(KeyringModel.java:299)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.brave_wallet.mojom.KeyringServiceObserver_Internal$Stub.accept(KeyringServiceObserver_Internal.java:277)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.RouterImpl.handleIncomingMessage(RouterImpl.java:238)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.RouterImpl$HandleIncomingMessageThunk.accept(RouterImpl.java:33)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector.readAndDispatchMessage(Connector.java:210)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector.readOutstandingMessages(Connector.java:176)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector.onWatcherResult(Connector.java:152)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.bindings.Connector$WatcherCallback.onResult(Connector.java:142)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at org.chromium.mojo.system.impl.WatcherImpl.onHandleReady(WatcherImpl.java:56)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.MessageQueue.nativePollOnce(Native Method)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.MessageQueue.next(MessageQueue.java:335)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.Looper.loopOnce(Looper.java:186)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.os.Looper.loop(Looper.java:313)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at android.app.ActivityThread.main(ActivityThread.java:8751)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at java.lang.reflect.Method.invoke(Native Method)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:571)
2023-01-13 14:04:07.700 28199-28199 System.err pid-28199 W at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1135)
```
I could reproduce it on the 1.49.x release version, and it happens about 1 out of 5 times for me. STR:
1. Make a fresh Brave install
2. Settings->Wallet
3. Create new wallet
Observe a crash on creating wallet wizard.
A video of the crash:
https://user-images.githubusercontent.com/12011303/213286255-4062c73b-91e2-4c8f-8694-5dd2d4454b5e.mp4
|
non_process
|
crash on wallet creation there is a crash from time to time on wallet creation with such a stack trace system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium brave wallet mojom bravewalletservice internal bravewalletservicegetselectedcoinparams org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium brave wallet mojom bravewalletservice internal proxy getselectedcoin org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium brave wallet mojom bravewalletservice internal proxy getselectedcoin org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium brave wallet mojom bravewalletservice internal proxy getselectedcoin org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium brave wallet mojom bravewalletservice internal proxy getselectedcoin org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium brave wallet mojom bravewalletservice internal proxy getselectedcoin org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium mojo bindings interface abstractproxy handlerimpl getmessagereceiver org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w java lang nullpointerexception attempt to invoke virtual method void org chromium brave wallet mojom bravewalletservice internal proxy getselectedcoin org chromium brave wallet mojom bravewalletservice getselectedcoin response on a null object reference system err pid w at org chromium chrome browser app domain keyringmodel update keyringmodel java system err pid w at org chromium chrome browser app domain keyringmodel keyringcreated keyringmodel java system err pid w at org chromium brave wallet mojom keyringserviceobserver internal stub accept keyringserviceobserver internal java system err pid w at org chromium mojo bindings routerimpl handleincomingmessage routerimpl java system err pid w at org chromium mojo bindings routerimpl handleincomingmessagethunk accept routerimpl java system err pid w at org chromium mojo bindings connector readanddispatchmessage connector java system err pid w at org chromium mojo bindings connector readoutstandingmessages connector java system err pid w at org chromium mojo bindings connector onwatcherresult connector java system err pid w at org chromium mojo bindings connector watchercallback onresult connector java system err pid w at org chromium mojo system impl watcherimpl onhandleready watcherimpl java system err pid w at android os messagequeue nativepollonce native method system err pid w at android os messagequeue next messagequeue java system err pid w at android os looper looponce looper java system err pid w at android os looper loop looper java 
system err pid w at android app activitythread main activitythread java system err pid w at java lang reflect method invoke native method system err pid w at com android internal os runtimeinit methodandargscaller run runtimeinit java system err pid w at com android internal os zygoteinit main zygoteinit java i could replicate it on x release version and it happens out of times to me str make a fresh brave install settings wallet create new wallet observe a crash on creating wallet wizard a video of the crash
| 0
|
348,655
| 31,707,815,771
|
IssuesEvent
|
2023-09-09 00:19:26
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
It is better to update the tooltip of 'Select All' to 'Select All in page' in blob explorer
|
:heavy_check_mark: merged 🧪 testing :gear: blobs :beetle: regression :gear: adls gen2
|
**Storage Explorer Version**: 1.32.0-dev (93)
**Build Number**: 20230906.9
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.5 (Apple M1 Pro)
**Architecture**: x64/x64/x64
**How Found**: Exploratory testing
**Regression From**: Previous release (1.30.2)
## Steps to Reproduce ##
1. Expand one storage account -> Blob Containers.
2. Open one blob container.
3. Hover over the 'Select All' button on the toolbar.
4. Check whether the tooltip is 'Select All in Page'.
## Expected Experience ##
The tooltip is 'Select All in Page'.

## Actual Experience ##
The tooltip is 'Select all items'.

## Additional Context ##
This issue also reproduces for new ADLS Gen2 blob explorer.
|
1.0
|
It is better to update the tooltip of 'Select All' to 'Select All in page' in blob explorer - **Storage Explorer Version**: 1.32.0-dev (93)
**Build Number**: 20230906.9
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.5 (Apple M1 Pro)
**Architecture**: x64/x64/x64
**How Found**: Exploratory testing
**Regression From**: Previous release (1.30.2)
## Steps to Reproduce ##
1. Expand one storage account -> Blob Containers.
2. Open one blob container.
3. Hover over the 'Select All' button on the toolbar.
4. Check whether the tooltip is 'Select All in Page'.
## Expected Experience ##
The tooltip is 'Select All in Page'.

## Actual Experience ##
The tooltip is 'Select all items'.

## Additional Context ##
This issue also reproduces for new ADLS Gen2 blob explorer.
|
non_process
|
it is better to update the tooltip of select all to select all in page in blob explorer storage explorer version dev build number branch main platform os windows linux ubuntu macos ventura apple pro architecture how found exploratory testing regression from previous release steps to reproduce expand one storage account blob containers open one blob container hover over the select all button on the toolbar check whether the tooltip is select all in page expected experience the tooltip is select all in page actual experience the tooltip is select all items additional context this issue also reproduces for new adls blob explorer
| 0
|
38,912
| 19,598,671,936
|
IssuesEvent
|
2022-01-05 21:20:53
|
OpenNeuroOrg/openneuro
|
https://api.github.com/repos/OpenNeuroOrg/openneuro
|
closed
|
Server side rendering for very large dataset snapshots is slow
|
bug performance
|
**Describe the bug**
It does load but client side rendering for ds002685 takes about two seconds and server side is around ten seconds. Server is often faster for small datasets, so this could be optimized.
**To Reproduce**
Steps to reproduce the behavior:
1. Load a dataset with a high file count with server rendering
2. Observe long load time but quick loading after the server render finishes
**Expected behavior**
It shouldn't be different from client side performance at a minimum.
|
True
|
Server side rendering for very large dataset snapshots is slow - **Describe the bug**
It does load but client side rendering for ds002685 takes about two seconds and server side is around ten seconds. Server is often faster for small datasets, so this could be optimized.
**To Reproduce**
Steps to reproduce the behavior:
1. Load a dataset with a high file count with server rendering
2. Observe long load time but quick loading after the server render finishes
**Expected behavior**
It shouldn't be different from client side performance at a minimum.
|
non_process
|
server side rendering for very large dataset snapshots is slow describe the bug it does load but client side rendering for takes about two seconds and server side is around ten seconds server is often faster for small datasets so this could be optimized to reproduce steps to reproduce the behavior load a dataset with a high file count with server rendering observe long load time but quick loading after the server render finishes expected behavior it shouldn t be different from client side performance at a minimum
| 0
|
25,470
| 3,932,862,157
|
IssuesEvent
|
2016-04-25 17:08:04
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
C# Language Design Review, Apr 22, 2015
|
Area-Language Design Design Notes Language-C# Language-VB
|
# C# Language Design Review, Apr 22, 2015
## Agenda
See #1921 for an explanation of design reviews and how they differ from design meetings.
1. Expression tree extension
2. Nullable reference types
3. Facilitating wire formats
4. Bucketing
# Expression Trees
Expression trees are currently lagging behind the languages in terms of expressiveness. A full scale upgrade seems like an incredibly big investment, and doesn't seem worth the effort. For instance, implementing `dynamic` and `async` faithfully in expression trees would be daunting.
However, supporting `?.` and string interpolation seems doable even without introducing new kinds of nodes in the expression tree library. We should consider making this work.
# Nullable reference types
A big question facing us is the "two-type" versus the "three-type" approach: We want you to guard member access etc. behind null checks when values are meant to be null, and to prevent you from sticking or leaving null in variables that are not meant to be null. In the "three-type" approach, both "meant to be null" and "not meant to be null" are expressed as new type annotations (`T?` and `T!` respectively) and the existing syntax (`T`) takes on a legacy "unsafe" status. This is great for compatibility, but means that the existing syntax is unhelpful, and you'd only get full benefit of the nullability checking by completely rooting out its use and putting annotations everywhere.
The "two-type" approach still adds "meant to be null" annotations (`T?`), but holds that since you can now express when things *are* meant to be null, you should only use the existing syntax (`T`) when things are *not* meant to be null. This certainly leads to a simpler end result, and also means that you get full benefit of the feature immediately in the form of warnings on all existing unsafe null behavior! Therein of course also lies the problem with the "two-type" approach: in its simplest form it changes the meaning of unannotated `T` in a massively breaking way.
We think that the "three-type" approach is not very helpful, leads to massively rewritten over-adorned code, and is essentially not viable. The "two-type" approach seems desirable if there is an explicit step to opt in to the enforcement of "not meant to be null" on ordinary reference types. You can continue to use C# as it is, and you can even start to add `?` to types to force null checks. Then when you feel ready you can switch on the additional checks to prevent null from making it into reference types without '?'. This may lead to warnings that you can then either fix by adding further `?`s or by putting non-null values into the given variable, depending on your intent.
There are additional compatibility questions around evolution of libraries, but those are somewhat orthogonal: Maybe a library carries an assembly-level attribute saying it has "opted in", and that its unannotated types should be considered non-null.
There are still open design questions around generics and library compat.
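(A loose analogy in Python's optional typing — not C#, and purely illustrative with hypothetical functions: `Optional[T]` plays the role of `T?`, and a checker such as mypy forces the null guard before a possibly-None value can flow into a plain `T` position.)
```python
from typing import Optional

def greet(name: str) -> str:            # "not meant to be null" (plain T)
    return "hello " + name

def lookup(key: str) -> Optional[str]:  # "meant to be null" (the T? case)
    return None if key == "missing" else key

value = lookup("missing")
if value is not None:  # the guard the annotation forces on callers
    print(greet(value))
```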
# Wire formats
We should focus attention on making it easier to work with wire formats such as JSON, and in particular on how to support strongly typed logic over them without forcing them to be deserialized to strongly typed objects at runtime. Such deserialization is brittle, lossy and clunky as formats evolve out of sync, and extra members e.g. aren't kept and reserialized on the other end.
There's a range of directions we could take here. Assuming there are dictionary-style objects representing the JSON (or other wire data) in a weakly typed way, options include:
* Somehow supporting runtime conversions from such dictionaries to interfaces (and back)
* Compile-time only "types" a la TypeScript, which translate member access etc. to a well-known dictionary pattern
* Compile-time type providers a la F#, that allow custom specification not only of the compile-time types but also the code generated for access.
We'd need to think about construction, not just consumption.
``` c#
var thing = new Thing { name = "...", price = 123.45 }
```
Maybe `Thing` is an interface with an attribute on it:
``` c#
[Json] interface { string name; double price; }
```
Or maybe it is something else. This warrants further exploration; the right feature design here could be an extremely valuable tool for developers talking to wire formats - and who isn't?
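(A rough runtime analogy, in Python rather than C# and with hypothetical names: routing member access through a dictionary is the "well-known dictionary pattern" mentioned above, and it keeps unknown members intact for reserialization.)
```python
import json

class JsonView:
    """Illustrative wrapper: typed-looking member access over a plain dict."""

    def __init__(self, data):
        self._data = data  # keep the raw dictionary; all reads go through it

    def __getattr__(self, name):
        # Translate member access into a dictionary lookup.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name) from None

    def to_json(self):
        # Extra members the reader never touched survive reserialization.
        return json.dumps(self._data)

thing = JsonView(json.loads('{"name": "...", "price": 123.45, "extra": 1}'))
print(thing.name, thing.price)  # member access over the wire data
print(thing.to_json())          # "extra" is still there
```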
# Bucketing
We affirmed that the bucketing in issue #2136 reflects our priorities.
|
2.0
|
C# Language Design Review, Apr 22, 2015 - # C# Language Design Review, Apr 22, 2015
## Agenda
See #1921 for an explanation of design reviews and how they differ from design meetings.
1. Expression tree extension
2. Nullable reference types
3. Facilitating wire formats
4. Bucketing
# Expression Trees
Expression trees are currently lagging behind the languages in terms of expressiveness. A full scale upgrade seems like an incredibly big investment, and doesn't seem worth the effort. For instance, implementing `dynamic` and `async` faithfully in expression trees would be daunting.
However, supporting `?.` and string interpolation seems doable even without introducing new kinds of nodes in the expression tree library. We should consider making this work.
# Nullable reference types
A big question facing us is the "two-type" versus the "three-type" approach: We want you to guard member access etc. behind null checks when values are meant to be null, and to prevent you from sticking or leaving null in variables that are not meant to be null. In the "three-type" approach, both "meant to be null" and "not meant to be null" are expressed as new type annotations (`T?` and `T!` respectively) and the existing syntax (`T`) takes on a legacy "unsafe" status. This is great for compatibility, but means that the existing syntax is unhelpful, and you'd only get full benefit of the nullability checking by completely rooting out its use and putting annotations everywhere.
The "two-type" approach still adds "meant to be null" annotations (`T?`), but holds that since you can now express when things *are* meant to be null, you should only use the existing syntax (`T`) when things are *not* meant to be null. This certainly leads to a simpler end result, and also means that you get full benefit of the feature immediately in the form of warnings on all existing unsafe null behavior! Therein of course also lies the problem with the "two-type" approach: in its simplest form it changes the meaning of unannotated `T` in a massively breaking way.
We think that the "three-type" approach is not very helpful, leads to massively rewritten over-adorned code, and is essentially not viable. The "two-type" approach seems desirable if there is an explicit step to opt in to the enforcement of "not meant to be null" on ordinary reference types. You can continue to use C# as it is, and you can even start to add `?` to types to force null checks. Then when you feel ready you can switch on the additional checks to prevent null from making it into reference types without '?'. This may lead to warnings that you can then either fix by adding further `?`s or by putting non-null values into the given variable, depending on your intent.
There are additional compatibility questions around evolution of libraries, but those are somewhat orthogonal: Maybe a library carries an assembly-level attribute saying it has "opted in", and that its unannotated types should be considered non-null.
There are still open design questions around generics and library compat.
# Wire formats
We should focus attention on making it easier to work with wire formats such as JSON, and in particular on how to support strongly typed logic over them without forcing them to be deserialized to strongly typed objects at runtime. Such deserialization is brittle, lossy and clunky as formats evolve out of sync, and extra members e.g. aren't kept and reserialized on the other end.
There's a range of directions we could take here. Assuming there are dictionary-style objects representing the JSON (or other wire data) in a weakly typed way, options include:
* Somehow supporting runtime conversions from such dictionaries to interfaces (and back)
* Compile-time only "types" a la TypeScript, which translate member access etc. to a well-known dictionary pattern
* Compile-time type providers a la F#, that allow custom specification not only of the compile-time types but also the code generated for access.
We'd need to think about construction, not just consumption.
``` c#
var thing = new Thing { name = "...", price = 123.45 }
```
Maybe `Thing` is an interface with an attribute on it:
``` c#
[Json] interface { string name; double price; }
```
Or maybe it is something else. This warrants further exploration; the right feature design here could be an extremely valuable tool for developers talking to wire formats - and who isn't?
# Bucketing
We affirmed that the bucketing in issue #2136 reflects our priorities.
|
non_process
|
c language design review apr c language design review apr agenda see for an explanation of design reviews and how they differ from design meetings expression tree extension nullable reference types facilitating wire formats bucketing expression trees expression trees are currently lagging behind the languages in terms of expressiveness a full scale upgrade seems like an incredibly big investment and doesn t seem worth the effort for instance implementing dynamic and async faithfully in expression trees would be daunting however supporting and string interpolation seems doable even without introducing new kinds of nodes in the expression tree library we should consider making this work nullable reference types a big question facing us is the two type versus the three type approach we want you to guard member access etc behind null checks when values are meant to be null and to prevent you from sticking or leaving null in variables that are not meant to be null in the three type approach both meant to be null and not meant to be null are expressed as new type annotations t and t respectively and the existing syntax t takes on a legacy unsafe status this is great for compatibility but means that the existing syntax is unhelpful and you d only get full benefit of the nullability checking by completely rooting out its use and putting annotations everywhere the two type approach still adds meant to be null annotations t but holds that since you can now express when things are meant to be null you should only use the existing syntax t when things are not meant to be null this certainly leads to a simpler end result and also means that you get full benefit of the feature immediately in the form of warnings on all existing unsafe null behavior therein of course also lies the problem with the two type approach in its simplest form it changes the meaning of unannotated t in a massively breaking way we think that the three type approach is not very helpful leads to massively rewritten over adorned code and is essentially not viable the two type approach seems desirable if there is an explicit step to opt in to the enforcement of not meant to be null on ordinary reference types you can continue to use c as it is and you can even start to add to types to force null checks then when you feel ready you can switch on the additional checks to prevent null from making it into reference types without this may lead to warnings that you can then either fix by adding further s or by putting non null values into the given variable depending on your intent there are additional compatibility questions around evolution of libraries but those are somewhat orthogonal maybe a library carries an assembly level attribute saying it has opted in and that its unannotated types should be considered non null there are still open design questions around generics and library compat wire formats we should focus attention on making it easier to work with wire formats such as json and in particular on how to support strongly typed logic over them without forcing them to be deserialized to strongly typed objects at runtime such deserialization is brittle lossy and clunky as formats evolve out of sync and extra members e g aren t kept and reserialized on the other end there s a range of directions we could take here assuming there are dictionary style objects representing the json or other wire data in a weakly typed way options include somehow supporting runtime conversions from such dictionaries to interfaces and back compile time 
only types a la typescript which translate member access etc to a well known dictionary pattern compile time type providers a la f that allow custom specification not only of the compile time types but also the code generated for access we d need to think about construction not just consumption c var thing new thing name price maybe thing is an interface with an attribute on it c interface string name double price or maybe it is something else this warrants further exploration the right feature design here could be an extremely valuable tool for developers talking to wire formats and who isn t bucketing we affirmed that the bucketing in issue reflects our priorities
| 0
|
2,467
| 5,243,092,474
|
IssuesEvent
|
2017-01-31 19:46:07
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
closed
|
Custom processor to get trial source count data to CSV
|
enhancement Processors
|
# Description
For each trial, write a record of:
- ID
- SOURCE_COUNT (# of sources for that trial)
- PRIMARY_ID
- SECONDARY_ID{_1;_2;etc} - iterate over the ids to create columns
- {SOURCE_NAME} ("Yes" or "No") indicating if THIS source provided data.
Example:
```
ID | SOURCE_COUNT | PRIMARY_ID | SECONDARY_ID_1 | SECONDARY_ID_2 | EUCTR | HARP | ACTRN
{UUID} | 2 | A182 | B929 | | Yes | No | Yes
```
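A minimal sketch of producing the two artifacts (Python; the function name, the string-typed schema, and the example values are assumptions for illustration — the real processor would pull rows from the warehouse query listed in the tasks below):
```python
import csv
import json

# Columns follow the example above; SECONDARY_ID columns are expanded
# to the maximum count seen across all trials.
def write_trial_sources(rows, max_secondary, source_names):
    fieldnames = (["ID", "SOURCE_COUNT", "PRIMARY_ID"]
                  + [f"SECONDARY_ID_{i}" for i in range(1, max_secondary + 1)]
                  + source_names)
    with open("data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
        writer.writeheader()
        writer.writerows(rows)

    # Tabular Data Package wrapping the CSV; schema fields mirror the header.
    datapackage = {
        "name": "trial-sources",
        "resources": [{
            "path": "data.csv",
            "schema": {"fields": [{"name": n, "type": "string"}
                                  for n in fieldnames]},
        }],
    }
    with open("datapackage.json", "w") as f:
        json.dump(datapackage, f, indent=2)

# Both files would then be uploaded under BUCKET_NAME/adhoc/trial-sources/.
write_trial_sources(
    [{"ID": "uuid-1", "SOURCE_COUNT": 2, "PRIMARY_ID": "A182",
      "SECONDARY_ID_1": "B929", "EUCTR": "Yes", "HARP": "No", "ACTRN": "Yes"}],
    max_secondary=2, source_names=["EUCTR", "HARP", "ACTRN"],
)
```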
# Tasks
- [ ] Write the SQL query to get our required data from the warehouse.
- [ ] Write the [JSON Table Schema](http://dataprotocols.org/json-table-schema/) for the CSV we are creating.
- [ ] Write the [Data Package (Tabular Profile)](http://dataprotocols.org/tabular-data-package/) for the CSV we are creating (the JTS created above is the `schema` for our CSV resource).
- [ ] Target our S3 bucket, and then with a path/namespace of `adhoc` and a containing directory of `trial-sources` (so: `BUCKET_NAME/adhoc/trial-sources/`)
- [ ] In the directory, write:
- [ ] The datapackage to `datapackage.json`
- [ ] The CSV to `data.csv`
- [ ] Run the processor
- [ ] Tests
|
1.0
|
Custom processor to get trial source count data to CSV - # Description
For each trial, write a record of:
- ID
- SOURCE_COUNT (# of sources for that trial)
- PRIMARY_ID
- SECONDARY_ID{_1;_2;etc} - iterate over the ids to create columns
- {SOURCE_NAME} ("Yes" or "No") indicating if THIS source provided data.
Example:
```
ID | SOURCE_COUNT | PRIMARY_ID | SECONDARY_ID_1 | SECONDARY_ID_2 | EUCTR | HARP | ACTRN
{UUID} | 2 | A182 | B929 | | Yes | No | Yes
```
# Tasks
- [ ] Write the SQL query to get our required data from the warehouse.
- [ ] Write the [JSON Table Schema](http://dataprotocols.org/json-table-schema/) for the CSV we are creating.
- [ ] Write the [Data Package (Tabular Profile)](http://dataprotocols.org/tabular-data-package/) for the CSV we are creating (the JTS created above is the `schema` for our CSV resource).
- [ ] Target our S3 bucket, and then with a path/namespace of `adhoc` and a containing directory of `trial-sources` (so: `BUCKET_NAME/adhoc/trial-sources/`)
- [ ] In the directory, write:
- [ ] The datapackage to `datapackage.json`
- [ ] The CSV to `data.csv`
- [ ] Run the processor
- [ ] Tests
|
process
|
custom processor to get trial source count data to csv description for each trial write a record of id source count of sources for that trial primary id secondary id etc iterate over the ids to create columns source name yes or no indicating if this source provided data example id source count primary id secondary id secondary id euctr harp actrn uuid yes no yes tasks write the sql query to get our required data from the warehouse write the for the csv we are creating write the for the csv we are creating the jts created above is the schema for our csv resource target our bucket and then with a path namespace of adhoc and a containing directory of trial sources so bucket name adhoc trial sources in the directory write the datapackage to datapackage json the csv to data csv run the processor tests
| 1
|
9,867
| 12,880,748,231
|
IssuesEvent
|
2020-07-12 07:59:09
|
ruby-processing/propane
|
https://api.github.com/repos/ruby-processing/propane
|
closed
|
There is expected to be an issue with the video library on macosx
|
waiting for vanilla processing
|
### Problem
MacOS binaries should be in the `macosx` folder, but the video library is in `macosx64` folder
### Suggested fix
Rename the native binaries folder `macosx64` to `macosx` (in the library you downloaded to the `~/.propane` folder)
You should note that the library loader is scheduled for a major overhaul so it is probably not worth issuing a temporary fix (as I have for JRubyArt)
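If you want to script the rename in the meantime, something like this works (a sketch only; the exact location of the downloaded video library under `~/.propane` is an assumption):
```python
from pathlib import Path

# Hypothetical location of the video library's native binaries.
lib = Path.home() / ".propane" / "library" / "video"
src = lib / "macosx64"
if src.is_dir():
    # Rename macosx64 -> macosx so the library loader finds the binaries.
    src.rename(lib / "macosx")
```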
|
1.0
|
There is expected to be an issue with the video library on macosx - ### Problem
MacOS binaries should be in the `macosx` folder, but the video library is in `macosx64` folder
### Suggested fix
Rename the native binaries folder `macosx64` to `macosx` (in the library you downloaded to the `~/.propane` folder)
You should note that the library loader is scheduled for a major overhaul so it is probably not worth issuing a temporary fix (as I have for JRubyArt)
|
process
|
there is expected to be an issue with the video library on macosx problem macos binaries should be in the macosx folder but the video library is in folder suggested fix rename the native binaries folder to macosx in the library you downloaded to propane folder you should note that the library loader is scheduled for a major overhaul so it is probably not worth issuing a temporary fix as i have for jrubyart
| 1
|
222,326
| 17,407,512,484
|
IssuesEvent
|
2021-08-03 08:08:45
|
Azure/azure-sdk-for-js
|
https://api.github.com/repos/Azure/azure-sdk-for-js
|
closed
|
Azure Web PubSub Samples Issue
|
Client WebPubSub needs-team-triage test-manual-pass
|
1.
Section [link1](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub/samples/v1/javascript/directMessage.js#L20),[link2](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub/samples/v1/typescript/src/directMessage.ts#L20),[link3](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub/samples-dev/directMessage.ts#L20):

Reason:
According to the comments, it should use `sendToConnection`
Suggestion:
Update `sendToUser` to `sendToConnection`.
@lilyjma ,@ramya-rao-a ,@nickzhums ,@bterlson and @jongio for notification.
|
1.0
|
Azure Web PubSub Samples Issue - 1.
Section [link1](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub/samples/v1/javascript/directMessage.js#L20),[link2](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub/samples/v1/typescript/src/directMessage.ts#L20),[link3](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub/samples-dev/directMessage.ts#L20):

Reason:
According to the comments, it should use `sendToConnection`
Suggestion:
Update `sendToUser` to `sendToConnection`.
@lilyjma ,@ramya-rao-a ,@nickzhums ,@bterlson and @jongio for notification.
|
non_process
|
azure web pubsub samples issue section reason according to the comments it should use sendtoconnection suggestion update sendtouser to sendtoconnection lilyjma ramya rao a nickzhums bterlson and jongio for notification
| 0
|
298,265
| 25,809,740,813
|
IssuesEvent
|
2022-12-11 18:45:25
|
Spacha/PoliisiautoServer
|
https://api.github.com/repos/Spacha/PoliisiautoServer
|
closed
|
Add feature tests to the server
|
testing
|
* Test all endpoints - basically all the endpoints that are added in Postman
* Authentication and security in general
|
1.0
|
Add feature tests to the server - * Test all endpoints - basically all the endpoints that are added in Postman
* Authentication and security in general
|
non_process
|
add feature tests to the server test all endpoints basically all the endpoints that are added in postman authentication and security in general
| 0
|
170,567
| 6,447,769,964
|
IssuesEvent
|
2017-08-14 09:00:35
|
Caleydo/ordino
|
https://api.github.com/repos/Caleydo/ordino
|
opened
|
bug: welcome arrow not visible
|
bug low priority
|
* Release number or git hash: latest dev
* Web browser version and OS: firefox, win10
* Environment (local or deployed): deployed
see

|
1.0
|
bug: welcome arrow not visible - * Release number or git hash: latest dev
* Web browser version and OS: firefox, win10
* Environment (local or deployed): deployed
see

|
non_process
|
bug welcome arrow not visible release number or git hash latest dev web browser version and os firefox environment local or deployed deployed see
| 0
|
465
| 2,903,456,853
|
IssuesEvent
|
2015-06-18 13:33:06
|
pwittchen/prefser
|
https://api.github.com/repos/pwittchen/prefser
|
closed
|
Deploy new version of the library to Maven Central Repository
|
release process
|
Deploy library v. 1.0.5 after updating the version.
|
1.0
|
Deploy new version of the library to Maven Central Repository - Deploy library v. 1.0.5 after updating the version.
|
process
|
deploy new version of the library to maven central repository deploy library v after updating the version
| 1
|
22,267
| 30,820,377,984
|
IssuesEvent
|
2023-08-01 15:55:31
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
ProcessTest::testWaitStoppedDeadProcess() passing even though it should actually fail
|
Bug Process Status: Needs Review
|
### Symfony version(s) affected
*
### Description
The `ProcessTest::testWaitStoppedDeadProcess()` uses the `ErrorProcessInitiator.php` which itself includes the Composer autoload.php. This file, however, does not exist at all in `symfony/symfony`.
### How to reproduce
```
public function testWaitStoppedDeadProcess()
{
$process = $this->getProcess(self::$phpBin.' '.__DIR__.'/ErrorProcessInitiator.php -e '.self::$phpBin);
$process->start();
$process->setTimeout(2);
$process->wait();
$this->assertFalse($process->isRunning());
var_dump($process->getOutput());
}
```
Then run `./phpunit src/Symfony/Component/Process --filter ProcessTest::testWaitStoppedDeadProcess`
The output will contain errors saying the `autoload.php` could not be found.
### Possible Solution
Not entirely sure because I did not write that test. Maybe it would be enough to include the `autoload.php` of the mono repository.
Also, I think we should output some `dummy` content and assert that output in order to have the test fail if suddenly something doesn't work anymore. At the moment, this is kind of a false-positive.
### Additional Context
_No response_
|
1.0
|
ProcessTest::testWaitStoppedDeadProcess() passing even though it should actually fail - ### Symfony version(s) affected
*
### Description
The `ProcessTest::testWaitStoppedDeadProcess()` uses the `ErrorProcessInitiator.php` which itself includes the Composer autoload.php. This file, however, does not exist at all in `symfony/symfony`.
### How to reproduce
```
public function testWaitStoppedDeadProcess()
{
$process = $this->getProcess(self::$phpBin.' '.__DIR__.'/ErrorProcessInitiator.php -e '.self::$phpBin);
$process->start();
$process->setTimeout(2);
$process->wait();
$this->assertFalse($process->isRunning());
var_dump($process->getOutput());
}
```
Then run `./phpunit src/Symfony/Component/Process --filter ProcessTest::testWaitStoppedDeadProcess`
The output will contain errors saying the `autoload.php` could not be found.
### Possible Solution
Not entirely sure because I did not write that test. Maybe it would be enough to include the `autoload.php` of the mono repository.
Also, I think we should output some `dummy` content and assert that output in order to have the test fail if suddenly something doesn't work anymore. At the moment, this is kind of a false-positive.
### Additional Context
_No response_
|
process
|
processtest testwaitstoppeddeadprocess passing even though it should actually fail symfony version s affected description the processtest testwaitstoppeddeadprocess uses the errorprocessinitiator php which itself includes the composer autoload php this file however does not exist at all in symfony symfony how to reproduce public function testwaitstoppeddeadprocess process this getprocess self phpbin dir errorprocessinitiator php e self phpbin process start process settimeout process wait this assertfalse process isrunning var dump process getoutput then run phpunit src symfony component process filter processtest testwaitstoppeddeadprocess the output will contain errors saying the autoload php could not be found possible solution not entirely sure because i did not write that test maybe it would be enough to include the autoload php of the mono repository also i think we should output some dummy content and assert that output in order to have the test fail if suddenly something doesn t work anymore at the moment this is kind of a false positive additional context no response
| 1
|
309,625
| 26,671,135,193
|
IssuesEvent
|
2023-01-26 10:22:35
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group2/dashboard·ts - lens app - group 2 lens dashboard tests CSV export action exists in panel context menu
|
Team:Visualizations failed-test
|
A test failed on a tracked branch
```
Error: expected testSubject(embeddablePanelAction-ACTION_EXPORT_CSV) to exist
at TestSubjects.existOrFail (test_subjects.ts:71:13)
at Context.<anonymous> (dashboard.ts:158:7)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/25904#0185dfe2-de82-49bb-87a4-203dda69c1b8)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group2/dashboard·ts","test.name":"lens app - group 2 lens dashboard tests CSV export action exists in panel context menu","test.failCount":1}} -->
|
1.0
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group2/dashboard·ts - lens app - group 2 lens dashboard tests CSV export action exists in panel context menu - A test failed on a tracked branch
```
Error: expected testSubject(embeddablePanelAction-ACTION_EXPORT_CSV) to exist
at TestSubjects.existOrFail (test_subjects.ts:71:13)
at Context.<anonymous> (dashboard.ts:158:7)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/25904#0185dfe2-de82-49bb-87a4-203dda69c1b8)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group2/dashboard·ts","test.name":"lens app - group 2 lens dashboard tests CSV export action exists in panel context menu","test.failCount":1}} -->
|
non_process
|
failing test chrome x pack ui functional tests x pack test functional apps lens dashboard·ts lens app group lens dashboard tests csv export action exists in panel context menu a test failed on a tracked branch error expected testsubject embeddablepanelaction action export csv to exist at testsubjects existorfail test subjects ts at context dashboard ts at object apply wrap function js first failure
| 0
|
1,948
| 4,770,934,480
|
IssuesEvent
|
2016-10-26 16:31:31
|
mozilla/tofino
|
https://api.github.com/repos/mozilla/tofino
|
closed
|
The commandline API doesn't properly check if running in a test or development environment
|
backend:main-process bug in progress
|
The following line in app/main/command-line.js:
```process.env.NODE_ENV !== 'development' && process.env.NODE_ENV !== 'test'```
is completely broken since the node environment can never be "test" (instead, a `TEST` flag exists on the node environment variables), and the "development" build needs to be tested via the build config, since tests can run in production mode but on development builds.
|
1.0
|
The commandline API doesn't properly check if running in a test or development environment - The following line in app/main/command-line.js:
```process.env.NODE_ENV !== 'development' && process.env.NODE_ENV !== 'test'```
is completely broken since the node environment can never be "test" (instead, a `TEST` flag exists on the node environment variables), and the "development" build needs to be tested via the build config, since tests can run in production mode but on development builds.
|
process
|
the commandline api doesn t properly check if running in a test or development environment the following line in app main command line js process env node env development process env node env test is completely broken since the node environment can never be test instead a test flag exists on the node environment variables and the development build needs to be tested via the build config since tests can run in production mode but on development builds
| 1
|
15,601
| 19,723,955,613
|
IssuesEvent
|
2022-01-13 17:58:57
|
dtcenter/MET
|
https://api.github.com/repos/dtcenter/MET
|
closed
|
Modify the interpretation of the message_type_group_map values to support the use of regular expressions.
|
type: enhancement priority: high requestor: Community reporting: DTC NCAR Base required: FOR OFFICIAL RELEASE MET: PreProcessing Tools (Point)
|
## Describe the Enhancement ##
This issue arose via METplus Discussions dtcenter/METplus#1232. While the user was able to run madis2nc to compute time summaries, he was NOT able to get Point-Stat to read them to verify forecasts of daily temperature min/max.
I was able to replicate the problem using the sample data he provided in this [comment](https://github.com/dtcenter/METplus/discussions/1232#discussioncomment-1653826). Close inspection reveals that madis2nc is writing the output level values as bad data. Next I inspected the output from the nightly build and found the same to be true there.
```
# on kiowa
ncdump -v obs_lvl NB20211116/MET-develop/test_output/madis2nc/metar_20120409_time_summary.nc
obs_lvl = _, _, _, _, _, _, _, _, _, _, _, _, _, _, _,
```
In general, Point/Ensemble-Stat have no way of processing observations with a bad level value.
However non-time-summary output from madis2nc does work in Point/Ensemble-Stat because of special handling for "surface" message types. The non-time-summary madis2nc output for METAR inputs has message_type = ADPSFC. However the time-summary output has message_type = ADPSFC_MIN_030000 (for example). Since that string is NOT included in the surface entry of the message_type_group_map, Point/Ensemble-Stat cannot process those observations.
```
message_type_group_map = [
{ key = "SURFACE"; val = "ADPSFC,SFCSHP,MSONET,ADPSFC_MIN_030000,ADPSFC_MAX_030000"; },
```
This task is to modify the processing of each entry in the comma-separated "val" string. Interpret each entry as a regular expression instead of just doing string matching. Care must be given to differentiate between commas inside of RE's versus those that separate the list items.
Once that works, consider updating the message_type_group_map settings in the default config file to match any message_type that begins with the specified string.
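To illustrate the intended matching logic (Python here purely for illustration — MET itself is C++, and the map entry below is hypothetical; the comma-inside-RE handling is exactly the caveat noted above):
```python
import re

val = "ADPSFC.*,SFCSHP,MSONET"  # hypothetical map entry using an RE

def message_type_matches(val, message_type):
    # Naive comma split; a real parser must not split on commas that
    # occur inside a regular expression, e.g. in a {m,n} quantifier.
    entries = [e.strip() for e in val.split(",")]
    return any(re.fullmatch(e, message_type) for e in entries)

print(message_type_matches(val, "ADPSFC_MIN_030000"))  # True via ADPSFC.*
print(message_type_matches(val, "ADPUPA"))             # False
```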
### Time Estimate ###
1 day?
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2702691
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required: @hsoh-u
- [x] Select **scientist(s)** or **no scientist** required: none required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Organization** level **Project** for support of the current coordinated release
- [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
No impacts.
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
1.0
|
Modify the interpretation of the message_type_group_map values to support the use of regular expressions. - ## Describe the Enhancement ##
This issue arose via METplus Discussions dtcenter/METplus#1232. While the user was able to run madis2nc to compute time summaries, he was NOT able to get Point-Stat to read them to verify forecasts of daily temperature min/max.
I was able to replicate the problem using the sample data he provided in this [comment](https://github.com/dtcenter/METplus/discussions/1232#discussioncomment-1653826). Close inspection reveals that madis2nc is writing the output level values as bad data. Next I inspected the output from the nightly build and found the same to be true there.
```
# on kiowa
ncdump -v obs_lvl NB20211116/MET-develop/test_output/madis2nc/metar_20120409_time_summary.nc
obs_lvl = _, _, _, _, _, _, _, _, _, _, _, _, _, _, _,
```
In general, Point/Ensemble-Stat have no way of processing observations with a bad level value.
However non-time-summary output from madis2nc does work in Point/Ensemble-Stat because of special handling for "surface" message types. The non-time-summary madis2nc output for METAR inputs has message_type = ADPSFC. However the time-summary output has message_type = ADPSFC_MIN_030000 (for example). Since that string is NOT included in the surface entry of the message_type_group_map, Point/Ensemble-Stat cannot process those observations.
```
message_type_group_map = [
{ key = "SURFACE"; val = "ADPSFC,SFCSHP,MSONET,ADPSFC_MIN_030000,ADPSFC_MAX_030000"; },
```
This task is to modify the processing of each entry in the comma-separated "val" string. Interpret each entry as a regular expression instead of just doing string matching. Care must be given to differentiate between commas inside of RE's versus those that separate the list items.
Once that works, consider updating the message_type_group_map settings in the default config file to match any message_type that begins with the specified string.
### Time Estimate ###
1 day?
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2702691
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required: @hsoh-u
- [x] Select **scientist(s)** or **no scientist** required: none required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Organization** level **Project** for support of the current coordinated release
- [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
No impacts.
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
process
|
modify the interpretation of the message type group map values to support the use of regular expressions describe the enhancement this issue arose via metplus discussions dtcenter metplus while the user was able to run to compute time summaries he was not able to get point stat to read them to verify forecasts of daily temperature min max i was able to replicate the problem using the sample data he provided in this close inspection reveals that is writing the output level values as bad data next i inspected the output from the nightly build and found the same to be true there on kiowa ncdump v obs lvl met develop test output metar time summary nc obs lvl in general point ensemble stat have no way of processing observations with a bad level value however non time summary output from does work in point ensemble stat because of special handling for surface message types the non time summary output for metar inputs has message type adpsfc however the time summary output has message type adpsfc min for example since that string is not included in the surface entry of the message type group map point ensemble stat cannot process those observations message type group map key surface val adpsfc sfcshp msonet adpsfc min adpsfc max this task is to modify the processing of each entry in the comma separated val string interpret each entry as a regular expression instead of just doing string matching care must be given to differentiate between commas inside of re s versus those that separate the list items once that works consider updating the message type group map settings in the default config file to match any message type that begins with the specified string time estimate day relevant deadlines list relevant project deadlines here or state none funding source define the metadata assignee select engineer s or no engineer required hsoh u select scientist s or no scientist required none required labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components no impacts enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
| 1
|
16,706
| 21,843,253,392
|
IssuesEvent
|
2022-05-18 00:15:00
|
lbryio/scribe
|
https://api.github.com/repos/lbryio/scribe
|
closed
|
Scribe writer does not use `multi_get` api
|
area: block processor type: feature request
|
The writer can be made a good bit faster by batching the RevertablePut and RevertableDelete ops given to `RevertableOpStack.extend_ops`, internally combining them into fewer `multi_get` calls instead of verifying integrity on each key.
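Schematically, the idea is something like the following (a sketch only — `db.multi_get` and the op objects here are assumptions illustrating the batching, not the actual scribe/rocksdb interfaces):
```python
def verify_ops_batched(db, ops):
    """Check all ops against the DB in one batched read."""
    keys = [op.key for op in ops]
    stored = dict(zip(keys, db.multi_get(keys)))  # one round trip for all keys
    for op in ops:
        # Previously: one db.get(op.key) per RevertablePut/RevertableDelete.
        op.verify(stored[op.key])
```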
|
1.0
|
Scribe writer does not use `multi_get` api - The writer can be made a good bit faster by batching the RevertablePut and RevertableDelete ops given to `RevertableOpStack.extend_ops`, internally combining them into fewer `multi_get` calls instead of verifying integrity on each key.
|
process
|
scribe writer does not use multi get api the writer can be made a good bit faster by batching the revertableput and revertabledelete ops given to revertableopstack extend ops internally combining them into fewer multi get calls instead of verifying integrity on each key
| 1
|
7,367
| 10,511,370,358
|
IssuesEvent
|
2019-09-27 15:18:37
|
prisma/studio
|
https://api.github.com/repos/prisma/studio
|
closed
|
Easy browser-based development setup
|
kind/improvement process/candidate
|
- live reload
- easy local linking
- example data sets
- preview deployments (e.g. Netlify)
|
1.0
|
Easy browser-based development setup - - live reload
- easy local linking
- example data sets
- preview deployments (e.g. Netlify)
|
process
|
easy browser based development setup live reload easy local linking example data sets preview deployments e g netlify
| 1
|
21,422
| 29,359,592,438
|
IssuesEvent
|
2023-05-28 00:36:59
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Hybrid / Belo Horizonte] Data Analyst at Coodesh
|
SALVADOR BANCO DE DADOS DATA SCIENCE PYTHON SQL DJANGO REQUISITOS PROCESSOS GITHUB INGLÊS SEGURANÇA UMA POWER BI MODELAGEM DE DADOS pandas ALOCADO Stale
|
## Job description:
This is a job opening from a partner of the Coodesh platform; by applying you will get access to the full information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/analista-de-dados-152349096?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>VP6</strong> is looking for a <strong><ins>Data Analyst</ins></strong> to join its team!</p>
<p>VP6 is a software house that develops tools for optimizing business processes. We build custom solutions for each client, integrable with other tools to strengthen your business operations, combining technology and performance.<br></p>
## VP6:
<p>VP6 is a software house that develops tools for optimizing business processes.</p>
<p>We build custom solutions for each client, integrable with other tools to strengthen your business operations, combining technology and performance.</p><a href='https://coodesh.com/empresas/vp6'>See more on the website</a>
## Skills:
- Python
- PowerBI
- Relational databases (SQL)
## Location:
Belo Horizonte
## Requirements:
- Basic knowledge of Django
- Python for data (pandas and other libraries), intermediate/advanced;
- Experience with data modeling and developing reports/dashboards in Power BI;
- Experience with Power BI security practices and configuration (for example: organizing datasets, workspaces, access levels, gateway);
- Experience with SQL queries;
- Experience with DAX;
- Continuous learning and sharing of insights;
- Teamwork skills and good client relationships.
## Benefits:
- GymPass;
- English;
- Health plan;
- Flexible hours;
- Results-based bonus.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Data Analyst at VP6](https://coodesh.com/vagas/analista-de-dados-152349096?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive every interaction of the process there. Use the **Pedir Feedback** (Request Feedback) option between one stage and the next of the position you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Allocated
#### Category
Data Science
|
1.0
|
[Hybrid / Belo Horizonte] Data Analyst at Coodesh - ## Job description:
This is a job opening from a partner of the Coodesh platform; by applying you will get access to the full information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/analista-de-dados-152349096?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>VP6</strong> is looking for a <strong><ins>Data Analyst</ins></strong> to join its team!</p>
<p>VP6 is a software house that develops tools for optimizing business processes. We build custom solutions for each client, integrable with other tools to strengthen your business operations, combining technology and performance.<br></p>
## VP6:
<p>VP6 is a software house that develops tools for optimizing business processes.</p>
<p>We build custom solutions for each client, integrable with other tools to strengthen your business operations, combining technology and performance.</p><a href='https://coodesh.com/empresas/vp6'>See more on the website</a>
## Skills:
- Python
- PowerBI
- Relational databases (SQL)
## Location:
Belo Horizonte
## Requirements:
- Basic knowledge of Django
- Python for data (pandas and other libraries), intermediate/advanced;
- Experience with data modeling and developing reports/dashboards in Power BI;
- Experience with Power BI security practices and configuration (for example: organizing datasets, workspaces, access levels, gateway);
- Experience with SQL queries;
- Experience with DAX;
- Continuous learning and sharing of insights;
- Teamwork skills and good client relationships.
## Benefits:
- GymPass;
- English;
- Health plan;
- Flexible hours;
- Results-based bonus.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Data Analyst at VP6](https://coodesh.com/vagas/analista-de-dados-152349096?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive every interaction of the process there. Use the **Pedir Feedback** (Request Feedback) option between one stage and the next of the position you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Allocated
#### Category
Data Science
|
process
|
data analyst at coodesh job description this is a job opening from a partner of the coodesh platform by applying you will get access to the full information about the company and its benefits watch for the redirect that will take you to a url with the personalized application pop up 👋 is looking for a data analyst to join its team is a software house that develops tools for optimizing business processes we build custom solutions for each client integrable with other tools to strengthen your business operations combining technology and performance is a software house that develops tools for optimizing business processes we build custom solutions for each client integrable with other tools to strengthen your business operations combining technology and performance skills python powerbi relational databases sql location belo horizonte requirements basic knowledge of django python for data pandas and other libraries intermediate advanced experience with data modeling and developing reports dashboards in power bi experience with power bi security practices and configuration for example organizing datasets workspaces access levels gateway experience with sql queries experience with dax continuous learning and sharing of insights teamwork skills and good client relationships benefits gympass english health plan flexible hours results based bonus how to apply apply exclusively through the coodesh platform at the following link after applying via the coodesh platform and validating your login you will be able to follow and receive every interaction of the process there use the pedir feedback request feedback option between one stage and the next of the position you applied to this will notify the recruiter responsible for the process at the company labels allocation allocated category data science
| 1
|
508,454
| 14,700,539,949
|
IssuesEvent
|
2021-01-04 10:22:04
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
m.chaturbate.com - video or audio doesn't play
|
browser-fenix engine-gecko ml-needsdiagnosis-false priority-important
|
<!-- @browser: Firefox Mobile 86.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:86.0) Gecko/86.0 Firefox/86.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64710 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://m.chaturbate.com/milkyredclover/
**Browser / Version**: Firefox Mobile 86.0
**Operating System**: Android
**Tested Another Browser**: Yes Opera
**Problem type**: Video or audio doesn't play
**Description**: Media controls are broken or missing
**Steps to Reproduce**:
Video does not play
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/2d96850e-4fde-4252-9fa1-5d3e44b7785e.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201225095506</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/e7816810-56f0-4617-a15e-67eb5862c926)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
m.chaturbate.com - video or audio doesn't play - <!-- @browser: Firefox Mobile 86.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:86.0) Gecko/86.0 Firefox/86.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64710 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://m.chaturbate.com/milkyredclover/
**Browser / Version**: Firefox Mobile 86.0
**Operating System**: Android
**Tested Another Browser**: Yes Opera
**Problem type**: Video or audio doesn't play
**Description**: Media controls are broken or missing
**Steps to Reproduce**:
Video does not play
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/2d96850e-4fde-4252-9fa1-5d3e44b7785e.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201225095506</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/e7816810-56f0-4617-a15e-67eb5862c926)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
m chaturbate com video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes opera problem type video or audio doesn t play description media controls are broken or missing steps to reproduce video does not play view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
6,309
| 9,311,282,864
|
IssuesEvent
|
2019-03-25 20:57:58
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Define support level for pkg-config based builds.
|
type: process
|
The current CMake configuration supports using [`pkg-config`](https://www.freedesktop.org/wiki/Software/pkg-config/) to discover the flags for our dependencies. Basically the CMake file calls `pkg-config(1)` to discover what compiler and linker flags are needed to find `libcurl`, or `OpenSSL`, or `protobuf`.
In some Linux distributions it seems like this is the preferred way to configure dependencies (e.g. Gentoo [packages google-cloud-cpp](https://github.com/gentoo/gentoo/tree/master/net-libs/google-cloud-cpp) using `pkg-config(1)`).
I think we have basically three options:
1. Drop the support for `pkg-config(1)` completely, we remove the CMake code that uses `pkg-config(1)` to discover the dependencies.
1. Support `pkg-config(1)` as "contributed", meaning we accept patches to improve it, but do not create a build that runs it or fix the configuration ourselves (this is basically the current support).
1. Support `pkg-config(1)` for "realsies", meaning we create at least one build with it, and keep it working.
|
1.0
|
Define support level for pkg-config based builds. - The current CMake configuration supports using [`pkg-config`](https://www.freedesktop.org/wiki/Software/pkg-config/) to discover the flags for our dependencies. Basically the CMake file calls `pkg-config(1)` to discover what compiler and linker flags are needed to find `libcurl`, or `OpenSSL`, or `protobuf`.
In some Linux distributions it seems like this is the preferred way to configure dependencies (e.g. Gentoo [packages google-cloud-cpp](https://github.com/gentoo/gentoo/tree/master/net-libs/google-cloud-cpp) using `pkg-config(1)`).
I think we have basically three options:
1. Drop the support for `pkg-config(1)` completely, we remove the CMake code that uses `pkg-config(1)` to discover the dependencies.
1. Support `pkg-config(1)` as "contributed", meaning we accept patches to improve it, but do not create a build that runs it or fix the configuration ourselves (this is basically the current support).
1. Support `pkg-config(1)` for "realsies", meaning we create at least one build with it, and keep it working.
|
process
|
define support level for pkg config based builds the current cmake configuration supports using to discover the flags for our dependencies basically the cmake file calls pkg config to discover what compiler and linker flags are needed to find libcurl or openssl or protobuf in some linux distributions it seems like this is the preferred way to configure dependencies e g gentoo using pkg config i think we have basically three options drop the support for pkg config completely we remove the cmake code that uses pkg config to discover the dependencies support pkg config as contributed meaning we accept patches to improve it but do not create a build that runs it or fix the configuration ourselves this is basically the current support support pkg config for realsies meaning we create at least one build with it and keep it working
| 1
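The flag discovery described above reduces to two `pkg-config(1)` queries per dependency. A minimal Python sketch of those queries (a sketch only, not code from the repository; `libcurl` is just an example package name):
```python
import shlex
import subprocess

def pkg_config_flags(package: str) -> dict:
    """Ask pkg-config(1) for the compile and link flags of one package."""
    def query(flag: str) -> list[str]:
        out = subprocess.run(
            ["pkg-config", flag, package],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        return shlex.split(out)

    # --cflags yields include paths, --libs yields linker inputs.
    return {"cflags": query("--cflags"), "libs": query("--libs")}

# Example: how a build would compile and link against libcurl.
print(pkg_config_flags("libcurl"))  # e.g. {'cflags': [], 'libs': ['-lcurl']}
```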
|
357,289
| 10,604,699,613
|
IssuesEvent
|
2019-10-10 18:46:04
|
OpenLiberty/ci.maven
|
https://api.github.com/repos/OpenLiberty/ci.maven
|
closed
|
liberty:run doesn't deploy the application
|
high priority vNext
|
Per @sdaschner:
> One issue that my updated getting started guide (in the PR) still has is that the minimal example doesn't work with :run, only with :dev OOTB. It doesn't deploy the application (the WAR file) that's being produced by Maven and defined in the server.xml. Calling :deploy explicitly complains that I need to specify the artifact, which I don't want if I just want to use the default convention. Can we just make run behave in the same way as dev -- without debug and listening for file changes?
@gkwan-ibm is interested for the guides too.
|
1.0
|
liberty:run doesn't deploy the application - Per @sdaschner:
> One issue that my updated getting started guide (in the PR) still has is that the minimal example doesn't work with :run, only with :dev OOTB. It doesn't deploy the application (the WAR file) that's being produced by Maven and defined in the server.xml. Calling :deploy explicitly complains that I need to specify the artifact, which I don't want if I just want to use the default convention. Can we just make run behave in the same way as dev -- without debug and listening for file changes?
@gkwan-ibm is interested for the guides too.
|
non_process
|
liberty run doesn t deploy the application per sdaschner one issue that my updated getting started guide in the pr still has is that the minimal example doesn t work with run only with dev ootb it doesn t deploy the application the war file that s being produced by maven and defined in the server xml calling deploy explicitly complains that i need to specify the artifact which i don t want if i just want to use the default convention can we just make run behave in the same way as dev without debug and listening for file changes gkwan ibm is interested for the guides too
| 0
|
423,782
| 28,933,347,745
|
IssuesEvent
|
2023-05-09 02:37:53
|
jaenyeong/Teach_Wanted-PreOnBoarding-Backend-Challenge
|
https://api.github.com/repos/jaenyeong/Teach_Wanted-PreOnBoarding-Backend-Challenge
|
opened
|
[Pre-assignment Submission]
|
documentation
|
# Wanted Pre-onboarding Backend Challenge June course pre-assignment submission
It's fine even if you can't complete every answer :)
Please don't feel pressured and work through it comfortably
## Question
**Have you ever read a Java introductory book ('이것이 자바다', '자바의 정석', etc.) from cover to cover? Please describe what you remember most!**
>
**Have you ever spent 10 minutes or more looking through the official Java documentation? If so, what did you look at?**
>
**Please describe the difference between the interpreted and the compiled approach.**
>
**Please describe the difference between a process and a thread.**
>
**Please describe the definition of the JVM and its memory structure, as far as you know.**
>
**Please pick one of Java's GC algorithms and describe it as far as you know.**
>
**Please describe semaphores as far as you know.**
>
**Please describe Java's `synchronized` as far as you know.**
>
**Is there anything you expect from the course curriculum, or topics you would like it to cover?**
>
**Please feel free to share anything you'd like to say: concerns and questions about company life or life as a developer, what you hope for from the course, and so on!**
>
|
1.0
|
[Pre-assignment Submission] - # Wanted Pre-onboarding Backend Challenge June course pre-assignment submission
It's fine even if you can't complete every answer :)
Please don't feel pressured and work through it comfortably
## Question
**Have you ever read a Java introductory book ('이것이 자바다', '자바의 정석', etc.) from cover to cover? Please describe what you remember most!**
>
**Have you ever spent 10 minutes or more looking through the official Java documentation? If so, what did you look at?**
>
**Please describe the difference between the interpreted and the compiled approach.**
>
**Please describe the difference between a process and a thread.**
>
**Please describe the definition of the JVM and its memory structure, as far as you know.**
>
**Please pick one of Java's GC algorithms and describe it as far as you know.**
>
**Please describe semaphores as far as you know.**
>
**Please describe Java's `synchronized` as far as you know.**
>
**Is there anything you expect from the course curriculum, or topics you would like it to cover?**
>
**Please feel free to share anything you'd like to say: concerns and questions about company life or life as a developer, what you hope for from the course, and so on!**
>
|
non_process
|
wanted pre onboarding backend challenge course pre assignment submission it s fine even if you can t complete every answer please don t feel pressured and work through it comfortably question have you ever read a java introductory book 이것이 자바다 자바의 정석 etc from cover to cover please describe what you remember most have you ever spent minutes or more looking through the official java documentation if so what did you look at please describe the difference between the interpreted and the compiled approach please describe the difference between a process and a thread please describe the definition of the jvm and its memory structure as far as you know please pick one of java s gc algorithms and describe it as far as you know please describe semaphores as far as you know please describe java s synchronized as far as you know is there anything you expect from the course curriculum or topics you would like it to cover please feel free to share anything you d like to say concerns and questions about company life or life as a developer what you hope for from the course and so on
| 0
|
19,467
| 25,762,750,681
|
IssuesEvent
|
2022-12-08 22:05:41
|
IHE/publications
|
https://api.github.com/repos/IHE/publications
|
closed
|
Delayed Document Assembly vol 1 and 2 conflict on hash value
|
CP-processing
|
Volume 1 says hash and size are 0
Volume 2 says size is 0, and hash is the valid hash value of a zero length file.
Recommend that Volume 1 be fixed, as this solution allows hash to always be calculated properly.
https://profiles.ihe.net/ITI/TF/Volume1/ch-10.html#10.2.10
|
1.0
|
Delayed Document Assembly vol 1 and 2 conflict on hash value - Volume 1 says hash and size are 0
Volume 2 says size is 0, and hash is the valid hash value of a zero length file.
Recommend that Volume 1 be fixed, as this solution allows hash to always be calculated properly.
https://profiles.ihe.net/ITI/TF/Volume1/ch-10.html#10.2.10
|
process
|
delayed document assembly vol and conflict on hash value volume says hash and size are volume says size is and hash is the valid hash value of a zero length file recommend that volume be fixed as this solution allows hash to always be calculated properly
| 1
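The conflict above is concrete: assuming the hash attribute is the SHA-1 of the document bytes (an assumption; the profile text should be checked), the hash of a zero-length file is a fixed non-zero digest, so it can never equal the literal 0 that Volume 1 prescribes:
```python
import hashlib

# SHA-1 of zero bytes -- the "valid hash value of a zero length file".
empty_hash = hashlib.sha1(b"").hexdigest()
print(empty_hash)          # da39a3ee5e6b4b0d3255bfef95601890afd80709
print(empty_hash == "0")   # False: Volumes 1 and 2 cannot both be right
```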
|
13,528
| 16,060,961,407
|
IssuesEvent
|
2021-04-23 12:30:06
|
CERT-Polska/drakvuf-sandbox
|
https://api.github.com/repos/CERT-Polska/drakvuf-sandbox
|
closed
|
RVAs of apicalls
|
certpl drakrun/postprocessing enhancement priority:medium
|
I have a list of 1024 apicalls, which come from 36 DLLs.
For each snapshot (or karton task) I need the RVAs of these apicalls inside the DLLs.
I prefer to get these RVAs from karton tasks, I would have everything in one place.
Since there are only 1024 of these functions, it shouldn't be a problem?
But it's not a "must be" for me.
The list of apicalls is here:
https://github.com/danielplohmann/apiscout/blob/master/apiscout/data/winapi1024v1.txt
and the list of DLLs is as follows:
```
['GdiPlus.dll', 'Wldap32.dll', 'advapi32.dll', 'comctl32.dll', 'crypt32.dll', 'dnsapi.dll', 'gdi32.dll', 'imagehlp.dll', 'imm32.dll', 'iphlpapi.dll', 'kernel32.dll', 'mpr.dll', 'msacm32.dll', 'msvcrt.dll', 'netapi32.dll', 'ntdll.dll', 'ole32.dll', 'oleaut32.dll', 'powrprof.dll', 'psapi.dll', 'rpcrt4.dll', 'secur32.dll', 'sensapi.dll', 'shell32.dll', 'shlwapi.dll', 'urlmon.dll', 'user32.dll', 'userenv.dll', 'version.dll', 'winhttp.dll', 'wininet.dll', 'winmm.dll', 'winspool.drv', 'ws2_32.dll', 'wsock32.dll', 'wtsapi32.dll']
```
|
1.0
|
RVAs of apicalls - I have a list of 1024 apicalls, which come from 36 DLLs.
For each snapshot (or karton task) I need the RVAs of these apicalls inside the DLLs.
I prefer to get these RVAs from karton tasks, I would have everything in one place.
Since there are only 1024 of these functions, it shouldn't be a problem?
But it's not a "must be" for me.
The list of apicalls is here:
https://github.com/danielplohmann/apiscout/blob/master/apiscout/data/winapi1024v1.txt
and the list of DLLs is as follows:
```
['GdiPlus.dll', 'Wldap32.dll', 'advapi32.dll', 'comctl32.dll', 'crypt32.dll', 'dnsapi.dll', 'gdi32.dll', 'imagehlp.dll', 'imm32.dll', 'iphlpapi.dll', 'kernel32.dll', 'mpr.dll', 'msacm32.dll', 'msvcrt.dll', 'netapi32.dll', 'ntdll.dll', 'ole32.dll', 'oleaut32.dll', 'powrprof.dll', 'psapi.dll', 'rpcrt4.dll', 'secur32.dll', 'sensapi.dll', 'shell32.dll', 'shlwapi.dll', 'urlmon.dll', 'user32.dll', 'userenv.dll', 'version.dll', 'winhttp.dll', 'wininet.dll', 'winmm.dll', 'winspool.drv', 'ws2_32.dll', 'wsock32.dll', 'wtsapi32.dll']
```
|
process
|
rvas of apicalls i have a list of apicalls which come from dlls for each snapshot or karton task i need the rvas of these apicalls inside the dlls i prefer to get these rvas from karton tasks i would have everything in one place since there are only of these functions it shouldn t be a problem but it s not a must be for me the list of apicalls is here and the list of dlls is as follows
| 1
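One way to extract those RVAs, sketched in Python with the third-party pefile library (the DLL path and function names below are illustrative, not from the task):
```python
import pefile  # third-party: pip install pefile

def export_rvas(dll_path: str, wanted: set[str]) -> dict[str, int]:
    """Map exported function names to their RVAs for one DLL."""
    pe = pefile.PE(dll_path, fast_load=True)
    pe.parse_data_directories(
        directories=[pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_EXPORT"]]
    )
    return {
        sym.name.decode(): sym.address  # sym.address is the export's RVA
        for sym in pe.DIRECTORY_ENTRY_EXPORT.symbols
        if sym.name and sym.name.decode() in wanted
    }

# Illustrative usage against a DLL taken from the snapshot:
print(export_rvas("kernel32.dll", {"CreateFileW", "VirtualAlloc"}))
```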
|
243,830
| 26,289,301,277
|
IssuesEvent
|
2023-01-08 07:35:24
|
kaidisn/encore
|
https://api.github.com/repos/kaidisn/encore
|
closed
|
CVE-2015-9251 (Medium) detected in github.com/golang/tools-gopls/v0.5.0-pre1 - autoclosed
|
security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/tools-gopls/v0.5.0-pre1</b></p></summary>
<p>[mirror] Go Tools</p>
<p>
Dependency Hierarchy:
- :x: **github.com/golang/tools-gopls/v0.5.0-pre1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kaidisn/encore/commit/7f308ee27451ab65e928f17d457ffcc5ece46781">7f308ee27451ab65e928f17d457ffcc5ece46781</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2015-9251 (Medium) detected in github.com/golang/tools-gopls/v0.5.0-pre1 - autoclosed - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/tools-gopls/v0.5.0-pre1</b></p></summary>
<p>[mirror] Go Tools</p>
<p>
Dependency Hierarchy:
- :x: **github.com/golang/tools-gopls/v0.5.0-pre1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kaidisn/encore/commit/7f308ee27451ab65e928f17d457ffcc5ece46781">7f308ee27451ab65e928f17d457ffcc5ece46781</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in github com golang tools gopls autoclosed cve medium severity vulnerability vulnerable library github com golang tools gopls go tools dependency hierarchy x github com golang tools gopls vulnerable library found in head commit a href found in base branch main vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery
| 0
|
52,458
| 7,763,959,254
|
IssuesEvent
|
2018-06-01 18:28:19
|
CICE-Consortium/CICE
|
https://api.github.com/repos/CICE-Consortium/CICE
|
closed
|
document basalstress scheme and ellipse-related changes
|
CICEdyn Documentation
|
We need an overview description of the fast-ice scheme, referencing publication(s), and including full documentation of the new namelist variables. Also describe hwater, bathymetry, and normalization of principal stresses. Remove prs_sig from the namelist index, and add sigP.
|
1.0
|
document basalstress scheme and ellipse-related changes - We need an overview description of the fast-ice scheme, referencing publication(s), and including full documentation of the new namelist variables. Also describe hwater, bathymetry, and normalization of principal stresses. Remove prs_sig from the namelist index, and add sigP.
|
non_process
|
document basalstress scheme and ellipse related changes we need an overview description of the fast ice scheme referencing publication s and including full documentation of the new namelist variables also describe hwater bathymetry and normalization of principal stresses remove prs sig from the namelist index and add sigp
| 0
|
6,354
| 9,414,577,474
|
IssuesEvent
|
2019-04-10 10:28:27
|
meumobi/sitebuilder
|
https://api.github.com/repos/meumobi/sitebuilder
|
opened
|
Extract yt API key from src code
|
process-remote-media
|
### Expected behaviour
Should migrate it to config
### Actual behaviour
API key is hard coded on `./src/sitebuilder/lib/meumobi/sitebuilder/services/ProcessRemoteMedia/YoutubeHandler.php`
### Expected responses
- How to fix it
- How to test
|
1.0
|
Extract yt API key from src code - ### Expected behaviour
Should migrate it to config
### Actual behaviour
API key is hard coded on `./src/sitebuilder/lib/meumobi/sitebuilder/services/ProcessRemoteMedia/YoutubeHandler.php`
### Expected responses
- How to fix it
- How to test
|
process
|
extract yt api key from src code expected behaviour should migrate it to config actual behaviour api key is hard coded on src sitebuilder lib meumobi sitebuilder services processremotemedia youtubehandler php expected responses how to fix it how to test
| 1
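The fix pattern is the usual one: read the key from configuration or the environment instead of the source file. Sketched here in Python rather than the project's PHP; the variable name is illustrative:
```python
import os

# Illustrative variable name -- not taken from the project's config.
YT_API_KEY = os.environ.get("YOUTUBE_API_KEY")
if YT_API_KEY is None:
    raise RuntimeError("YOUTUBE_API_KEY is not configured")
```
Testing then amounts to running once with the variable unset (expect the loud failure) and once with it set (expect the handler to work).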
|
401,621
| 11,795,244,740
|
IssuesEvent
|
2020-03-18 08:34:00
|
thaliawww/concrexit
|
https://api.github.com/repos/thaliawww/concrexit
|
opened
|
Missing singlepages translations
|
priority: low technical change
|
In GitLab by @joren485 on Mar 13, 2020, 16:27
### Description
Add all missing singlepages translations.
After I run `../manage.py makemessages --locale nl --no-obsolete` in `website/singlepages` 170+ translations seem to be missing.
|
1.0
|
Missing singlepages translations - In GitLab by @joren485 on Mar 13, 2020, 16:27
### Description
Add all missing singlepages translations.
After I run `../manage.py makemessages --locale nl --no-obsolete` in `website/singlepages` 170+ translations seem to be missing.
|
non_process
|
missing singlepages translations in gitlab by on mar description add all missing singlepages translations after i run manage py makemessages locale nl no obsolete in website singlepages translations seem to be missing
| 0
|
69,593
| 22,552,759,411
|
IssuesEvent
|
2022-06-27 07:32:48
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Dialog Framework : onHide option is not working
|
:lady_beetle: defect
|
### Describe the bug
`onHide` option is listed in the full list of configuration options at the following URL, but it is not working.
https://primefaces.github.io/primefaces/11_0_0/#/core/dialogframework
### Reproducer
_No response_
### Expected behavior
The process set by the onHide option is executed when the dialog is hidden.
For example, in the following cases, an alert is displayed when the dialog is hidden.
```java
Map<String,Object> options = new HashMap<>();
options.put("onHide", "alert('onHide')");
PrimeFaces.current().dialog().openDynamic("dialog.xhtml", options, null);
```
### PrimeFaces edition
_No response_
### PrimeFaces version
11.0.0
### Theme
_No response_
### JSF implementation
_No response_
### JSF version
_No response_
### Browser(s)
_No response_
|
1.0
|
Dialog Framework : onHide option is not working - ### Describe the bug
`onHide` option is listed in the full list of configuration options at the following URL, but it is not working.
https://primefaces.github.io/primefaces/11_0_0/#/core/dialogframework
### Reproducer
_No response_
### Expected behavior
The process set by the onHide option is executed when the dialog is hidden.
For example, in the following cases, an alert is displayed when the dialog is hidden.
```java
Map<String,Object> options = new HashMap<>();
options.put("onHide", "alert('onHide')");
PrimeFaces.current().dialog().openDynamic("dialog.xhtml", options, null);
```
### PrimeFaces edition
_No response_
### PrimeFaces version
11.0.0
### Theme
_No response_
### JSF implementation
_No response_
### JSF version
_No response_
### Browser(s)
_No response_
|
non_process
|
dialog framework onhide option is not working describe the bug onhide option is listed in the full list of configuration options at the following url but it is not working reproducer no response expected behavior the process set by the onhide option is executed when the dialog is hidden for example in the following cases an alert is displayed when the dialog is hidden java map options new hashmap options put onhide alert onhide primefaces current dialog opendynamic dialog xhtml options null primefaces edition no response primefaces version theme no response jsf implementation no response jsf version no response browser s no response
| 0
|
16,468
| 21,391,667,233
|
IssuesEvent
|
2022-04-21 07:46:04
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Sign in screen > UI issue
|
Bug P1 iOS UI Process: Fixed Process: Tested dev Process: Reopened
|
Sign in screen > 'New User? Sign up' is getting wrapped into two lines
[Note: Issue observed in iPhone XS]

|
3.0
|
[iOS] Sign in screen > UI issue - Sign in screen > 'New User? Sign up' is getting wrapped into two lines
[Note: Issue observed in iPhone XS]

|
process
|
sign in screen ui issue sign in screen new user sign up is getting wrapped into two lines
| 1
|
25,055
| 24,645,576,470
|
IssuesEvent
|
2022-10-17 14:38:58
|
VirtusLab/git-machete
|
https://api.github.com/repos/VirtusLab/git-machete
|
closed
|
Gif in README is too fast (?)
|
docs usability
|
Too much text appears, I tried to show the gif as a demo to my folks and couldn't keep up with what happens 😅
|
True
|
Gif in README is too fast (?) - Too much text appears, I tried to show the gif as a demo to my folks and couldn't keep up with what happens 😅
|
non_process
|
gif in readme is too fast too much text appears i tried to show the gif as a demo to my folks and couldn t keep up with what happens 😅
| 0
|
41,801
| 6,948,570,854
|
IssuesEvent
|
2017-12-06 01:03:56
|
microsoftgraph/microsoft-graph-docs
|
https://api.github.com/repos/microsoftgraph/microsoft-graph-docs
|
closed
|
Hyperlink issue because id="search" is both attached to the section and the search button
|
bug: documentation
|
Issue:
In the documentation "Use query parameters to customize responses", try to click on the `$search` hyperlink from the parameter table. You will notice it will not link to the related paragraph, but to the search button. This is because both the `<h2>` tag of the `$search` section and the search button have the same `id` HTML attribute.
Article:
[https://developer.microsoft.com/en-us/graph/docs/concepts/query_parameters](https://developer.microsoft.com/en-us/graph/docs/concepts/query_parameters)
|
1.0
|
Hyperlink issue because id="search" is both attached to the section and the search button - Issue:
In the documentation "Use query parameters to customize responses", try to click on the `$search` hyperlink from the parameter table. You will notice it will not link to the related paragraph, but to the search button. This is because both the `<h2>` tag of the `$search` section and the search button have the same `id` HTML attribute.
Article:
[https://developer.microsoft.com/en-us/graph/docs/concepts/query_parameters](https://developer.microsoft.com/en-us/graph/docs/concepts/query_parameters)
|
non_process
|
hyperlink issue because id search is both attached to the section and the search button issue in the documentation use query parameters to customize responses try to click on the search hyperlink from the parameter table you will notice it will not link to the related paragraph but to the search button this is because both the tag of the search section and the search button have the same id html attribute article
| 0
|
20,997
| 27,864,133,838
|
IssuesEvent
|
2023-03-21 09:04:18
|
GoogleCloudPlatform/dotnet-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
|
closed
|
Fix Spanner Samples Conflicting Tests
|
type: process priority: p1 api: spanner samples
|
There are a bunch of Spanner samples tests that are sharing a common resource (the Albums table in this case), which is causing flakiness in the tests. To resolve the issue, the conflicting samples tests are updated to use their own dedicated databases.
Some of the conflicting tests include:
- [UpdateDataAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/main/spanner/api/Spanner.Samples.Tests/UpdateDataAsyncTest.cs)
- [UpdateDataWithTimestampColumnTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/main/spanner/api/Spanner.Samples.Tests/UpdateDataWithTimestampColumnTest.cs)
- [UpdateUsingBatchDmlCoreAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/main/spanner/api/Spanner.Samples.Tests/UpdateUsingBatchDmlCoreAsyncTest.cs)
- [UpdateUsingDmlCoreAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/a499f82b027f9c639f881c12a4ee0d40d2d0176e/spanner/api/Spanner.Samples/UpdateUsingDmlCoreAsync.cs)
- [UpdateUsingPartitionedDmlCoreAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/a499f82b027f9c639f881c12a4ee0d40d2d0176e/spanner/api/Spanner.Samples/UpdateUsingPartitionedDmlCoreAsync.cs)
|
1.0
|
Fix Spanner Samples Conflicting Tests - There are a bunch of Spanner samples tests that are sharing a common resource (the Albums table in this case), which is causing flakiness in the tests. To resolve the issue, the conflicting samples tests are updated to use their own dedicated databases.
Some of the conflicting tests include:
- [UpdateDataAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/main/spanner/api/Spanner.Samples.Tests/UpdateDataAsyncTest.cs)
- [UpdateDataWithTimestampColumnTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/main/spanner/api/Spanner.Samples.Tests/UpdateDataWithTimestampColumnTest.cs)
- [UpdateUsingBatchDmlCoreAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/main/spanner/api/Spanner.Samples.Tests/UpdateUsingBatchDmlCoreAsyncTest.cs)
- [UpdateUsingDmlCoreAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/a499f82b027f9c639f881c12a4ee0d40d2d0176e/spanner/api/Spanner.Samples/UpdateUsingDmlCoreAsync.cs)
- [UpdateUsingPartitionedDmlCoreAsyncTest](https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/a499f82b027f9c639f881c12a4ee0d40d2d0176e/spanner/api/Spanner.Samples/UpdateUsingPartitionedDmlCoreAsync.cs)
|
process
|
fix spanner samples conflicting tests there are a bunch of spanner samples tests that are sharing a common resource the albums table in this case which is causing flakiness in the tests to resolve the issue the conflicting samples tests are updated to use their own dedicated databases some of the conflicting tests include
| 1
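The dedicated-database fix is language-agnostic; the samples are C#, but the gist is just a unique database id per test fixture, as in this Python sketch:
```python
import uuid

def dedicated_database_id(prefix: str = "test-db") -> str:
    """A unique Spanner database id per test, so tests stop sharing tables.

    Database ids must start with a letter and may use lowercase letters,
    digits and hyphens, so a short hex suffix is safe.
    """
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

print(dedicated_database_id())  # e.g. test-db-3fa85f64
```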
|
2,640
| 5,415,328,709
|
IssuesEvent
|
2017-03-01 21:19:02
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Native Queries w/ an initial null causes the type of a column to be marked as "type/*"
|
Bug Priority/P2 Query Processor
|
When doing Month over Month (or any time period over previous time period) metrics, the first value is often null.
This makes the end column unchartable, and requires the use of a weirdo subselect
```sql
SELECT * from
(MYREALQUERY) a
WHERE column IS NOT NULL
```
which is kind of ghetto.
|
1.0
|
Native Queries w/ an initial null causes the type of a column to be marked as "type/*" - When doing Month over Month (or any time period over previous time period) metrics, the first value is often null.
This makes the end column unchartable, and requires the use of a weirdo subselect
```sql
SELECT * from
(MYREALQUERY) a
WHERE column IS NOT NULL
```
which is kind of ghetto.
|
process
|
native queries w an initial null causes the type of a column to be marked as type when doing month over month or any time period over previous time period metrics the first value is often null this makes the end column unchartable and requires the use of a weirdo subselect sql select from myrealquery a where column is not null which is kind of ghetto
| 1
|
575,427
| 17,030,894,904
|
IssuesEvent
|
2021-07-04 14:40:32
|
vdjagilev/nmap-formatter
|
https://api.github.com/repos/vdjagilev/nmap-formatter
|
opened
|
Add more use-cases with jq
|
priority/low type/other
|
How to use this tool with jq (show hosts that are up, show only http service ports, show only filtered ports, count ports for each host, et cetera)
|
1.0
|
Add more use-cases with jq - How to use this tool with jq (show hosts that are up, show only http service ports, show only filtered ports, count ports for each host, et cetera)
|
non_process
|
add more use cases with jq how to use this tool with jq show hosts that are up show only http service ports show only filtered ports count ports for each host et cetera
| 0
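Pending proper jq recipes in the docs, the same filters can be sketched in Python; the field names below are assumptions about the tool's JSON output and must be checked against the real schema:
```python
import json

with open("scan.json") as f:  # output of nmap-formatter's json mode
    scan = json.load(f)

# Hosts that are up (field names assumed, verify against the schema).
up = [h for h in scan.get("Host", [])
      if h.get("Status", {}).get("State") == "up"]
print(f"{len(up)} hosts up")
```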
|
12,958
| 15,339,443,135
|
IssuesEvent
|
2021-02-27 02:01:33
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Inputs should not be cleared after reloading in Firefox
|
AREA: client BROWSER: Firefox STATE: Stale SYSTEM: client side processing TYPE: enhancement
|
Firefox preserves values and states of inputs after reloading, but under Hammerhead inputs get empty. E.g. a checked checkbox should remain checked, and text inputs should have the same text.
|
1.0
|
Inputs should not be cleared after reloading in Firefox - Firefox preserves values and states of inputs after reloading, but under Hammerhead inputs get empty. E.g. a checked checkbox should remain checked, and text inputs should have the same text.
|
process
|
inputs should not be cleared after reloading in firefox firefox preserves values and states of inputs after reloading but under hammerhead inputs get empty e g a checked checkbox should remain checked and text inputs should have the same text
| 1
|
22,512
| 31,563,822,793
|
IssuesEvent
|
2023-09-03 15:16:55
|
n0tknowing/chibicc
|
https://api.github.com/repos/n0tknowing/chibicc
|
closed
|
Macro expansion consumes GiB of memory
|
preprocessor
|
Not surprised with the current compiler design.
Here's result from `memusage` and `perf` when run https://github.com/swansontec/map-macro.
`memusage` results:
```
Memory usage summary: heap total: 1141447160, heap peak: 1141349037, stack peak: 4768
total calls total memory failed calls
malloc| 86 20485 0
realloc| 12 24 0 (nomove:11, dec:10, free:0)
calloc| 69531994 1141426651 0
free| 19 18744
Histogram for block sizes:
0-15 72 <1%
16-31 69273163 99% ==================================================
32-47 1615 <1%
48-63 78 <1%
64-79 3 <1%
112-127 5 <1%
128-143 257121 <1%
160-175 1 <1%
384-399 2 <1%
464-479 3 <1%
496-511 10 <1%
528-543 1 <1%
768-783 1 <1%
1024-1039 1 <1%
1536-1551 1 <1%
1920-1935 1 <1%
3072-3087 1 <1%
4096-4111 3 <1%
8192-8207 10 <1%
```
For the slowness, it's mostly from the allocator when doing hideset stuff, `perf` reports:
```
Overhead Command Shared Object Symbol
45.06% chibicc libc.so.6 [.] _int_malloc
25.37% chibicc libc.so.6 [.] __libc_calloc
9.42% chibicc libc.so.6 [.] __mcount_internal
6.68% chibicc libc.so.6 [.] _mcount
5.70% chibicc chibicc [.] new_hideset
2.17% chibicc chibicc [.] hideset_union
1.17% chibicc [unknown] [k] 0xffffffff986012b0
1.06% chibicc libc.so.6 [.] alloc_perturb
1.05% chibicc libc.so.6 [.] __strlen_sse2
0.65% chibicc chibicc [.] hideset_contains
0.61% chibicc chibicc [.] calloc@plt
0.37% chibicc libc.so.6 [.] __strncmp_sse42
0.07% chibicc chibicc [.] copy_token
0.06% chibicc libc.so.6 [.] sysmalloc
0.06% chibicc chibicc [.] expand_macro
0.06% chibicc chibicc [.] equal
0.06% chibicc chibicc [.] strlen@plt
0.05% chibicc chibicc [.] preprocess2
0.04% chibicc chibicc [.] add_hideset
0.04% chibicc libc.so.6 [.] __memset_sse2_unaligned_erms
0.03% chibicc libc.so.6 [.] __default_morecore@GLIBC_2.2.5
0.02% chibicc libc.so.6 [.] __memset_sse2_unaligned
0.02% chibicc chibicc [.] fnv_hash
0.01% chibicc chibicc [.] strncmp@plt
0.01% chibicc chibicc [.] append
```
|
1.0
|
Macro expansion consumes GiB of memory - Not surprised with the current compiler design.
Here's result from `memusage` and `perf` when run https://github.com/swansontec/map-macro.
`memusage` results:
```
Memory usage summary: heap total: 1141447160, heap peak: 1141349037, stack peak: 4768
total calls total memory failed calls
malloc| 86 20485 0
realloc| 12 24 0 (nomove:11, dec:10, free:0)
calloc| 69531994 1141426651 0
free| 19 18744
Histogram for block sizes:
0-15 72 <1%
16-31 69273163 99% ==================================================
32-47 1615 <1%
48-63 78 <1%
64-79 3 <1%
112-127 5 <1%
128-143 257121 <1%
160-175 1 <1%
384-399 2 <1%
464-479 3 <1%
496-511 10 <1%
528-543 1 <1%
768-783 1 <1%
1024-1039 1 <1%
1536-1551 1 <1%
1920-1935 1 <1%
3072-3087 1 <1%
4096-4111 3 <1%
8192-8207 10 <1%
```
For the slowness, it's mostly from the allocator when doing hideset stuff, `perf` reports:
```
Overhead Command Shared Object Symbol
45.06% chibicc libc.so.6 [.] _int_malloc
25.37% chibicc libc.so.6 [.] __libc_calloc
9.42% chibicc libc.so.6 [.] __mcount_internal
6.68% chibicc libc.so.6 [.] _mcount
5.70% chibicc chibicc [.] new_hideset
2.17% chibicc chibicc [.] hideset_union
1.17% chibicc [unknown] [k] 0xffffffff986012b0
1.06% chibicc libc.so.6 [.] alloc_perturb
1.05% chibicc libc.so.6 [.] __strlen_sse2
0.65% chibicc chibicc [.] hideset_contains
0.61% chibicc chibicc [.] calloc@plt
0.37% chibicc libc.so.6 [.] __strncmp_sse42
0.07% chibicc chibicc [.] copy_token
0.06% chibicc libc.so.6 [.] sysmalloc
0.06% chibicc chibicc [.] expand_macro
0.06% chibicc chibicc [.] equal
0.06% chibicc chibicc [.] strlen@plt
0.05% chibicc chibicc [.] preprocess2
0.04% chibicc chibicc [.] add_hideset
0.04% chibicc libc.so.6 [.] __memset_sse2_unaligned_erms
0.03% chibicc libc.so.6 [.] __default_morecore@GLIBC_2.2.5
0.02% chibicc libc.so.6 [.] __memset_sse2_unaligned
0.02% chibicc chibicc [.] fnv_hash
0.01% chibicc chibicc [.] strncmp@plt
0.01% chibicc chibicc [.] append
```
|
process
|
macro expansion consumes gib of memory not surprised with the current compiler design here s result from memusage and perf when run memusage results memory usage summary heap total heap peak stack peak total calls total memory failed calls malloc realloc nomove dec free calloc free histogram for block sizes for the slowness it s mostly from the allocator when doing hideset stuff perf reports overhead command shared object symbol chibicc libc so int malloc chibicc libc so libc calloc chibicc libc so mcount internal chibicc libc so mcount chibicc chibicc new hideset chibicc chibicc hideset union chibicc chibicc libc so alloc perturb chibicc libc so strlen chibicc chibicc hideset contains chibicc chibicc calloc plt chibicc libc so strncmp chibicc chibicc copy token chibicc libc so sysmalloc chibicc chibicc expand macro chibicc chibicc equal chibicc chibicc strlen plt chibicc chibicc chibicc chibicc add hideset chibicc libc so memset unaligned erms chibicc libc so default morecore glibc chibicc libc so memset unaligned chibicc chibicc fnv hash chibicc chibicc strncmp plt chibicc chibicc append
| 1
|
21,713
| 30,214,563,829
|
IssuesEvent
|
2023-07-05 14:47:00
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] Port summarization sidebar GUI
|
.Frontend Querying/GUI .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
We need to port this UI for aggregations and breakouts:
<img width="309" alt="image" src="https://github.com/metabase/metabase/assets/1455846/c341c0ce-ef11-41be-9e08-ea639b7d9625">
You get here by clicking the `Summarize` button in the upper-right when looking at query results.
We should probably sequence this after we finish the notebook editor versions of this stuff #30509 and #30511
|
1.0
|
[MLv2] Port summarization sidebar GUI - We need to port this UI for aggregations and breakouts:
<img width="309" alt="image" src="https://github.com/metabase/metabase/assets/1455846/c341c0ce-ef11-41be-9e08-ea639b7d9625">
You get here by clicking the `Summarize` button in the upper-right when looking at query results.
We should probably sequence this after we finish the notebook editor versions of this stuff #30509 and #30511
|
process
|
port summarization sidebar gui we need to port this ui for aggregations and breakouts img width alt image src you get here by clicking the summarize button in the upper right when looking at query results we should probably sequence this after we finish the notebook editor versions of this stuff and
| 1
|
8,718
| 11,855,127,448
|
IssuesEvent
|
2020-03-25 03:11:19
|
kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines
|
closed
|
Released code in PyPI doesn't match tagged version + release from github
|
area/release kind/bug kind/process priority/p1 status/triaged
|
### What steps did you take:
I'm installing the kubeflow pipelines sdk (`kfp=0.2.5`) through the conda feedstock https://github.com/conda-forge/kfp-feedstock/blob/master/recipe/meta.yaml , which just repackages the archive from https://pypi.org/project/kfp/#files . However, the contents of that archive do not match the contents of the github-released archive from https://github.com/kubeflow/pipelines/releases/tag/0.2.5
### What happened:
The source code installed by the conda feedstock -> pypi package pathway is different from the github release. In particular, I'm looking at https://github.com/kubeflow/pipelines/blame/0.2.5/sdk/python/kfp/_client.py (which has the changes from https://github.com/kubeflow/pipelines/pull/3173 ). The version of this package in PyPI has an older version that hardcodes the `cron_schedule` in `schedule_pipeline`.
You can see the difference in `_client.py` by downloading the archives from those sources and comparing them.
### What did you expect to happen:
Version 0.2.5 should be the same across github and pypi.
/kind bug
// /area sdk
|
1.0
|
Released code in PyPI doesn't match tagged version + release from github - ### What steps did you take:
I'm installing the kubeflow pipelines sdk (`kfp=0.2.5`) through the conda feedstock https://github.com/conda-forge/kfp-feedstock/blob/master/recipe/meta.yaml , which just repackages the archive from https://pypi.org/project/kfp/#files . However, the contents of that archive do not match the contents of the github-released archive from https://github.com/kubeflow/pipelines/releases/tag/0.2.5
### What happened:
The source code installed by the conda feedstock -> pypi package pathway is different from the github release. In particular, I'm looking at https://github.com/kubeflow/pipelines/blame/0.2.5/sdk/python/kfp/_client.py (which has the changes from https://github.com/kubeflow/pipelines/pull/3173 ). The version of this package in PyPI has an older version that hardcodes the `cron_schedule` in `schedule_pipeline`.
You can see the difference in `_client.py` by downloading the archives from those sources and comparing them.
### What did you expect to happen:
Version 0.2.5 should be the same across github and pypi.
/kind bug
// /area sdk
|
process
|
released code in pypi doesn t match tagged version release from github what steps did you take i m installing the kubeflow pipelines sdk kfp through the conda feedstock which just repackages the archive from however the contents of that archive do not match the contents of the github released archive from what happened the source code installed by the conda feedstock pypi package pathway is different from the github release in particular i m looking at which has the changes from the version of this package in pypi has an older version that hardcodes the cron schedule in schedule pipeline you can see the difference in client py by downloading the archives from those sources and comparing them what did you expect to happen version should be the same across github and pypi kind bug area sdk
| 1
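The mismatch can be demonstrated mechanically by diffing the same file out of both archives, as in this sketch (local paths assume both archives were already downloaded and unpacked; the sdist layout is an assumption):
```python
import difflib
from pathlib import Path

pypi = Path("kfp-0.2.5/kfp/_client.py").read_text().splitlines()
gh = Path("pipelines-0.2.5/sdk/python/kfp/_client.py").read_text().splitlines()

for line in difflib.unified_diff(pypi, gh, "pypi/_client.py",
                                 "github/_client.py", lineterm=""):
    print(line)  # non-empty output proves the archives differ
```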
|
9,064
| 12,138,294,811
|
IssuesEvent
|
2020-04-23 17:00:47
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
CircleCI exits as passing when Cypress tests have failed in our development
|
priority: high❗️ process: tests stage: needs review
|
### Current behavior:
For our own internal tests, sometimes the CircleCI job will pass even though there was a failure within a Cypress test. The CircleCI is falsely passing. There is some suspicion that maybe `lerna` is suppressing the exit code.
Example: https://github.com/cypress-io/cypress/pull/7014
CircleCI Jobs that should have failed but didn't (these are just random ones I clicked on):
- https://circleci.com/gh/cypress-io/cypress/302013
- https://circleci.com/gh/cypress-io/cypress/302036
- https://circleci.com/gh/cypress-io/cypress/302011
- https://circleci.com/gh/cypress-io/cypress/302019
<img width="770" alt="Screen Shot 2020-04-16 at 1 44 45 PM" src="https://user-images.githubusercontent.com/1271364/79425948-762ca900-7fe8-11ea-87ff-f914ab8432ba.png">

### Desired behavior:
CircleCI jobs to fail when any tests fail.
|
1.0
|
CircleCI exits as passing when Cypress tests have failed in our development - ### Current behavior:
For our own internal tests, sometimes the CircleCI job will pass even though there was a failure within a Cypress test. The CircleCI is falsely passing. There is some suspicion that maybe `lerna` is suppressing the exit code.
Example: https://github.com/cypress-io/cypress/pull/7014
CircleCI Jobs that should have failed but didn't (these are just random ones I clicked on):
- https://circleci.com/gh/cypress-io/cypress/302013
- https://circleci.com/gh/cypress-io/cypress/302036
- https://circleci.com/gh/cypress-io/cypress/302011
- https://circleci.com/gh/cypress-io/cypress/302019
<img width="770" alt="Screen Shot 2020-04-16 at 1 44 45 PM" src="https://user-images.githubusercontent.com/1271364/79425948-762ca900-7fe8-11ea-87ff-f914ab8432ba.png">

### Desired behavior:
CircleCI jobs to fail when any tests fail.
|
process
|
circleci exits as passing when cypress tests have failed in our development current behavior for our own internal tests sometimes the circleci job will pass even though there was a failure within a cypress test the circleci is falsely passing there is some suspicion that maybe lerna is suppressing the exit code example circleci jobs that should have failed but didn t these are just random ones i clicked on img width alt screen shot at pm src desired behavior circleci jobs to fail when any tests fail
| 1
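Whatever wrapper is at fault, the repair is the same: the CI step must re-raise the child's exit status verbatim, as in this Python sketch (the command is illustrative):
```python
import subprocess
import sys

result = subprocess.run(["yarn", "test"])  # illustrative test command
sys.exit(result.returncode)  # propagate failure so CI cannot report green
```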
|
219,472
| 16,832,996,760
|
IssuesEvent
|
2021-06-18 08:14:42
|
ihrapsa/KlipperWrt
|
https://api.github.com/repos/ihrapsa/KlipperWrt
|
opened
|
[guide] Add timelapse instructions
|
documentation enhancement
|
Update guide with instructions to install timelapse moonraker component + dependencies
|
1.0
|
[guide] Add timelapse instructions - Update guide with instructions to install timelapse moonraker component + dependencies
|
non_process
|
add timelapse instructions update guide with instructions to install timelapse moonraker component dependencies
| 0
|
20,314
| 26,957,913,079
|
IssuesEvent
|
2023-02-08 16:05:14
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
increase minimum version of google-cloud-core to 1.6.0
|
api: bigquery status: blocked type: process priority: p3
|
Once enough time has passed to give people time to upgrade (TBD how long that is), we should increase the minimum version and clean up any logic that switches based on package version that was added in https://github.com/googleapis/python-bigquery/pull/492.
|
1.0
|
increase minimum version of google-cloud-core to 1.6.0 - Once enough time has passed to give people time to upgrade (TBD how long that is), we should increase the minimum version and clean up any logic that switches based on package version that was added in https://github.com/googleapis/python-bigquery/pull/492.
|
process
|
increase minimum version of google cloud core to once enough time has passed to give people time to upgrade tbd how long that is we should increase the minimum version and clean up any logic that switches based on package version that was added in
| 1
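The kind of switch to delete later looks roughly like this sketch (a generic pattern, not the actual code from the linked PR):
```python
from importlib.metadata import version
from packaging.version import Version

# Runtime branch that becomes dead code once >=1.6.0 is the floor.
if Version(version("google-cloud-core")) >= Version("1.6.0"):
    use_new_path = True    # behaviour available from 1.6.0 onwards
else:
    use_new_path = False   # legacy fallback for older installs
```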
|
17,215
| 22,822,317,082
|
IssuesEvent
|
2022-07-12 04:31:04
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Python 3.6 Out of Support but listed as "recommended". Does this need to be updated?
|
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri1
|
Hi, Python 3.6 has been out of support for quite some time. Why is this Python unsupported version recommended per:
https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types#advantages-3:~:text=For%20Python%203%20Hybrid%20jobs%20on%20Linux%20machines%2C%20we%20depend%20on%20the%20Python%203%20version%20installed%20on%20the%20machine%20to%20run%20DSC%20OMSConfig%20and%20the%20Linux%20Hybrid%20Worker.%20We%20recommend%20installing%203.6%20on%20Linux%20machines.
Thank you. -Mark
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8081200f-2bf4-db58-c957-c8ab7af5f90b
* Version Independent ID: b135cf1a-c391-03e5-41e7-e13571351e91
* Content: [Azure Automation runbook types](https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types#advantages-3)
* Content Source: [articles/automation/automation-runbook-types.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-runbook-types.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **sudhirsneha**
|
1.0
|
Python 3.6 Out of Support but listed as "recommended". Does this need to be updated? - Hi, Python 3.6 has been out of support for quite some time. Why is this Python unsupported version recommended per:
https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types#advantages-3:~:text=For%20Python%203%20Hybrid%20jobs%20on%20Linux%20machines%2C%20we%20depend%20on%20the%20Python%203%20version%20installed%20on%20the%20machine%20to%20run%20DSC%20OMSConfig%20and%20the%20Linux%20Hybrid%20Worker.%20We%20recommend%20installing%203.6%20on%20Linux%20machines.
Thank you. -Mark
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8081200f-2bf4-db58-c957-c8ab7af5f90b
* Version Independent ID: b135cf1a-c391-03e5-41e7-e13571351e91
* Content: [Azure Automation runbook types](https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types#advantages-3)
* Content Source: [articles/automation/automation-runbook-types.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-runbook-types.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **sudhirsneha**
|
process
|
python out of support but listed as recommended does this need to be updated hi python has been out of support for quite some time why is this python unsupported version recommended per thank you mark document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias sudhirsneha
| 1
|
21,912
| 30,441,307,842
|
IssuesEvent
|
2023-07-15 05:00:31
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Revert change to Bibliography style updating?
|
Word Processor Integration Regression
|
https://forums.zotero.org/discussion/106215/zotero-7-refreshing-the-bibliography-will-overwrite-the-bibliography-style-in-word
It seemed like that code was there for a reason…
|
1.0
|
Revert change to Bibliography style updating? - https://forums.zotero.org/discussion/106215/zotero-7-refreshing-the-bibliography-will-overwrite-the-bibliography-style-in-word
It seemed like that code was there for a reason…
|
process
|
revert change to bibliography style updating it seemed like that code was there for a reason…
| 1
|
9,337
| 12,341,256,900
|
IssuesEvent
|
2020-05-14 21:31:52
|
paul-buerkner/brms
|
https://api.github.com/repos/paul-buerkner/brms
|
closed
|
emmeans interface
|
feature post-processing
|
I was alerted to some discussion on interfacing to the **emmeans** package in the Google group. I was going to contribute to the discussion there, but (based on browser warnings) was unable to find a way to securely log in. So I will comment on a few things here, in hopes that it will clarify some things
> **emmeans** primarily does frequentist analysis (even for Bayesian models) and I don’t want to actively support that kind of analysis.
I admit that **emmeans** places a lot of emphasis on frequentist methods, but I want to adequately support Bayesian approaches as well.
> Also, most of emmeans summaries are frequentist ... there is some mcmc functionality implemented, but more as a side track than as a main feature I feel.
Please enlighten me. My impression has been that Bayesians already have the tools they need in packages like **bayesplot**, **coda**, etc., to do what they want, once they get a posterior sample; and that I do provide. So can you give me an idea of what else I could provide? I’d be happy to consider additional or alternative methods that are needed to make it more appealing to Bayesians. Perhaps the `summary()` method should display something different. I can believe that; describe what you'd like to see.
> I have been reading the documentation of extending **emmeans**. The problem I see is that we can only support a very minor class of **brms** models with **emmeans**, which implies a lot of special case coding and checking.
Is this really true? Most of what **emmeans** does for Bayesian models is very, very simple in concept: It just calculates a posterior sample of predictions at each node in the reference grid, which the user may then access via `as.mcmc()`. I have hardly looked at this package yet, but I find it pretty difficult to believe that it would be hard to support a good share of the models, assuming they are based on linear or generalized linear models -- especially since you don't really want all those frequentist results like variances and covariances. All it really needs to do is create a model matrix for the reference grid. One of the real advantages of Bayesian methods is that you don't get into thorny issues like standard errors and degrees of freedom. I will take a look and see what I can do easily; and report back.
|
1.0
|
emmeans interface - I was alerted to some discussion on interfacing to the **emmeans** package in the Google group. I was going to contribute to the discussion there, but (based on browser warnings) was unable to find a way to securely log in. So I will comment on a few things here, in hopes that it will clarify some things
> **emmeans** primarily does frequentist analysis (even for Bayesian models) and I don’t want to actively support that kind of analysis.
I admit that **emmeans** places a lot of emphasis on frequentist methods, but I want to adequately support Bayesian approaches as well.
> Also, most of emmeans summaries are frequentist ... there is some mcmc functionality implemented, but more as a side track than as a main feature I feel.
Please enlighten me. My impression has been that Bayesians already have the tools they need in packages like **bayesplot**, **coda**, etc., to do what they want, once they get a posterior sample; and that I do provide. So can you give me an idea of what else I could provide? I’d be happy to consider additional or alternative methods that are needed to make it more appealing to Bayesians. Perhaps the `summary()` method should display something different. I can believe that; describe what you'd like to see.
> I have been reading the documentation of extending **emmeans**. The problem I see is that we can only support a very minor class of **brms** models with **emmeans**, which implies a lot of special case coding and checking.
Is this really true? Most of what **emmeans** does for Bayesian models is very, very simple in concept: It just calculates a posterior sample of predictions at each node in the reference grid, which the user may then access via `as.mcmc()`. I have hardly looked at this package yet, but I find it pretty difficult to believe that it would be hard to support a good share of the models, assuming they are based on linear or generalized linear models -- especially since you don't really want all those frequentist results like variances and covariances. All it really needs to do is create a model matrix for the reference grid. One of the real advantages of Bayesian methods is that you don't get into thorny issues like standard errors and degrees of freedom. I will take a look and see what I can do easily; and report back.
|
process
|
emmeans interface i was alerted to some discussion on interfacing to the emmeans package in the google group i was going to contribute to the discussion there but based on browser warnings was unable to find a way to securely log in so i will comment on a few things here in hopes that it will clarify some things emmeans primarily does frequentist analysis even for bayesian models and i don’t want to actively support that kind of analysis i admit that emmeans places a lot of emphasis on frequentist methods but i want to adequately support bayesian approaches as well also most of emmeans summaries are frequentist there is some mcmc functionality implemented but more as a side track than as a main feature i feel please enlighten me my impression has been that bayesians already have the tools they need in packages like bayesplot coda etc to do what they want once they get a posterior sample and that i do provide so can you give me an idea of what else i could provide i’d be happy to consider additional or alternative methods that are needed to make it more appealing to bayesians perhaps the summary method should display something different i can believe that describe what you d like to see i have been reading the documentation of extending emmeans the problem i see is that we can only support a very minor class of brms models with emmeans which implies a lot of special case coding and checking is this really true most of what emmeans does for bayesian models is very very simple in concept it just calculates a posterior sample of predictions at each node in the reference grid which the user may then access via as mcmc i have hardly looked at this package yet but i find it pretty difficult to believe that it would be hard to support a good share of the models assuming they are based on linear or generalized linear models especially since you don t really want all those frequentist results like variances and covariances all it really needs to do is create a model matrix for the reference grid one of the real advantages of bayesian methods is that you don t get into thorny issues like standard errors and degrees of freedom i will take a look and see what i can do easily and report back
| 1
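The core claim in the emmeans record above is that Bayesian support mostly reduces to multiplying a posterior sample of coefficients by a model matrix built for the reference grid. A minimal numpy sketch of that idea follows; all names and shapes are illustrative assumptions, not the actual emmeans or brms internals.

```python
import numpy as np

# Hypothetical shapes: `coef_draws` is an (n_draws, p) array of posterior
# coefficient samples, `X_grid` is a (g, p) model matrix with one row per
# reference-grid node. Both names are stand-ins, not emmeans/brms API.
rng = np.random.default_rng(0)
coef_draws = rng.normal(size=(4000, 3))   # stand-in posterior sample
X_grid = np.array([[1.0, 0.0, 0.0],       # node 1 of the reference grid
                   [1.0, 1.0, 0.0],       # node 2
                   [1.0, 0.0, 1.0]])      # node 3

# Posterior sample of the linear predictor at every grid node:
# each row is one draw, each column one node.
pred_draws = coef_draws @ X_grid.T        # shape (4000, 3)

# Per-node posterior summaries (the kind of thing summary() might report).
print(pred_draws.mean(axis=0))
print(np.percentile(pred_draws, [2.5, 97.5], axis=0))
```

The raw `pred_draws` matrix is exactly what could then be handed off to coda/bayesplot-style tooling, which is the division of labor the discussion above suggests.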
|
7,251
| 10,418,460,370
|
IssuesEvent
|
2019-09-15 08:49:52
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
add geometry attributes fails on multipoint layers
|
Bug Crash/Data Corruption High Priority Processing
|
Author Name: **Alain FERRATON** (@FERRATON)
Original Redmine Issue: [21352](https://issues.qgis.org/issues/21352)
Affected QGIS version: 3.7(master)
Redmine category:processing/qgis
Assignee: Nyall Dawson
---
see attached layer.
the algorithm works only if you first apply
"multipart to single parts".
the algorithm works in all cases with QGIS 2.16
---
- [multipoint.zip](https://issues.qgis.org/attachments/download/14396/multipoint.zip) (Alain FERRATON)
|
1.0
|
add geometry attributes fails on multipoint layers - Author Name: **Alain FERRATON** (@FERRATON)
Original Redmine Issue: [21352](https://issues.qgis.org/issues/21352)
Affected QGIS version: 3.7(master)
Redmine category:processing/qgis
Assignee: Nyall Dawson
---
see attached layer.
the algorithm works only if you first apply
"multipart to single parts".
the algorithm works in all cases with QGIS 2.16
---
- [multipoint.zip](https://issues.qgis.org/attachments/download/14396/multipoint.zip) (Alain FERRATON)
|
process
|
add geometry attributes fails on multipoint layers author name alain ferraton ferraton original redmine issue affected qgis version master redmine category processing qgis assignee nyall dawson see attached layer the algorithm works if you use before multipart to single part algorithm works in all case with qgis alain ferraton
| 1
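The workaround named in the QGIS record above (run "multipart to single parts" before "add geometry attributes") can be scripted from the QGIS Python console. A hedged sketch follows: the algorithm ids and parameter keys are taken from QGIS 3.x Processing and should be treated as assumptions, and `multipoint.shp` stands in for the attached layer.

```python
# Run inside the QGIS Python console, where `processing` is available.
import processing

# Step 1 (the workaround): split multipart features into single parts.
single = processing.run(
    "native:multiparttosingleparts",
    {"INPUT": "multipoint.shp", "OUTPUT": "memory:"},
)["OUTPUT"]

# Step 2: the "Add geometry attributes" algorithm now succeeds.
# CALC_METHOD 0 = layer CRS (assumed parameter encoding).
result = processing.run(
    "qgis:exportaddgeometrycolumns",
    {"INPUT": single, "CALC_METHOD": 0, "OUTPUT": "memory:"},
)["OUTPUT"]
```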
|
798,329
| 28,244,366,152
|
IssuesEvent
|
2023-04-06 09:35:56
|
ppy/osu-web
|
https://api.github.com/repos/ppy/osu-web
|
closed
|
Add the ability to edit online tags
|
area:admin area:beatmap-info priority:2
|
Online tags, only editable by NAT/GMT on the old website, are used to add additional tags to Ranked and Loved maps to improve searchability without altering the status of the map. We currently have to go through the old website to do that, which is a hassle.
It'd be really nice to have a basic online tags editor available for NAT/GMT on the current website.
|
1.0
|
Add the ability to edit online tags - Online tags, only editable by NAT/GMT on the old website, are used to add additional tags to Ranked and Loved maps to improve searchability without altering the status of the map. We currently have to go through the old website to do that, which is a hassle.
It'd be really nice to have a basic online tags editor available for NAT/GMT on the current website.
|
non_process
|
add the ability to edit online tags online tags only editable by nat gmt on old website are used to add additional tags to ranked and loved maps to improve searchability without altering the status of the map we currently have to go through the old website to do that which is a hassle it d be really nice to have a basic online tags editor available for nat gmt on the current website
| 0
|
47,726
| 25,159,151,802
|
IssuesEvent
|
2022-11-10 15:35:02
|
WHOIGit/ifcbdb
|
https://api.github.com/repos/WHOIGit/ifcbdb
|
closed
|
change default for include_coordinates on api_bin to "false"
|
bug performance
|
The default behavior causes a mosaic generation backlog when scripts call `/api/bin`; the UI might be relying on that default instead of explicitly setting include_coordinates to "true", so a modification would be needed there as well.
https://github.com/WHOIGit/ifcbdb/blob/dfe8798eaf1a5bc87e526089426c8650849b2a8f/ifcbdb/dashboard/views.py#L920
|
True
|
change default for include_coordinates on api_bin to "false" - The default behavior causes a mosaic generation backlog when scripts call `/api/bin`; the UI might be relying on that default instead of explicitly setting include_coordinates to "true", so a modification would be needed there as well.
https://github.com/WHOIGit/ifcbdb/blob/dfe8798eaf1a5bc87e526089426c8650849b2a8f/ifcbdb/dashboard/views.py#L920
|
non_process
|
change default for include coordinates on api bin to false the default behavior which the ui might be using instead of explicitly setting include coordinates to true so a mod would be needed there as well causes a mosaic generation backlog when scripts are calling api bin
| 0
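The fix described in the ifcbdb record above amounts to flipping a query-parameter default in the `/api/bin` view. The sketch below shows the shape of that change in a minimal Flask-style view; the real project is Django (see the linked `views.py`), so the view body and payload here are purely illustrative assumptions.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/bin")
def api_bin():
    # Default flipped to "false": scripts hitting /api/bin no longer trigger
    # the expensive coordinate/mosaic work unless they opt in explicitly.
    include_coordinates = request.args.get("include_coordinates", "false") == "true"
    payload = {"bin": "example"}
    if include_coordinates:
        payload["coordinates"] = []  # placeholder for the expensive lookup
    return jsonify(payload)
```

With this default, the UI would have to request `?include_coordinates=true` explicitly, which is the companion modification the record mentions.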
|
25,988
| 4,187,812,840
|
IssuesEvent
|
2016-06-23 18:39:51
|
metadatacenter/cedar-project
|
https://api.github.com/repos/metadatacenter/cedar-project
|
closed
|
Develop and implement test plan for all REST calls
|
test
|
Based on the comprehensive documentation of all REST calls in metadatacenter/cedar-template-server#2, create GitHub issues for each set of tests. The goal is to have tests for every REST call (PUT/POST/DELETE/GET) and all parameter combinations.
|
1.0
|
Develop and implement test plan for all REST calls - Based on the comprehensive documentation of all REST calls in metadatacenter/cedar-template-server#2, create GitHub issues for each set of tests. The goal is to have tests for every REST call (PUT/POST/DELETE/GET) and all parameter combinations.
|
non_process
|
develop and implement test plan for all rest calls based on comprehensive documentation of all rest call in metadatacenter cedar template server create github issues for each set of tests goal is to have a tests for every rest call put post delete get and parameter combinations
| 0
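One way to realize "a test for every REST call and parameter combination", as the CEDAR record above asks, is a parameterized test matrix. The pytest sketch below shows that shape; the base URL, endpoint paths, payloads, and the assertion are placeholders, not the actual CEDAR template-server routes.

```python
import pytest
import requests

BASE_URL = "http://localhost:8080"  # hypothetical test deployment

# One row per (method, path, body) combination; extend for each endpoint
# and parameter combination the documentation enumerates.
CASES = [
    ("GET",    "/templates",   None),
    ("POST",   "/templates",   {"name": "t1"}),
    ("PUT",    "/templates/1", {"name": "t1-renamed"}),
    ("DELETE", "/templates/1", None),
]

@pytest.mark.parametrize("method,path,body", CASES)
def test_rest_call(method, path, body):
    resp = requests.request(method, BASE_URL + path, json=body)
    assert resp.status_code < 500  # placeholder per-call assertion
```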
|
5,379
| 8,205,082,425
|
IssuesEvent
|
2018-09-03 08:59:55
|
openvstorage/framework-cinder-plugin
|
https://api.github.com/repos/openvstorage/framework-cinder-plugin
|
closed
|
Cinder / nova plugin loose ends
|
process_wontfix
|
Follow up on https://github.com/openvstorage/framework-cinder-plugin/issues/17 where features of cinder backup and nova migration are not implemented/tested, so the following items need to be taken into account when continuing the work on this.
- [ ] Package the OVS FWK rest client
- [ ] Think about possible integration of OVS template capability
- [ ] Upload volume to image: investigate why the removal of the image from the local path immediately after upload to glance seems to be nothing more than a reference in glance. Is the upload async or done with wrong params?
- [ ] Support for change volume type? Purely OVS wise this would mean the volume has to end up on another vpool, when different vpools would use the same backend the best integration would be a volume detach/attach to namespace operation. In all other cases we require a full data copy.
- [ ] Nova snapshot fails with a not implemented error. Needs to be investigated further(libvirt or qemu) and see what we can do.
|
1.0
|
Cinder / nova plugin loose ends - Follow up on https://github.com/openvstorage/framework-cinder-plugin/issues/17 where features of cinder backup and nova migration are not implemented/tested, so the following items need to be taken into account when continuing the work on this.
- [ ] Package the OVS FWK rest client
- [ ] Think about possible integration of OVS template capability
- [ ] Upload volume to image: investigate why the removal of the image from the local path immediately after upload to glance seems to be nothing more than a reference in glance. Is the upload async or done with wrong params?
- [ ] Support for change volume type? Purely OVS wise this would mean the volume has to end up on another vpool, when different vpools would use the same backend the best integration would be a volume detach/attach to namespace operation. In all other cases we require a full data copy.
- [ ] Nova snapshot fails with a not implemented error. Needs to be investigated further(libvirt or qemu) and see what we can do.
|
process
|
cinder nova plugin loose ends follow up on where features of cinder backup and nova migration are not implemented tested so following items need to be taken into account when continuing the work on this package the ovs fwk rest client think about possible integration of ovs template capability upload volume to image investigate why the removal of the image from the local path immediately after upload to glance seems to be nothing more than a reference in glance is the upload async or done with wrong params support for change volume type purely ovs wise this would mean the volume has to end up on another vpool when different vpools would use the same backend the best integration would be a volume detach attach to namespace operation in all other cases we require a full data copy nova snapshot fails with a not implemented error needs to be investigated further libvirt or qemu and see what we can do
| 1
|
9,470
| 12,466,468,162
|
IssuesEvent
|
2020-05-28 15:31:22
|
Fracappo87/XBTs_classification
|
https://api.github.com/repos/Fracappo87/XBTs_classification
|
closed
|
Add bad data filtering and non-standard name mappings to preprocessing
|
preprocessing
|
Initial data exploration has shown some previously unknown issues with the data. There are not many but they do cause problems for the explorations and classification code that are not easily resolved automatically. For now therefore we can simply exclude observations with these issues, so the code runs. Current issues:
* bad dates - some dates have a bad day of the month recorded
* negative depths - some profiles have a negative depth recorded. It is not clear what this means and further clarification has been sought from domain experts. For now we can exclude these, but in future we may do additional processing based on advice from the ocean scientists.
* some of the labels don't match the standard model names, for example "XBT-4". Is this a different probe model, or should this be labelled a T4? We will need to check with the ocean scientists and come up with a mapping from the given labels to the standard labels.
|
1.0
|
Add bad data filtering and non-standard name mappings to preprocessing - Initial data exploration has shown some previously unknown issues with the data. There are not many but they do cause problems for the explorations and classification code that are not easily resolved automatically. For now therefore we can simply exclude observations with these issues, so the code runs. Current issues:
* bad dates - some dates have a bad day of the month recorded
* negative depths - some profiles have a negative depth recorded. It is not clear what this means and further clarification has been sought from domain experts. For now we can exclude these, but in future we may do additional processing based on advice from the ocean scientists.
* some of the labels don't match the standard model names, for example "XBT-4". Is this a different probe model, or should this be labelled a T4? We will need to check with the ocean scientists and come up with a mapping from the given labels to the standard labels.
|
process
|
add bad data filtering and non standard name mappings to preprocessing initial data exploration has shown some previously unknown issues with the data there are not many but they do cause problems for the explorations and classification code that are not easily resolved automatically for now therefore we can simply exclude observations with these issues so the code runs current issues bad dates some dates have a bad day of the month recorded negative depths some profiles have a negative depth recorded it is not clear what this means and further clarification has been sought from domain experts for now we can exclude these but in future we may do additional processing based on advice from the ocean scientists some of the labels don t match the standard model names for example xbt is this a different probe model or should this be labelled a we will need to check with the ocean scientists and come up with a mapping from the given labelled to the standard labels
| 1
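The three interim exclusions described in the XBT record above map naturally onto a few pandas operations. The sketch below assumes a DataFrame with `date`, `depth`, and `model` columns; the column names and the "XBT-4" to "T4" mapping are assumptions pending the ocean scientists' advice.

```python
import pandas as pd

# Toy data exercising each rule: a bad day-of-month, a negative depth,
# and a non-standard probe label.
df = pd.DataFrame({
    "date":  ["2001-02-30", "2001-03-15", "2002-07-01"],
    "depth": [10.0, -5.0, 120.0],
    "model": ["T7", "T7", "XBT-4"],
})

# Drop rows whose date cannot be parsed (bad day of the month etc.).
df["date"] = pd.to_datetime(df["date"], errors="coerce")
df = df[df["date"].notna()]

# Exclude negative depths until domain experts clarify their meaning.
df = df[df["depth"] >= 0]

# Map non-standard probe names onto the standard labels (mapping TBC).
name_map = {"XBT-4": "T4"}
df["model"] = df["model"].replace(name_map)
```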
|
408,722
| 27,704,835,975
|
IssuesEvent
|
2023-03-14 10:30:03
|
squidfunk/mkdocs-material
|
https://api.github.com/repos/squidfunk/mkdocs-material
|
closed
|
Bad hyperlink
|
documentation
|
### Description
There is a bad hyperlink in this [section of the documentation](https://squidfunk.github.io/mkdocs-material/reference/icons-emojis/#configuration). The [Emoji with custom icons](https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#custom-icons) under the configuration options list links to a non-existent permalink.

### Related links
- [Docs section that issue is located](https://squidfunk.github.io/mkdocs-material/reference/icons-emojis/#configuration)
- Bad Permalink: [Emoji with custom icons](https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#custom-icons)
- Should link to this [permalink](https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#+pymdownx.emoji.options.custom_icons)

### Proposed change
Change: https://github.com/squidfunk/mkdocs-material/blob/master/docs/reference/icons-emojis.md?plain=1#L66
### Before submitting
- [X] I have read and followed the [documentation issue reporting guidelines](https://squidfunk.github.io/mkdocs-material/contributing/reporting-a-docs-issue/).
- [X] I have attached the links to the described sections of [the documentation](https://squidfunk.github.io/mkdocs-material/contributing/reporting-a-docs-issue/#related-links)
|
1.0
|
Bad hyperlink - ### Description
There is a bad hyperlink in this [section of the documentation](https://squidfunk.github.io/mkdocs-material/reference/icons-emojis/#configuration). The [Emoji with custom icons](https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#custom-icons) under the configuration options list links to a non-existent permalink.

### Related links
- [Docs section that issue is located](https://squidfunk.github.io/mkdocs-material/reference/icons-emojis/#configuration)
- Bad Permalink: [Emoji with custom icons](https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#custom-icons)
- Should link to this [permalink](https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#+pymdownx.emoji.options.custom_icons)

### Proposed change
Change: https://github.com/squidfunk/mkdocs-material/blob/master/docs/reference/icons-emojis.md?plain=1#L66
### Before submitting
- [X] I have read and followed the [documentation issue reporting guidelines](https://squidfunk.github.io/mkdocs-material/contributing/reporting-a-docs-issue/).
- [X] I have attached the links to the described sections of [the documentation](https://squidfunk.github.io/mkdocs-material/contributing/reporting-a-docs-issue/#related-links)
|
non_process
|
bad hyperlink description there is a bad hyperlink in this the under the configuration options list links to a non existent permalink related links bad permalink should link to this proposed change change before submitting i have read and followed the i have attached the links to the described sections of
| 0
|
55,999
| 11,494,138,411
|
IssuesEvent
|
2020-02-12 00:44:58
|
toolbox-team/reddit-moderator-toolbox
|
https://api.github.com/repos/toolbox-team/reddit-moderator-toolbox
|
opened
|
Convert `subredditColor` options to a general setting
|
code quality
|
Currently, the setting that controls whether or not things get color-coded per subreddit is a setting in queue tools. It gets duplicated to the profile pro module, and it applies even outside of queues, so I think it makes sense to migrate it to a general setting that all modules can pull from.
|
1.0
|
Convert `subredditColor` options to a general setting - Currently, the setting that controls whether or not things get color-coded per subreddit is a setting in queue tools. It gets duplicated to the profile pro module, and it applies even outside of queues, so I think it makes sense to migrate it to a general setting that all modules can pull from.
|
non_process
|
convert subredditcolor options to a general setting currently the setting that controls whether or not things get color coded per subreddit is a setting in queue tools it gets duplicated to the profile pro module and it applies even outside of queues so i think it makes sense to migrate it to a general setting that all modules can pull from
| 0
|
769,810
| 27,018,826,319
|
IssuesEvent
|
2023-02-10 22:25:37
|
googleapis/nodejs-automl
|
https://api.github.com/repos/googleapis/nodejs-automl
|
closed
|
Automl Natural Language Entity Extraction Predict Test: should predict failed
|
type: bug priority: p1 api: automl flakybot: issue
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 97411a2bb514b9921bb3932543a2d895c452d5c6
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/747bdae9-70d9-4f6d-863f-e2afc76ad9ae), [Sponge](http://sponge2/747bdae9-70d9-4f6d-863f-e2afc76ad9ae)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node language_entity_extraction_predict.js long-door-651 us-central1 TEN2238627664384491520 'Constitutional mutations in the WT1 gene in patients with Denys-Drash syndrome.'
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
Error: Command failed: node language_entity_extraction_predict.js long-door-651 us-central1 TEN2238627664384491520 'Constitutional mutations in the WT1 gene in patients with Denys-Drash syndrome.'
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/language_entity_extraction_predict.test.js:23:28)
at Context.<anonymous> (test/language_entity_extraction_predict.test.js:56:27)</pre></details>
|
1.0
|
Automl Natural Language Entity Extraction Predict Test: should predict failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 97411a2bb514b9921bb3932543a2d895c452d5c6
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/747bdae9-70d9-4f6d-863f-e2afc76ad9ae), [Sponge](http://sponge2/747bdae9-70d9-4f6d-863f-e2afc76ad9ae)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node language_entity_extraction_predict.js long-door-651 us-central1 TEN2238627664384491520 'Constitutional mutations in the WT1 gene in patients with Denys-Drash syndrome.'
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
Error: Command failed: node language_entity_extraction_predict.js long-door-651 us-central1 TEN2238627664384491520 'Constitutional mutations in the WT1 gene in patients with Denys-Drash syndrome.'
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/language_entity_extraction_predict.test.js:23:28)
at Context.<anonymous> (test/language_entity_extraction_predict.test.js:56:27)</pre></details>
|
non_process
|
automl natural language entity extraction predict test should predict failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output command failed node language entity extraction predict js long door us constitutional mutations in the gene in patients with denys drash syndrome unauthenticated request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see error command failed node language entity extraction predict js long door us constitutional mutations in the gene in patients with denys drash syndrome unauthenticated request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see at checkexecsyncerror child process js at object execsync child process js at execsync test language entity extraction predict test js at context test language entity extraction predict test js
| 0
|
18,964
| 24,927,876,022
|
IssuesEvent
|
2022-10-31 09:07:49
|
GIScience/sketch-map-tool
|
https://api.github.com/repos/GIScience/sketch-map-tool
|
closed
|
Use UUIDs for output file locations and status updates
|
component:analyses component:map-generation component:upload-processing priority:high
|
Instead of different combinations of bboxes, paper formats, times, random character sequences, etc., use UUIDs for storing output files, tracking their status updates, and retrieving them. As a first step, directories named by the UUID can be used to achieve that.
|
1.0
|
Use UUIDs for output file locations and status updates - Instead of different combinations of bboxes, paper formats, times, random character sequences, etc., use UUIDs for storing output files, tracking their status updates, and retrieving them. As a first step, directories named by the UUID can be used to achieve that.
|
process
|
use uuids for output file locations and status updates instead of different combinations of bboxes paper formats times random character sequences etc use uuids for storing output files their status updates and retrieving them as a first step directories with the uuid as name can be used to achieve that
| 1
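A minimal sketch of the UUID-per-job layout described in the record above, assuming a plain filesystem store; the function names and paths are illustrative, not the Sketch Map Tool's actual code.

```python
import json
import uuid
from pathlib import Path

RESULTS_DIR = Path("results")

def create_job() -> str:
    job_id = str(uuid.uuid4())
    job_dir = RESULTS_DIR / job_id          # one directory per UUID
    job_dir.mkdir(parents=True)
    (job_dir / "status.json").write_text(json.dumps({"status": "pending"}))
    return job_id                           # handed back to the client

def get_status(job_id: str) -> dict:
    # Retrieval needs nothing but the UUID: no bbox, paper format, or
    # timestamp enters the storage key.
    return json.loads((RESULTS_DIR / job_id / "status.json").read_text())
```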
|
90,401
| 10,680,614,018
|
IssuesEvent
|
2019-10-21 21:52:59
|
RaRe-Technologies/gensim
|
https://api.github.com/repos/RaRe-Technologies/gensim
|
closed
|
tutorials fail doctests
|
Low priority Low severity bug documentation
|
If you run "python -m doctest docs/src/tut1.rst" you get a ton of output. Most of the problems are because of incorrect formatting, e.g.
```python
>>> x = [1,
>>> 2]
```
should really be:
```python
>>> x = [1,
... 2]
```
For whatever reason, it doesn't look like the tutorials were ever tested with doctest. There's no reason for it to be this way, so we should make them doctest-compatible in the future.
|
1.0
|
tutorials fail doctests - If you run "python -m doctest docs/src/tut1.rst" you get a ton of output. Most of the problems are because of incorrect formatting, e.g.
```python
>>> x = [1,
>>> 2]
```
should really be:
```python
>>> x = [1,
... 2]
```
For whatever reason, it doesn't look like the tutorials were ever tested with doctest. There's no reason for it to be this way, so we should make them doctest-compatible in the future.
|
non_process
|
tutorials fail doctests if you run python m doctest docs src rst you get a ton of output most of the problems are because of incorrect formatting e g python x should really be python x for whatever reason it doesn t look like the tutorials were ever tested with doctest there s no reason for it to be this way so we should make them doctest compatible in the future
| 0
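For reference, the same doctest run quoted in the gensim record above can be driven from Python instead of the command line, which makes it easy to wire the tutorials into a test suite; this is equivalent to `python -m doctest docs/src/tut1.rst`.

```python
import doctest

# module_relative=False treats the path as filesystem-relative, matching
# the command-line invocation from the issue.
results = doctest.testfile(
    "docs/src/tut1.rst",
    module_relative=False,
    verbose=False,
)
print(f"{results.failed} failures out of {results.attempted} examples")
```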
|
276,405
| 20,982,590,771
|
IssuesEvent
|
2022-03-28 21:38:38
|
openmpf/openmpf
|
https://api.github.com/repos/openmpf/openmpf
|
opened
|
Use io.swagger.core.v3:swagger-annotations
|
documentation
|
In order to address Trivy scan issues, we're migrating from springfox.swagger.version 2.1 to 3.0.0. As such, in `SwaggerConfig.java` we're migrating from `DocumentationType.SWAGGER_2` to `DocumentationType.OAS_30`. However, `openmpf-projects/openmpf/trunk/mpf-rest-api/pom.xml` still contains:
```xml
<dependency>
<groupId>io.swagger</groupId>
<artifactId>swagger-annotations</artifactId>
<version>1.5.20</version>
</dependency>
```
To support the 3.0 Open API Specification (OAS_30) we should upgrade from `io.swagger:swagger-annotations` (above) to `io.swagger.core.v3:swagger-annotations`. Refer to [this page](https://springdoc.org/migrating-from-springfox.html).
|
1.0
|
Use io.swagger.core.v3:swagger-annotations - In order to address Trivy scan issues, we're migrating from springfox.swagger.version 2.1 to 3.0.0. As such, in `SwaggerConfig.java` we're migrating from `DocumentationType.SWAGGER_2` to `DocumentationType.OAS_30`. However, `openmpf-projects/openmpf/trunk/mpf-rest-api/pom.xml` still contains:
```xml
<dependency>
<groupId>io.swagger</groupId>
<artifactId>swagger-annotations</artifactId>
<version>1.5.20</version>
</dependency>
```
To support the 3.0 Open API Specification (OAS_30) we should upgrade from `io.swagger:swagger-annotations` (above) to `io.swagger.core.v3:swagger-annotations`. Refer to [this page](https://springdoc.org/migrating-from-springfox.html).
|
non_process
|
use io swagger core swagger annotations in order to address trivy scan issues we re migrating from springfox swagger version to as such in swaggerconfig java we re migrating from documentationtype swagger to documentationtype oas however openmpf projects openmpf trunk mpf rest api pom xml still contains xml io swagger swagger annotations to support the open api specification oas we should upgrade from io swagger swagger annotations above to io swagger core swagger annotations refer to
| 0
|
29,020
| 11,706,182,469
|
IssuesEvent
|
2020-03-07 20:27:19
|
vlaship/hadoop-wc
|
https://api.github.com/repos/vlaship/hadoop-wc
|
opened
|
CVE-2018-11307 (High) detected in jackson-databind-2.9.5.jar
|
security vulnerability
|
## CVE-2018-11307 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- hadoop-client-3.2.0.jar (Root Library)
- hadoop-common-3.2.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/hadoop-wc/commit/f1363bd417f4ca7591b0fef369881a3acd4cdeb5">f1363bd417f4ca7591b0fef369881a3acd4cdeb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.5. Use of Jackson default typing along with a gadget class from iBatis allows exfiltration of content. Fixed in 2.7.9.4, 2.8.11.2, and 2.9.6.
<p>Publish Date: 2019-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11307>CVE-2018-11307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2032">https://github.com/FasterXML/jackson-databind/issues/2032</a></p>
<p>Release Date: 2019-03-17</p>
<p>Fix Resolution: jackson-databind-2.9.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11307 (High) detected in jackson-databind-2.9.5.jar - ## CVE-2018-11307 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- hadoop-client-3.2.0.jar (Root Library)
- hadoop-common-3.2.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/hadoop-wc/commit/f1363bd417f4ca7591b0fef369881a3acd4cdeb5">f1363bd417f4ca7591b0fef369881a3acd4cdeb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.5. Use of Jackson default typing along with a gadget class from iBatis allows exfiltration of content. Fixed in 2.7.9.4, 2.8.11.2, and 2.9.6.
<p>Publish Date: 2019-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11307>CVE-2018-11307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2032">https://github.com/FasterXML/jackson-databind/issues/2032</a></p>
<p>Release Date: 2019-03-17</p>
<p>Fix Resolution: jackson-databind-2.9.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy hadoop client jar root library hadoop common jar x jackson databind jar vulnerable library found in head commit a href vulnerability details an issue was discovered in fasterxml jackson databind through use of jackson default typing along with a gadget class from ibatis allows exfiltration of content fixed in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind step up your open source security game with whitesource
| 0
|
84,059
| 24,213,492,572
|
IssuesEvent
|
2022-09-26 03:14:26
|
intel/media-driver
|
https://api.github.com/repos/intel/media-driver
|
closed
|
[Bug]: 32 bit build regression: error: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘uint64_t’ {aka ‘long long unsigned int’} [-Werror=format=]
|
P2 Build Common
|
### Which component impacted?
Build
### Is it regression? Good in old configuration?
Yes, it's good in the old version
### What happened?
The build of 22.5.1 is now failing on Debian's i386 architecture:
```
[ 19%] Building CXX object media_driver/CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o
cd /<<PKGBUILDDIR>>/obj-i686-linux-gnu/media_driver && /usr/bin/c++ -DCLASS_TRACE=0 -DENABLE_KERNELS -DIGFX_GEN11_ICLLP_SUPPORTED -DIGFX_GEN11_JSL_SUPPORTED -DIGFX_GEN11_SUPPORTED -DIGFX_GEN12_ADLN_SUPPORTED -DIGFX_GEN12_ADLP_SUPPORTED -DIGFX_GEN12_ADLS_SUPPORTED -DIGFX_GEN12_DG1_SUPPORTED -DIGFX_GEN12_RKL_SUPPORTED -DIGFX_GEN12_SUPPORTED -DIGFX_GEN12_TGLLP_CMFCPATCH_SUPPORTED -DIGFX_GEN12_TGLLP_CMFC_SUPPORTED -DIGFX_GEN12_TGLLP_SUPPORTED -DIGFX_GEN12_TGLLP_SWSB_SUPPORTED -DIGFX_GEN8_BDW_SUPPORTED -DIGFX_GEN8_SUPPORTED -DIGFX_GEN9_BXT_SUPPORTED -DIGFX_GEN9_CFL_SUPPORTED -DIGFX_GEN9_CML_SUPPORTED -DIGFX_GEN9_CMPV_SUPPORTED -DIGFX_GEN9_GLK_SUPPORTED -DIGFX_GEN9_KBL_SUPPORTED -DIGFX_GEN9_SKL_SUPPORTED -DIGFX_GEN9_SUPPORTED -DIGFX_MHW_INTERFACES_NEXT_SUPPORT -DMEDIA_VERSION=\"22.5.1\" -DMEDIA_VERSION_DETAILS=\"\" -DVEBOX_AUTO_DENOISE_SUPPORTED=0 -DX11_FOUND -D_AV1_DECODE_SUPPORTED -D_AV1_ENCODE_VDENC_SUPPORTED -D_AVC_DECODE_SUPPORTED -D_AVC_ENCODE_VDENC_SUPPORTED -D_COMMON_ENCODE_SUPPORTED -D_FULL_OPEN_SOURCE -D_HEVC_DECODE_SUPPORTED -D_HEVC_ENCODE_VDENC_SUPPORTED -D_JPEG_DECODE_SUPPORTED -D_JPEG_ENCODE_SUPPORTED -D_MPEG2_DECODE_SUPPORTED -D_RELEASE -D_VP8_DECODE_SUPPORTED -D_VP9_DECODE_SUPPORTED -D_VP9_ENCODE_VDENC_SUPPORTED -D__STDC_CONSTANT_MACROS -D__STDC_LIMIT_MACROS -D__VPHAL_SFC_SUPPORTED=1 -I/<<PKGBUILDDIR>>/media_driver/linux/common/os/i915/include -I/<<PKGBUILDDIR>>/media_driver/linux/common/os/i915/include/uapi -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/os -I/<<PKGBUILDDIR>>/media_driver/linux/common/os -I/<<PKGBUILDDIR>>/media_common/agnostic/common/os -I/<<PKGBUILDDIR>>/media_common/linux/common/os -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/os -I/<<PKGBUILDDIR>>/media_softlet/linux/common/os/osservice -I/<<PKGBUILDDIR>>/media_softlet/linux/common/os -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/media_interfaces -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/shared/user_setting -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/shared/mediacopy -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8_bdw/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8_bdw/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_bxt/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_skl/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_glk/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_kbl/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_jsl_ehl/renderhal 
-I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/g12/g12_base/hw/render -I/<<PKGBUILDDIR>>/media_driver/agnostic/Xe_M/Xe_M_base/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/Xe_R/Xe_HP_base/hw/render -I/<<PKGBUILDDIR>>/media_driver/agnostic/Xe_R/Xe_HP_base/hw/blt -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/shared/media_sfc_interface -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/shared/scalability -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/m12/m12_0/shared/mediacopy -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/g12/g12_base/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/g12/g12_0/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/g12/g12_1/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_R/Xe_HP_Base/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m8_bdw -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_bxt -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_skl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_cfl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_glk -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_kbl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m10_cnl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m11_icllp -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m11_jsl_ehl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_tgllp -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_dg1 -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_rkl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_adls -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_adlp -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_adln -I/<<PKGBUILDDIR>>/media_common/agnostic/common/heap_manager -I/<<PKGBUILDDIR>>/media_common/agnostic/common/hw/vdbox -I/<<PKGBUILDDIR>>/media_common/agnostic/common/hw -I/<<PKGBUILDDIR>>/media_common/agnostic/common/shared/user_setting -I/<<PKGBUILDDIR>>/media_common/agnostic/common/shared -I/<<PKGBUILDDIR>>/media_common/agnostic/common/media_interfaces -I/<<PKGBUILDDIR>>/media_common/agnostic/common/renderhal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/mediacontext -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/heap_manager -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/hw/vdbox -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/hw -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/renderhal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/media_interfaces -I/<<PKGBUILDDIR>>/media_common/agnostic/common/vp/kernel -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/vp/kdll -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_bxt/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_glk/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/vp/hal 
-I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_icllp/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_icllp/vp/kernel_free -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_jsl_ehl/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/vp/hal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/kernel_free/cmfc -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/kernel_free/cmfcpatch -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/vp/hal/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/vp/hal/shared/scalability -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_tgllp/vp/hal/platform_interface -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/m12/m12/vp/hal/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/m12/m12_0/vp/hal/feature_manager -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/cm_fc_ld -I/<<PKGBUILDDIR>>/media_common/agnostic/common/vp/hal -I/<<PKGBUILDDIR>>/media_common/agnostic/common/vp/kdll -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/bufferMgr -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/feature_manager -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/utils -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/platform_interface -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/kdll -I/usr/include/igdgmm/util -I/usr/include/igdgmm/inc/common -I/usr/include/igdgmm/inc -I/usr/include/igdgmm/GmmLib/inc -I/usr/include/igdgmm/GmmLib -I/usr/include/igdgmm -I/<<PKGBUILDDIR>>/media_common/agnostic/common/codec/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/kernel -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/cp -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8_bdw/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_bxt/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_skl/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_glk/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_kbl/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/codec/share -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/codec/kernel_free -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/codec/share -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_icllp/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/codec/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1/packet 
-I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1/features -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1 -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/kernel_free -I/<<PKGBUILDDIR>>/media_driver/linux/common/cm/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/cm/hal/osservice -I/<<PKGBUILDDIR>>/media_driver/linux/common/cm/hal -I/<<PKGBUILDDIR>>/media_driver/linux/common/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/codec/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/os -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/hw -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/shared -I/<<PKGBUILDDIR>>/media_driver/linux/common/vp/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/hw -I/<<PKGBUILDDIR>>/media_driver/linux/gen8/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_bxt/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_skl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_kbl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_glk/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_cfl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen10/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen10_cnl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen11/codec/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen11/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen12/codec/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen12/ddi -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/av1/features -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/av1/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/av1/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/shared/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/shared/hucitf -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc/mmc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/avc/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/avc/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/avc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/mmc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/hucitf -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9 -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2/pipeline 
-I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2/mmc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2 -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/jpeg/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/jpeg/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/jpeg -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc/av1/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc/av1/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/task -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/bufferMgr -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/mediacopy -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/media_sfc_interface -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/classtrace -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/profiler -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/cp -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/avc/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/avc/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/mpeg2/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/mpeg2/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/mpeg2/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/bitstream -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/bufferMgr 
-I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/hucItf -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/av1/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/av1/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/av1/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/features/roi -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/bitstreamWriter -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/bufferMgr -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/shared -I/<<PKGBUILDDIR>>/media_softlet/linux/common/cp -I/<<PKGBUILDDIR>>/media_softlet/linux/common/shared/user_setting -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc/common -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc/platform/iAlm -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc/umKmInc -I/<<PKGBUILDDIR>>/../gmmlib/Source/GmmLib/inc -I/<<PKGBUILDDIR>>/../huc/inc -I/linux -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -Wformat -Werror=format-security -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Wdate-time -D_FORTIFY_SOURCE=2 -Wreorder -Wsign-promo -Wnon-virtual-dtor -Wno-invalid-offsetof -fvisibility-inlines-hidden -fno-use-cxa-atexit -frtti -fexceptions -fpermissive -fcheck-new -std=c++1y -std=c++11 -O3 -DNDEBUG -fPIC -Werror -Wall -Winit-self -Wpointer-arith -Wno-unused -Wno-unknown-pragmas -Wno-comments -Wno-sign-compare -Wno-attributes -Wno-narrowing -Wno-overflow -Wno-parentheses -Wno-delete-incomplete -Werror=address -Werror=format-security -Werror=non-virtual-dtor -Werror=return-type -finline-functions -funswitch-loops -fno-short-enums -Wa,--noexecstack -fno-strict-aliasing -fmessage-length=0 -fvisibility=hidden -fdata-sections -ffunction-sections -Wl,--gc-sections -DLINUX=1 -DLINUX -DNO_RTTI -DNO_EXCEPTION_HANDLING -DINTEL_NOT_PUBLIC -g -D__linux__ -fno-tree-pre -fPIC -Wl,--no-as-needed -O2 -fno-omit-frame-pointer -finline-limit=100 -MD -MT media_driver/CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o -MF CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o.d -o CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o -c /<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/hal/codechal_encode_avc_base.cpp
/<<PKGBUILDDIR>>/media_softlet/linux/common/os/osservice/mos_utilities_specific.cpp: In static member function ‘static MOS_STATUS MosUtilitiesSpecificNext::UserFeatureDumpDataToFile(const char*, MOS_PUF_KEYLIST)’:
/<<PKGBUILDDIR>>/media_softlet/linux/common/os/osservice/mos_utilities_specific.cpp:1023:44: error: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘uint64_t’ {aka ‘long long unsigned int’} [-Werror=format=]
1023 | fprintf(File, "\t\t\t%lu\n",
| ~~^
| |
| long unsigned int
| %llu
1024 | *(uint64_t*)(pKeyTmp->pElem->pValueArray[j].ulValueBuf));
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| uint64_t {aka long long unsigned int}
```
See https://buildd.debian.org/status/fetch.php?pkg=intel-media-driver&arch=i386&ver=22.5.1%2Bdfsg1-1&stamp=1659479611&raw=0 for the full log and https://bugs.debian.org/1016953 for the Debian bug report.
### What's the usage scenario when you are seeing the problem?
Others
### What impacted?
_No response_
### Debug Information
_No response_
### Do you want to contribute a patch to fix the issue?
No.
|
1.0
|
[Bug]: 32 bit build regression: error: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘uint64_t’ {aka ‘long long unsigned int’} [-Werror=format=] - ### Which component impacted?
Build
### Is it regression? Good in old configuration?
Yes, it's good in the old version
### What happened?
The build of 22.5.1 is now failing on Debian's i386 architecture:
```
[ 19%] Building CXX object media_driver/CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o
cd /<<PKGBUILDDIR>>/obj-i686-linux-gnu/media_driver && /usr/bin/c++ -DCLASS_TRACE=0 -DENABLE_KERNELS -DIGFX_GEN11_ICLLP_SUPPORTED -DIGFX_GEN11_JSL_SUPPORTED -DIGFX_GEN11_SUPPORTED -DIGFX_GEN12_ADLN_SUPPORTED -DIGFX_GEN12_ADLP_SUPPORTED -DIGFX_GEN12_ADLS_SUPPORTED -DIGFX_GEN12_DG1_SUPPORTED -DIGFX_GEN12_RKL_SUPPORTED -DIGFX_GEN12_SUPPORTED -DIGFX_GEN12_TGLLP_CMFCPATCH_SUPPORTED -DIGFX_GEN12_TGLLP_CMFC_SUPPORTED -DIGFX_GEN12_TGLLP_SUPPORTED -DIGFX_GEN12_TGLLP_SWSB_SUPPORTED -DIGFX_GEN8_BDW_SUPPORTED -DIGFX_GEN8_SUPPORTED -DIGFX_GEN9_BXT_SUPPORTED -DIGFX_GEN9_CFL_SUPPORTED -DIGFX_GEN9_CML_SUPPORTED -DIGFX_GEN9_CMPV_SUPPORTED -DIGFX_GEN9_GLK_SUPPORTED -DIGFX_GEN9_KBL_SUPPORTED -DIGFX_GEN9_SKL_SUPPORTED -DIGFX_GEN9_SUPPORTED -DIGFX_MHW_INTERFACES_NEXT_SUPPORT -DMEDIA_VERSION=\"22.5.1\" -DMEDIA_VERSION_DETAILS=\"\" -DVEBOX_AUTO_DENOISE_SUPPORTED=0 -DX11_FOUND -D_AV1_DECODE_SUPPORTED -D_AV1_ENCODE_VDENC_SUPPORTED -D_AVC_DECODE_SUPPORTED -D_AVC_ENCODE_VDENC_SUPPORTED -D_COMMON_ENCODE_SUPPORTED -D_FULL_OPEN_SOURCE -D_HEVC_DECODE_SUPPORTED -D_HEVC_ENCODE_VDENC_SUPPORTED -D_JPEG_DECODE_SUPPORTED -D_JPEG_ENCODE_SUPPORTED -D_MPEG2_DECODE_SUPPORTED -D_RELEASE -D_VP8_DECODE_SUPPORTED -D_VP9_DECODE_SUPPORTED -D_VP9_ENCODE_VDENC_SUPPORTED -D__STDC_CONSTANT_MACROS -D__STDC_LIMIT_MACROS -D__VPHAL_SFC_SUPPORTED=1 -I/<<PKGBUILDDIR>>/media_driver/linux/common/os/i915/include -I/<<PKGBUILDDIR>>/media_driver/linux/common/os/i915/include/uapi -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/os -I/<<PKGBUILDDIR>>/media_driver/linux/common/os -I/<<PKGBUILDDIR>>/media_common/agnostic/common/os -I/<<PKGBUILDDIR>>/media_common/linux/common/os -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/os -I/<<PKGBUILDDIR>>/media_softlet/linux/common/os/osservice -I/<<PKGBUILDDIR>>/media_softlet/linux/common/os -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/media_interfaces -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/shared/user_setting -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/shared/mediacopy -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8_bdw/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8_bdw/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_bxt/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_skl/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_glk/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_kbl/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/renderhal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_jsl_ehl/renderhal 
-I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/cm -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/hw/vdbox -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/hw -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/g12/g12_base/hw/render -I/<<PKGBUILDDIR>>/media_driver/agnostic/Xe_M/Xe_M_base/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/Xe_R/Xe_HP_base/hw/render -I/<<PKGBUILDDIR>>/media_driver/agnostic/Xe_R/Xe_HP_base/hw/blt -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/shared/media_sfc_interface -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/shared/scalability -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/m12/m12_0/shared/mediacopy -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/g12/g12_base/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/g12/g12_0/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/g12/g12_1/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_R/Xe_HP_Base/renderhal -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m8_bdw -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_bxt -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_skl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_cfl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_glk -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m9_kbl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m10_cnl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m11_icllp -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m11_jsl_ehl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_tgllp -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_dg1 -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_rkl -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_adls -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_adlp -I/<<PKGBUILDDIR>>/media_driver/media_interface/media_interfaces_m12_adln -I/<<PKGBUILDDIR>>/media_common/agnostic/common/heap_manager -I/<<PKGBUILDDIR>>/media_common/agnostic/common/hw/vdbox -I/<<PKGBUILDDIR>>/media_common/agnostic/common/hw -I/<<PKGBUILDDIR>>/media_common/agnostic/common/shared/user_setting -I/<<PKGBUILDDIR>>/media_common/agnostic/common/shared -I/<<PKGBUILDDIR>>/media_common/agnostic/common/media_interfaces -I/<<PKGBUILDDIR>>/media_common/agnostic/common/renderhal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/mediacontext -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/heap_manager -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/hw/vdbox -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/hw -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/renderhal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/media_interfaces -I/<<PKGBUILDDIR>>/media_common/agnostic/common/vp/kernel -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/vp/kdll -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_bxt/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_glk/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/vp/hal 
-I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_icllp/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_icllp/vp/kernel_free -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_jsl_ehl/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/vp/hal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/kernel_free/cmfc -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/kernel_free/cmfcpatch -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/vp/hal/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/common/vp/hal/shared/scalability -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_tgllp/vp/hal/platform_interface -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/m12/m12/vp/hal/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/m12/m12_0/vp/hal/feature_manager -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/cm_fc_ld -I/<<PKGBUILDDIR>>/media_common/agnostic/common/vp/hal -I/<<PKGBUILDDIR>>/media_common/agnostic/common/vp/kdll -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/bufferMgr -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/feature_manager -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/utils -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/platform_interface -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/hal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/vp/kdll -I/usr/include/igdgmm/util -I/usr/include/igdgmm/inc/common -I/usr/include/igdgmm/inc -I/usr/include/igdgmm/GmmLib/inc -I/usr/include/igdgmm/GmmLib -I/usr/include/igdgmm -I/<<PKGBUILDDIR>>/media_common/agnostic/common/codec/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/kernel -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/shared -I/<<PKGBUILDDIR>>/media_driver/agnostic/common/cp -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen8_bdw/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_bxt/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_skl/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_glk/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen9_kbl/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen10/codec/share -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/codec/kernel_free -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11/codec/share -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen11_icllp/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12/codec/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1/packet 
-I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1/features -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/av1 -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/dec -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12/codec/hal -I/<<PKGBUILDDIR>>/media_driver/agnostic/gen12_tgllp/vp/kernel_free -I/<<PKGBUILDDIR>>/media_driver/linux/common/cm/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/cm/hal/osservice -I/<<PKGBUILDDIR>>/media_driver/linux/common/cm/hal -I/<<PKGBUILDDIR>>/media_driver/linux/common/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/codec/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/os -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/hw -I/<<PKGBUILDDIR>>/media_driver/linux/common/cp/shared -I/<<PKGBUILDDIR>>/media_driver/linux/common/vp/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/common/hw -I/<<PKGBUILDDIR>>/media_driver/linux/gen8/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_bxt/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_skl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_kbl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_glk/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen9_cfl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen10/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen10_cnl/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen11/codec/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen11/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen12/codec/ddi -I/<<PKGBUILDDIR>>/media_driver/linux/gen12/ddi -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/av1/features -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/av1/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/av1/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/shared/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal/dec/shared/hucitf -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/gen12_base/codec/hal -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc/mmc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/hevc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/avc/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/avc/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/avc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/mmc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9/hucitf -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/vp9 -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2/pipeline 
-I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2/mmc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/mpeg2 -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/jpeg/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/jpeg/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec/jpeg -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/dec -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc/av1/packet -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc/av1/pipeline -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc/shared -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal/enc -I/<<PKGBUILDDIR>>/media_driver/media_softlet/agnostic/Xe_M/Xe_M_base/codec/hal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/task -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/bufferMgr -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/mediacopy -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/media_sfc_interface -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/classtrace -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/shared/profiler -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/cp -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/avc/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/avc/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/hevc/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/vp9/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/mpeg2/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/mpeg2/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/mpeg2/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/jpeg/bitstream -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/bufferMgr 
-I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared/hucItf -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/dec/shared -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/av1/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/av1/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/av1/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/features/roi -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/hevc/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/bitstreamWriter -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/pipeline -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/packet -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/features -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/bufferMgr -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/scalability -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/statusreport -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared/mmc -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/enc/shared -I/<<PKGBUILDDIR>>/media_softlet/agnostic/common/codec/hal/shared -I/<<PKGBUILDDIR>>/media_softlet/linux/common/cp -I/<<PKGBUILDDIR>>/media_softlet/linux/common/shared/user_setting -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc/common -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc/platform/iAlm -I/<<PKGBUILDDIR>>/../gmmlib/Source/inc/umKmInc -I/<<PKGBUILDDIR>>/../gmmlib/Source/GmmLib/inc -I/<<PKGBUILDDIR>>/../huc/inc -I/linux -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -Wformat -Werror=format-security -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Wdate-time -D_FORTIFY_SOURCE=2 -Wreorder -Wsign-promo -Wnon-virtual-dtor -Wno-invalid-offsetof -fvisibility-inlines-hidden -fno-use-cxa-atexit -frtti -fexceptions -fpermissive -fcheck-new -std=c++1y -std=c++11 -O3 -DNDEBUG -fPIC -Werror -Wall -Winit-self -Wpointer-arith -Wno-unused -Wno-unknown-pragmas -Wno-comments -Wno-sign-compare -Wno-attributes -Wno-narrowing -Wno-overflow -Wno-parentheses -Wno-delete-incomplete -Werror=address -Werror=format-security -Werror=non-virtual-dtor -Werror=return-type -finline-functions -funswitch-loops -fno-short-enums -Wa,--noexecstack -fno-strict-aliasing -fmessage-length=0 -fvisibility=hidden -fdata-sections -ffunction-sections -Wl,--gc-sections -DLINUX=1 -DLINUX -DNO_RTTI -DNO_EXCEPTION_HANDLING -DINTEL_NOT_PUBLIC -g -D__linux__ -fno-tree-pre -fPIC -Wl,--no-as-needed -O2 -fno-omit-frame-pointer -finline-limit=100 -MD -MT media_driver/CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o -MF CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o.d -o CMakeFiles/iHD_drv_video_CODEC.dir/agnostic/common/codec/hal/codechal_encode_avc_base.cpp.o -c /<<PKGBUILDDIR>>/media_driver/agnostic/common/codec/hal/codechal_encode_avc_base.cpp
/<<PKGBUILDDIR>>/media_softlet/linux/common/os/osservice/mos_utilities_specific.cpp: In static member function ‘static MOS_STATUS MosUtilitiesSpecificNext::UserFeatureDumpDataToFile(const char*, MOS_PUF_KEYLIST)’:
/<<PKGBUILDDIR>>/media_softlet/linux/common/os/osservice/mos_utilities_specific.cpp:1023:44: error: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘uint64_t’ {aka ‘long long unsigned int’} [-Werror=format=]
1023 | fprintf(File, "\t\t\t%lu\n",
| ~~^
| |
| long unsigned int
| %llu
1024 | *(uint64_t*)(pKeyTmp->pElem->pValueArray[j].ulValueBuf));
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| uint64_t {aka long long unsigned int}
```
See https://buildd.debian.org/status/fetch.php?pkg=intel-media-driver&arch=i386&ver=22.5.1%2Bdfsg1-1&stamp=1659479611&raw=0 for the full log and https://bugs.debian.org/1016953 for the Debian bug report.
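The diagnostic above is the classic 32-bit printf pitfall: on i386, `uint64_t` is `long long unsigned int`, so `%lu` (which expects `long unsigned int`) trips `-Werror=format`. As a minimal illustrative sketch (not the driver's actual patch; the variable below is a stand-in for the `ulValueBuf` expression in `mos_utilities_specific.cpp`), the portable spelling uses the `PRIu64` macro from `<cinttypes>`:

```cpp
// Illustrative sketch only, assuming a value read the way the driver does;
// this is not the upstream fix itself.
#include <cinttypes>  // PRIu64 expands to the right length modifier per ABI
#include <cstdint>
#include <cstdio>

int main()
{
    uint64_t value = 42;  // stand-in for *(uint64_t*)(...ulValueBuf)

    // Breaks on i386 under -Werror=format: "%lu" expects long unsigned int.
    // std::fprintf(stdout, "\t\t\t%lu\n", value);

    // Portable: PRIu64 picks "lu" on LP64 targets and "llu" on ILP32 targets.
    std::fprintf(stdout, "\t\t\t%" PRIu64 "\n", value);
    return 0;
}
```

An equivalent alternative is casting the argument to `unsigned long long` and printing with `%llu`, which avoids the macro at the cost of a widening cast on 64-bit targets.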
### What's the usage scenario when you are seeing the problem?
Others
### What impacted?
_No response_
### Debug Information
_No response_
### Do you want to contribute a patch to fix the issue?
No.
|
non_process
|
bit build regression error format ‘ lu’ expects argument of type ‘long unsigned int’ but argument has type ‘ t’ aka ‘long long unsigned int’ which component impacted build is it regression good in old configuration yes it s good in old version what happened the build of is now failing on debian s architecture building cxx object media driver cmakefiles ihd drv video codec dir agnostic common codec hal codechal encode avc base cpp o cd obj linux gnu media driver usr bin c dclass trace denable kernels digfx icllp supported digfx jsl supported digfx supported digfx adln supported digfx adlp supported digfx adls supported digfx supported digfx rkl supported digfx supported digfx tgllp cmfcpatch supported digfx tgllp cmfc supported digfx tgllp supported digfx tgllp swsb supported digfx bdw supported digfx supported digfx bxt supported digfx cfl supported digfx cml supported digfx cmpv supported digfx glk supported digfx kbl supported digfx skl supported digfx supported digfx mhw interfaces next support dmedia version dmedia version details dvebox auto denoise supported found d decode supported d encode vdenc supported d avc decode supported d avc encode vdenc supported d common encode supported d full open source d hevc decode supported d hevc encode vdenc supported d jpeg decode supported d jpeg encode supported d decode supported d release d decode supported d decode supported d encode vdenc supported d stdc constant macros d stdc limit macros d vphal sfc supported i media driver linux common os include i media driver linux common os include uapi i media driver agnostic common os i media driver linux common os i media common agnostic common os i media common linux common os i media softlet agnostic common os i media softlet linux common os osservice i media softlet linux common os i media driver agnostic common cm i media driver agnostic common hw vdbox i media driver agnostic common hw i media driver agnostic common media interfaces i media driver agnostic common renderhal i media driver agnostic common shared user setting i media driver agnostic common shared mediacopy i media driver agnostic common shared i media driver agnostic cm i media driver agnostic hw vdbox i media driver agnostic hw i media driver agnostic renderhal i media driver agnostic bdw hw vdbox i media driver agnostic bdw renderhal i media driver agnostic cm i media driver agnostic hw vdbox i media driver agnostic hw i media driver agnostic renderhal i media driver agnostic bxt hw vdbox i media driver agnostic skl hw vdbox i media driver agnostic glk hw vdbox i media driver agnostic kbl hw vdbox i media driver agnostic cm i media driver agnostic hw vdbox i media driver agnostic hw i media driver agnostic renderhal i media driver agnostic cm i media driver agnostic hw vdbox i media driver agnostic hw i media driver agnostic renderhal i media driver agnostic jsl ehl renderhal i media driver agnostic cm i media driver agnostic hw vdbox i media driver agnostic hw i media driver agnostic shared i media driver agnostic base hw render i media driver agnostic xe m xe m base shared i media driver agnostic xe r xe hp base hw render i media driver agnostic xe r xe hp base hw blt i media driver media softlet agnostic common shared media sfc interface i media driver media softlet agnostic common shared scalability i media driver media softlet agnostic shared mediacopy i media driver media softlet agnostic base renderhal i media driver media softlet agnostic renderhal i media driver media softlet agnostic renderhal i media driver media 
softlet agnostic xe r xe hp base renderhal i media driver media interface media interfaces bdw i media driver media interface media interfaces bxt i media driver media interface media interfaces skl i media driver media interface media interfaces cfl i media driver media interface media interfaces glk i media driver media interface media interfaces kbl i media driver media interface media interfaces cnl i media driver media interface media interfaces icllp i media driver media interface media interfaces jsl ehl i media driver media interface media interfaces tgllp i media driver media interface media interfaces i media driver media interface media interfaces rkl i media driver media interface media interfaces adls i media driver media interface media interfaces adlp i media driver media interface media interfaces adln i media common agnostic common heap manager i media common agnostic common hw vdbox i media common agnostic common hw i media common agnostic common shared user setting i media common agnostic common shared i media common agnostic common media interfaces i media common agnostic common renderhal i media softlet agnostic common shared scalability i media softlet agnostic common shared mediacontext i media softlet agnostic common shared i media softlet agnostic common heap manager i media softlet agnostic common hw vdbox i media softlet agnostic common hw i media softlet agnostic common renderhal i media softlet agnostic common media interfaces i media common agnostic common vp kernel i media driver agnostic common vp hal i media driver agnostic common vp kdll i media driver agnostic vp hal i media driver agnostic vp hal i media driver agnostic bxt vp hal i media driver agnostic glk vp hal i media driver agnostic vp hal i media driver agnostic vp hal i media driver agnostic icllp vp hal i media driver agnostic icllp vp kernel free i media driver agnostic jsl ehl vp hal i media driver agnostic vp hal i media driver media softlet agnostic vp hal i media driver agnostic tgllp vp hal i media driver agnostic tgllp vp kernel free cmfc i media driver agnostic tgllp vp kernel free cmfcpatch i media driver media softlet agnostic common vp hal packet i media driver media softlet agnostic common vp hal shared scalability i media driver media softlet agnostic tgllp vp hal platform interface i media driver media softlet agnostic vp hal packet i media driver media softlet agnostic vp hal feature manager i media softlet agnostic common vp cm fc ld i media common agnostic common vp hal i media common agnostic common vp kdll i media softlet agnostic common vp hal buffermgr i media softlet agnostic common vp hal feature manager i media softlet agnostic common vp hal features i media softlet agnostic common vp hal mmc i media softlet agnostic common vp hal packet i media softlet agnostic common vp hal pipeline i media softlet agnostic common vp hal scalability i media softlet agnostic common vp hal statusreport i media softlet agnostic common vp hal utils i media softlet agnostic common vp hal platform interface i media softlet agnostic common vp hal shared scalability i media softlet agnostic common vp hal i media softlet agnostic common vp kdll i usr include igdgmm util i usr include igdgmm inc common i usr include igdgmm inc i usr include igdgmm gmmlib inc i usr include igdgmm gmmlib i usr include igdgmm i media common agnostic common codec shared i media driver agnostic common codec hal i media driver agnostic common codec kernel i media driver agnostic common codec shared i media driver 
agnostic common cp i media driver agnostic codec hal i media driver agnostic bdw codec hal i media driver agnostic codec hal i media driver agnostic bxt codec hal i media driver agnostic skl codec hal i media driver agnostic glk codec hal i media driver agnostic kbl codec hal i media driver agnostic codec hal i media driver agnostic codec share i media driver agnostic codec hal i media driver agnostic codec kernel free i media driver agnostic codec share i media driver agnostic icllp codec hal i media driver agnostic codec hal i media driver agnostic codec shared i media driver media softlet agnostic codec hal dec pipeline i media driver media softlet agnostic codec hal dec packet i media driver media softlet agnostic codec hal dec features i media driver media softlet agnostic codec hal dec i media driver media softlet agnostic codec hal dec shared i media driver media softlet agnostic codec hal dec i media driver media softlet agnostic codec hal shared i media driver media softlet agnostic codec hal i media driver agnostic tgllp vp kernel free i media driver linux common cm ddi i media driver linux common cm hal osservice i media driver linux common cm hal i media driver linux common ddi i media driver linux common codec ddi i media driver linux common cp ddi i media driver linux common cp os i media driver linux common cp hw i media driver linux common cp shared i media driver linux common vp ddi i media driver linux common hw i media driver linux ddi i media driver linux ddi i media driver linux bxt ddi i media driver linux skl ddi i media driver linux kbl ddi i media driver linux glk ddi i media driver linux cfl ddi i media driver linux ddi i media driver linux cnl ddi i media driver linux codec ddi i media driver linux ddi i media driver linux codec ddi i media driver linux ddi i media driver media softlet agnostic base codec hal dec features i media driver media softlet agnostic base codec hal dec pipeline i media driver media softlet agnostic base codec hal dec packet i media driver media softlet agnostic base codec hal dec shared packet i media driver media softlet agnostic base codec hal dec shared hucitf i media driver media softlet agnostic base codec hal i media driver media softlet agnostic xe m xe m base codec hal dec hevc pipeline i media driver media softlet agnostic xe m xe m base codec hal dec hevc packet i media driver media softlet agnostic xe m xe m base codec hal dec hevc mmc i media driver media softlet agnostic xe m xe m base codec hal dec hevc i media driver media softlet agnostic xe m xe m base codec hal dec avc pipeline i media driver media softlet agnostic xe m xe m base codec hal dec avc packet i media driver media softlet agnostic xe m xe m base codec hal dec avc i media driver media softlet agnostic xe m xe m base codec hal dec pipeline i media driver media softlet agnostic xe m xe m base codec hal dec packet i media driver media softlet agnostic xe m xe m base codec hal dec mmc i media driver media softlet agnostic xe m xe m base codec hal dec hucitf i media driver media softlet agnostic xe m xe m base codec hal dec i media driver media softlet agnostic xe m xe m base codec hal dec pipeline i media driver media softlet agnostic xe m xe m base codec hal dec packet i media driver media softlet agnostic xe m xe m base codec hal dec mmc i media driver media softlet agnostic xe m xe m base codec hal dec i media driver media softlet agnostic xe m xe m base codec hal dec jpeg pipeline i media driver media softlet agnostic xe m xe m base codec hal dec jpeg packet i 
media driver media softlet agnostic xe m xe m base codec hal dec jpeg i media driver media softlet agnostic xe m xe m base codec hal dec i media driver media softlet agnostic xe m xe m base codec hal enc packet i media driver media softlet agnostic xe m xe m base codec hal enc pipeline i media driver media softlet agnostic xe m xe m base codec hal enc shared i media driver media softlet agnostic xe m xe m base codec hal enc i media driver media softlet agnostic xe m xe m base codec hal i media softlet agnostic common shared pipeline i media softlet agnostic common shared packet i media softlet agnostic common shared features i media softlet agnostic common shared task i media softlet agnostic common shared statusreport i media softlet agnostic common shared mmc i media softlet agnostic common shared buffermgr i media softlet agnostic common shared mediacopy i media softlet agnostic common shared media sfc interface i media softlet agnostic common shared classtrace i media softlet agnostic common shared profiler i media softlet agnostic common cp i media softlet agnostic common codec hal i media softlet agnostic common codec hal dec avc pipeline i media softlet agnostic common codec hal dec avc features i media softlet agnostic common codec hal dec hevc pipeline i media softlet agnostic common codec hal dec hevc features i media softlet agnostic common codec hal dec hevc scalability i media softlet agnostic common codec hal dec hevc mmc i media softlet agnostic common codec hal dec pipeline i media softlet agnostic common codec hal dec features i media softlet agnostic common codec hal dec scalability i media softlet agnostic common codec hal dec mmc i media softlet agnostic common codec hal dec pipeline i media softlet agnostic common codec hal dec features i media softlet agnostic common codec hal dec mmc i media softlet agnostic common codec hal dec jpeg pipeline i media softlet agnostic common codec hal dec jpeg features i media softlet agnostic common codec hal dec jpeg packet i media softlet agnostic common codec hal dec jpeg bitstream i media softlet agnostic common codec hal dec shared pipeline i media softlet agnostic common codec hal dec shared packet i media softlet agnostic common codec hal dec shared features i media softlet agnostic common codec hal dec shared buffermgr i media softlet agnostic common codec hal dec shared scalability i media softlet agnostic common codec hal dec shared statusreport i media softlet agnostic common codec hal dec shared mmc i media softlet agnostic common codec hal dec shared hucitf i media softlet agnostic common codec hal dec shared i media softlet agnostic common codec hal enc packet i media softlet agnostic common codec hal enc pipeline i media softlet agnostic common codec hal enc features i media softlet agnostic common codec hal enc hevc features roi i media softlet agnostic common codec hal enc hevc features i media softlet agnostic common codec hal enc hevc packet i media softlet agnostic common codec hal enc hevc pipeline i media softlet agnostic common codec hal enc shared bitstreamwriter i media softlet agnostic common codec hal enc shared pipeline i media softlet agnostic common codec hal enc shared packet i media softlet agnostic common codec hal enc shared features i media softlet agnostic common codec hal enc shared buffermgr i media softlet agnostic common codec hal enc shared scalability i media softlet agnostic common codec hal enc shared statusreport i media softlet agnostic common codec hal enc shared mmc i media softlet 
agnostic common codec hal enc shared i media softlet agnostic common codec hal shared i media softlet linux common cp i media softlet linux common shared user setting i gmmlib source inc i gmmlib source inc common i gmmlib source inc platform ialm i gmmlib source inc umkminc i gmmlib source gmmlib inc i huc inc i linux g ffile prefix map fstack protector strong wformat werror format security d largefile source d file offset bits wdate time d fortify source wreorder wsign promo wnon virtual dtor wno invalid offsetof fvisibility inlines hidden fno use cxa atexit frtti fexceptions fpermissive fcheck new std c std c dndebug fpic werror wall winit self wpointer arith wno unused wno unknown pragmas wno comments wno sign compare wno attributes wno narrowing wno overflow wno parentheses wno delete incomplete werror address werror format security werror non virtual dtor werror return type finline functions funswitch loops fno short enums wa noexecstack fno strict aliasing fmessage length fvisibility hidden fdata sections ffunction sections wl gc sections dlinux dlinux dno rtti dno exception handling dintel not public g d linux fno tree pre fpic wl no as needed fno omit frame pointer finline limit md mt media driver cmakefiles ihd drv video codec dir agnostic common codec hal codechal encode avc base cpp o mf cmakefiles ihd drv video codec dir agnostic common codec hal codechal encode avc base cpp o d o cmakefiles ihd drv video codec dir agnostic common codec hal codechal encode avc base cpp o c media driver agnostic common codec hal codechal encode avc base cpp media softlet linux common os osservice mos utilities specific cpp in static member function ‘static mos status mosutilitiesspecificnext userfeaturedumpdatatofile const char mos puf keylist ’ media softlet linux common os osservice mos utilities specific cpp error format ‘ lu’ expects argument of type ‘long unsigned int’ but argument has type ‘ t’ aka ‘long long unsigned int’ fprintf file t t t lu n long unsigned int llu t pkeytmp pelem pvaluearray ulvaluebuf t aka long long unsigned int see for the full log and for the debian bug report what s the usage scenario when you are seeing the problem others what impacted no response debug information no response do you want to contribute a patch to fix the issue no
| 0
|
33,728
| 4,858,118,921
|
IssuesEvent
|
2016-11-12 23:43:40
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
TestStoreRangeDownReplicate failed under stress
|
Robot test-failure
|
SHA: https://github.com/cockroachdb/cockroach/commits/59693a140f93a4658174054884db891012ac0380
Stress build found a failed test:
```
I161112 09:05:59.708852 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.710388 15285 gossip/gossip.go:237 [n?] initial resolvers: []
W161112 09:05:59.710805 15285 gossip/gossip.go:1055 [n?] no resolvers found; use --join to specify a connected node
I161112 09:05:59.711122 15285 base/node_id.go:62 NodeID set to 1
I161112 09:05:59.742196 15285 storage/store.go:1188 [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161112 09:05:59.742527 15285 gossip/gossip.go:280 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:60427" > attrs:<> locality:<>
I161112 09:05:59.754150 15321 storage/replica_proposal.go:328 [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900.000124ms following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=1970-01-01 00:00:00.000000123 +0000 UTC]
I161112 09:05:59.770204 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.773778 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.774216 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.774550 15285 base/node_id.go:62 NodeID set to 2
I161112 09:05:59.804376 15285 storage/store.go:1188 [n2,s2]: failed initial metrics computation: [n2,s2]: system config not yet available
I161112 09:05:59.804704 15285 gossip/gossip.go:280 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35233" > attrs:<> locality:<>
I161112 09:05:59.809434 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.811685 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.812106 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.812423 15285 base/node_id.go:62 NodeID set to 3
I161112 09:05:59.824939 15354 gossip/client.go:125 [n2] started gossip client to 127.0.0.1:60427
I161112 09:05:59.846685 15285 storage/store.go:1188 [n3,s3]: failed initial metrics computation: [n3,s3]: system config not yet available
I161112 09:05:59.847009 15285 gossip/gossip.go:280 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:49117" > attrs:<> locality:<>
I161112 09:05:59.858346 15480 gossip/client.go:125 [n3] started gossip client to 127.0.0.1:60427
I161112 09:05:59.861901 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.863994 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.864419 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.864727 15285 base/node_id.go:62 NodeID set to 4
I161112 09:05:59.885355 15285 storage/store.go:1188 [n4,s4]: failed initial metrics computation: [n4,s4]: system config not yet available
I161112 09:05:59.885677 15285 gossip/gossip.go:280 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:44427" > attrs:<> locality:<>
I161112 09:05:59.887436 15483 gossip/client.go:125 [n4] started gossip client to 127.0.0.1:60427
I161112 09:05:59.891772 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.900001 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.900432 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.900745 15285 base/node_id.go:62 NodeID set to 5
I161112 09:05:59.930284 15285 storage/store.go:1188 [n5,s5]: failed initial metrics computation: [n5,s5]: system config not yet available
I161112 09:05:59.930607 15285 gossip/gossip.go:280 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:51325" > attrs:<> locality:<>
I161112 09:05:59.937120 15599 gossip/client.go:125 [n5] started gossip client to 127.0.0.1:60427
I161112 09:05:59.968386 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:49117})
I161112 09:05:59.983678 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:44427})
I161112 09:05:59.984317 15599 gossip/client.go:130 [n5] closing client to node 1 (127.0.0.1:60427): received forward from node 1 to 3 (127.0.0.1:49117)
I161112 09:05:59.984873 15609 gossip/client.go:125 [n5] started gossip client to 127.0.0.1:49117
I161112 09:05:59.992627 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:44427})
I161112 09:05:59.993049 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:35233})
I161112 09:06:00.089402 15285 storage/client_test.go:414 gossip network initialized
I161112 09:06:00.093818 15285 storage/replica_raftstorage.go:445 [s1,r1/1:/M{in-ax}] generated snapshot 20f38742 at index 21 in 103.603µs.
I161112 09:06:00.098784 15285 storage/store.go:3127 [s1,r1/1:/M{in-ax}] streamed snapshot: kv pairs: 40, log entries: 11
I161112 09:06:00.101447 15646 storage/replica_raftstorage.go:589 [s2,r1/?:{-}] applying preemptive snapshot at index 21 (id=20f38742, encoded size=16, 1 rocksdb batches, 11 log entries)
I161112 09:06:00.104311 15646 storage/replica_raftstorage.go:592 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 0.003s
I161112 09:06:00.109090 15285 storage/replica_command.go:3245 [s1,r1/1:/M{in-ax}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161112 09:06:00.131387 15732 storage/replica.go:2055 [s1,r1/1:/M{in-ax}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161112 09:06:00.158002 15285 storage/replica_raftstorage.go:445 [s1,r1/1:/M{in-ax}] generated snapshot 65c65bce at index 24 in 99.203µs.
I161112 09:06:00.160636 15285 storage/store.go:3127 [s1,r1/1:/M{in-ax}] streamed snapshot: kv pairs: 44, log entries: 14
I161112 09:06:00.165451 15771 storage/replica_raftstorage.go:589 [s3,r1/?:{-}] applying preemptive snapshot at index 24 (id=65c65bce, encoded size=16, 1 rocksdb batches, 14 log entries)
I161112 09:06:00.165676 15719 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:00.192963 15771 storage/replica_raftstorage.go:592 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 0.006s
I161112 09:06:00.290788 15285 storage/replica_command.go:3245 [s1,r1/1:/M{in-ax}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161112 09:06:00.440458 15312 storage/replica.go:2055 [s1,r1/1:/M{in-ax}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161112 09:06:00.650870 15823 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:01.102336 15285 storage/replica_command.go:2361 initiating a split of this range at key "m" [r2]
I161112 09:06:01.726387 15285 storage/replica_raftstorage.go:445 [s1,r2/1:{"m"-/Max}] generated snapshot bd16f07b at index 10 in 124.803µs.
I161112 09:06:01.748622 15285 storage/store.go:3127 [s1,r2/1:{"m"-/Max}] streamed snapshot: kv pairs: 28, log entries: 0
I161112 09:06:01.771306 13904 storage/replica_raftstorage.go:589 [s4,r2/?:{-}] applying preemptive snapshot at index 10 (id=bd16f07b, encoded size=16, 1 rocksdb batches, 0 log entries)
I161112 09:06:01.772583 13904 storage/replica_raftstorage.go:592 [s4,r2/?:{"m"-/Max}] applied preemptive snapshot in 0.001s
I161112 09:06:01.833079 15285 storage/replica_command.go:3245 [s1,r2/1:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > next_replica_id:4
W161112 09:06:02.185653 16031 storage/stores.go:218 range not contained in one range: [/Meta2/Max,"m\x00"), but have [/Min,"m")
I161112 09:06:02.634034 16077 storage/replica.go:2055 [s1,r2/1:{"m"-/Max}] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:02.815432 15285 storage/replica_raftstorage.go:445 [s1,r2/1:{"m"-/Max}] generated snapshot 43406c1e at index 14 in 105.002µs.
I161112 09:06:02.844643 16114 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:02.849288 15285 storage/store.go:3127 [s1,r2/1:{"m"-/Max}] streamed snapshot: kv pairs: 30, log entries: 4
I161112 09:06:02.878292 16049 storage/replica_raftstorage.go:589 [s5,r2/?:{-}] applying preemptive snapshot at index 14 (id=43406c1e, encoded size=16, 1 rocksdb batches, 4 log entries)
I161112 09:06:02.880235 16049 storage/replica_raftstorage.go:592 [s5,r2/?:{"m"-/Max}] applied preemptive snapshot in 0.002s
I161112 09:06:03.002093 15285 storage/replica_command.go:3245 [s1,r2/1:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:4 store_id:4 replica_id:4 > next_replica_id:5
I161112 09:06:03.717514 16170 storage/replica.go:2055 [s1,r2/1:{"m"-/Max}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:5}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4} {NodeID:5 StoreID:5 ReplicaID:5}]
I161112 09:06:03.930439 16128 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:04.065009 16227 storage/raft_transport.go:436 raft transport stream to node 2 established
I161112 09:06:04.519898 16199 storage/raft_transport.go:436 raft transport stream to node 3 established
I161112 09:06:05.980153 15396 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:06.011372 15395 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:06.307278 15377 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:06.333669 15377 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:07.241497 15468 storage/replica_proposal.go:328 [s3,r1/3:{/Min-"m"}] new range lease replica {3 3 3} 1970-01-01 00:00:04.500000128 +0000 UTC 1.800000002s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 4.500000128s [physicalTime=1970-01-01 00:00:09.000000133 +0000 UTC]
E161112 09:06:07.708697 15380 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
E161112 09:06:07.730539 15607 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
I161112 09:06:09.447662 15397 storage/replica_proposal.go:328 [s2,r1/2:{/Min-"m"}] new range lease replica {2 2 2} 1970-01-01 00:00:22.500000148 +0000 UTC 9.00000001s following replica {3 3 3} 1970-01-01 00:00:04.500000128 +0000 UTC 18.00000002s [physicalTime=1970-01-01 00:00:32.400000159 +0000 UTC]
I161112 09:06:10.004053 15326 storage/replica_proposal.go:328 [s1,r1/1:{/Min-"m"}] new range lease replica {1 1 1} 1970-01-01 00:00:40.500000168 +0000 UTC 14.400000016s following replica {2 2 2} 1970-01-01 00:00:22.500000148 +0000 UTC 18.00000002s [physicalTime=1970-01-01 00:00:54.000000183 +0000 UTC]
I161112 09:06:10.650302 16397 storage/raft_transport.go:436 raft transport stream to node 2 established
I161112 09:06:10.676011 16448 storage/raft_transport.go:436 raft transport stream to node 4 established
I161112 09:06:10.698787 16449 storage/raft_transport.go:436 raft transport stream to node 3 established
I161112 09:06:10.707660 16485 storage/raft_transport.go:436 raft transport stream to node 5 established
I161112 09:06:10.958782 16461 storage/raft_transport.go:436 raft transport stream to node 5 established
I161112 09:06:11.271905 15316 storage/replica_proposal.go:377 [s1,r2/1:{"m"-/Max}] range [n1,s1,r2/1:{"m"-/Max}]: transferring raft leadership to replica ID 4
I161112 09:06:11.305072 15503 storage/replica_proposal.go:328 [s4,r2/4:{"m"-/Max}] new range lease replica {4 4 4} 1970-01-01 00:00:31.500000158 +0000 UTC 25.200000028s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 31.500000158s [physicalTime=1970-01-01 00:00:55.800000185 +0000 UTC]
I161112 09:06:11.604904 16537 storage/raft_transport.go:436 raft transport stream to node 4 established
I161112 09:06:11.612294 16538 storage/raft_transport.go:436 raft transport stream to node 4 established
W161112 09:06:12.221426 15469 storage/store.go:786 storeMu: mutex held by github.com/cockroachdb/cockroach/pkg/storage.(*Store).processRequestQueue for 113.396083ms (>100ms):
goroutine 15469 [running]:
runtime/debug.Stack(0x1c2a020, 0x1c2a060, 0x49)
/usr/local/go/src/runtime/debug/stack.go:24 +0x79
github.com/cockroachdb/cockroach/pkg/util/syncutil.ThresholdLogger.func1(0x6c24973)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:65 +0xe1
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc4227742e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:92 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*Store).processRequestQueue(0xc422774000, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3170 +0xca
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc42540a3c0, 0xc42a989710)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:241 +0x267
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x33
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc42a989710, 0xc4288ce0e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
W161112 09:06:12.658494 16631 storage/store.go:786 storeMu: mutex held by github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica for 158.338564ms (>100ms):
goroutine 16631 [running]:
runtime/debug.Stack(0x1c26ea8, 0x1c26ee8, 0x43)
/usr/local/go/src/runtime/debug/stack.go:24 +0x79
github.com/cockroachdb/cockroach/pkg/util/syncutil.ThresholdLogger.func1(0x9700e04)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:65 +0xe1
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc4297ad7e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:92 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica(0xc4297ad500, 0xc42a94d6a0, 0xc, 0x14, 0x0, 0x0, 0x0, 0xc42b52bc00)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1481 +0x1f6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).LookupReplica(0xc428b8a840, 0xc42a94d6a0, 0xc, 0x14, 0xc42a94d6a0, 0xd, 0x14, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:209 +0x1a6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).Send(0xc428b8a840, 0x7fa14eaa4578, 0xc42782cc40, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0xc42a6a0cc0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:152 +0x131
github.com/cockroachdb/cockroach/pkg/storage_test.(*multiTestContextKVTransport).SendNext.func1(0x7fa14eaa4578, 0xc42782cc40)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/client_test.go:471 +0x136
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc42956be60, 0x20c622c, 0x16, 0x1f5, 0x0, 0x0, 0xc42a94d9c0, 0x7fa14eaa4578, 0xc42782cc40)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:264 +0xdf
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:265 +0x234
I161112 09:06:12.757425 15285 storage/replica_command.go:3245 [replicate,s4,r2/4:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:4 store_id:4 replica_id:4 > replicas:<node_id:5 store_id:5 replica_id:5 > next_replica_id:6
I161112 09:06:13.007265 15465 storage/replica_proposal.go:377 [s3,r1/3:{/Min-"m"}] range [n3,s3,r1/3:{/Min-"m"}]: transferring raft leadership to replica ID 1
I161112 09:06:14.798486 15467 storage/replica_proposal.go:377 [s3,r1/3:{/Min-"m"}] range [n3,s3,r1/3:{/Min-"m"}]: transferring raft leadership to replica ID 1
I161112 09:06:15.119974 16706 storage/replica.go:2055 [s4,r2/4:{"m"-/Max}] proposing REMOVE_REPLICA {NodeID:1 StoreID:1 ReplicaID:1}: [{NodeID:5 StoreID:5 ReplicaID:5} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:15.252034 15285 storage/replica_command.go:3245 [replicate,s4,r2/4:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:5 store_id:5 replica_id:5 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:4 store_id:4 replica_id:4 > next_replica_id:6
```
|
1.0
|
: TestStoreRangeDownReplicate failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/59693a140f93a4658174054884db891012ac0380
Stress build found a failed test:
```
I161112 09:05:59.708852 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.710388 15285 gossip/gossip.go:237 [n?] initial resolvers: []
W161112 09:05:59.710805 15285 gossip/gossip.go:1055 [n?] no resolvers found; use --join to specify a connected node
I161112 09:05:59.711122 15285 base/node_id.go:62 NodeID set to 1
I161112 09:05:59.742196 15285 storage/store.go:1188 [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161112 09:05:59.742527 15285 gossip/gossip.go:280 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:60427" > attrs:<> locality:<>
I161112 09:05:59.754150 15321 storage/replica_proposal.go:328 [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900.000124ms following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=1970-01-01 00:00:00.000000123 +0000 UTC]
I161112 09:05:59.770204 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.773778 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.774216 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.774550 15285 base/node_id.go:62 NodeID set to 2
I161112 09:05:59.804376 15285 storage/store.go:1188 [n2,s2]: failed initial metrics computation: [n2,s2]: system config not yet available
I161112 09:05:59.804704 15285 gossip/gossip.go:280 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35233" > attrs:<> locality:<>
I161112 09:05:59.809434 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.811685 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.812106 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.812423 15285 base/node_id.go:62 NodeID set to 3
I161112 09:05:59.824939 15354 gossip/client.go:125 [n2] started gossip client to 127.0.0.1:60427
I161112 09:05:59.846685 15285 storage/store.go:1188 [n3,s3]: failed initial metrics computation: [n3,s3]: system config not yet available
I161112 09:05:59.847009 15285 gossip/gossip.go:280 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:49117" > attrs:<> locality:<>
I161112 09:05:59.858346 15480 gossip/client.go:125 [n3] started gossip client to 127.0.0.1:60427
I161112 09:05:59.861901 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.863994 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.864419 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.864727 15285 base/node_id.go:62 NodeID set to 4
I161112 09:05:59.885355 15285 storage/store.go:1188 [n4,s4]: failed initial metrics computation: [n4,s4]: system config not yet available
I161112 09:05:59.885677 15285 gossip/gossip.go:280 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:44427" > attrs:<> locality:<>
I161112 09:05:59.887436 15483 gossip/client.go:125 [n4] started gossip client to 127.0.0.1:60427
I161112 09:05:59.891772 15285 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161112 09:05:59.900001 15285 gossip/gossip.go:237 [n?] initial resolvers: [127.0.0.1:60427]
W161112 09:05:59.900432 15285 gossip/gossip.go:1057 [n?] no incoming or outgoing connections
I161112 09:05:59.900745 15285 base/node_id.go:62 NodeID set to 5
I161112 09:05:59.930284 15285 storage/store.go:1188 [n5,s5]: failed initial metrics computation: [n5,s5]: system config not yet available
I161112 09:05:59.930607 15285 gossip/gossip.go:280 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:51325" > attrs:<> locality:<>
I161112 09:05:59.937120 15599 gossip/client.go:125 [n5] started gossip client to 127.0.0.1:60427
I161112 09:05:59.968386 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:49117})
I161112 09:05:59.983678 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:44427})
I161112 09:05:59.984317 15599 gossip/client.go:130 [n5] closing client to node 1 (127.0.0.1:60427): received forward from node 1 to 3 (127.0.0.1:49117)
I161112 09:05:59.984873 15609 gossip/client.go:125 [n5] started gossip client to 127.0.0.1:49117
I161112 09:05:59.992627 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:44427})
I161112 09:05:59.993049 15583 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:35233})
I161112 09:06:00.089402 15285 storage/client_test.go:414 gossip network initialized
I161112 09:06:00.093818 15285 storage/replica_raftstorage.go:445 [s1,r1/1:/M{in-ax}] generated snapshot 20f38742 at index 21 in 103.603µs.
I161112 09:06:00.098784 15285 storage/store.go:3127 [s1,r1/1:/M{in-ax}] streamed snapshot: kv pairs: 40, log entries: 11
I161112 09:06:00.101447 15646 storage/replica_raftstorage.go:589 [s2,r1/?:{-}] applying preemptive snapshot at index 21 (id=20f38742, encoded size=16, 1 rocksdb batches, 11 log entries)
I161112 09:06:00.104311 15646 storage/replica_raftstorage.go:592 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 0.003s
I161112 09:06:00.109090 15285 storage/replica_command.go:3245 [s1,r1/1:/M{in-ax}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161112 09:06:00.131387 15732 storage/replica.go:2055 [s1,r1/1:/M{in-ax}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161112 09:06:00.158002 15285 storage/replica_raftstorage.go:445 [s1,r1/1:/M{in-ax}] generated snapshot 65c65bce at index 24 in 99.203µs.
I161112 09:06:00.160636 15285 storage/store.go:3127 [s1,r1/1:/M{in-ax}] streamed snapshot: kv pairs: 44, log entries: 14
I161112 09:06:00.165451 15771 storage/replica_raftstorage.go:589 [s3,r1/?:{-}] applying preemptive snapshot at index 24 (id=65c65bce, encoded size=16, 1 rocksdb batches, 14 log entries)
I161112 09:06:00.165676 15719 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:00.192963 15771 storage/replica_raftstorage.go:592 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 0.006s
I161112 09:06:00.290788 15285 storage/replica_command.go:3245 [s1,r1/1:/M{in-ax}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161112 09:06:00.440458 15312 storage/replica.go:2055 [s1,r1/1:/M{in-ax}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161112 09:06:00.650870 15823 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:01.102336 15285 storage/replica_command.go:2361 initiating a split of this range at key "m" [r2]
I161112 09:06:01.726387 15285 storage/replica_raftstorage.go:445 [s1,r2/1:{"m"-/Max}] generated snapshot bd16f07b at index 10 in 124.803µs.
I161112 09:06:01.748622 15285 storage/store.go:3127 [s1,r2/1:{"m"-/Max}] streamed snapshot: kv pairs: 28, log entries: 0
I161112 09:06:01.771306 13904 storage/replica_raftstorage.go:589 [s4,r2/?:{-}] applying preemptive snapshot at index 10 (id=bd16f07b, encoded size=16, 1 rocksdb batches, 0 log entries)
I161112 09:06:01.772583 13904 storage/replica_raftstorage.go:592 [s4,r2/?:{"m"-/Max}] applied preemptive snapshot in 0.001s
I161112 09:06:01.833079 15285 storage/replica_command.go:3245 [s1,r2/1:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > next_replica_id:4
W161112 09:06:02.185653 16031 storage/stores.go:218 range not contained in one range: [/Meta2/Max,"m\x00"), but have [/Min,"m")
I161112 09:06:02.634034 16077 storage/replica.go:2055 [s1,r2/1:{"m"-/Max}] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:02.815432 15285 storage/replica_raftstorage.go:445 [s1,r2/1:{"m"-/Max}] generated snapshot 43406c1e at index 14 in 105.002µs.
I161112 09:06:02.844643 16114 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:02.849288 15285 storage/store.go:3127 [s1,r2/1:{"m"-/Max}] streamed snapshot: kv pairs: 30, log entries: 4
I161112 09:06:02.878292 16049 storage/replica_raftstorage.go:589 [s5,r2/?:{-}] applying preemptive snapshot at index 14 (id=43406c1e, encoded size=16, 1 rocksdb batches, 4 log entries)
I161112 09:06:02.880235 16049 storage/replica_raftstorage.go:592 [s5,r2/?:{"m"-/Max}] applied preemptive snapshot in 0.002s
I161112 09:06:03.002093 15285 storage/replica_command.go:3245 [s1,r2/1:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:4 store_id:4 replica_id:4 > next_replica_id:5
I161112 09:06:03.717514 16170 storage/replica.go:2055 [s1,r2/1:{"m"-/Max}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:5}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4} {NodeID:5 StoreID:5 ReplicaID:5}]
I161112 09:06:03.930439 16128 storage/raft_transport.go:436 raft transport stream to node 1 established
I161112 09:06:04.065009 16227 storage/raft_transport.go:436 raft transport stream to node 2 established
I161112 09:06:04.519898 16199 storage/raft_transport.go:436 raft transport stream to node 3 established
I161112 09:06:05.980153 15396 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:06.011372 15395 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:06.307278 15377 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:06.333669 15377 storage/replica.go:2102 [s2,r2/2:{"m"-/Max}] not quiescing: 1 pending commands
I161112 09:06:07.241497 15468 storage/replica_proposal.go:328 [s3,r1/3:{/Min-"m"}] new range lease replica {3 3 3} 1970-01-01 00:00:04.500000128 +0000 UTC 1.800000002s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 4.500000128s [physicalTime=1970-01-01 00:00:09.000000133 +0000 UTC]
E161112 09:06:07.708697 15380 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
E161112 09:06:07.730539 15607 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
I161112 09:06:09.447662 15397 storage/replica_proposal.go:328 [s2,r1/2:{/Min-"m"}] new range lease replica {2 2 2} 1970-01-01 00:00:22.500000148 +0000 UTC 9.00000001s following replica {3 3 3} 1970-01-01 00:00:04.500000128 +0000 UTC 18.00000002s [physicalTime=1970-01-01 00:00:32.400000159 +0000 UTC]
I161112 09:06:10.004053 15326 storage/replica_proposal.go:328 [s1,r1/1:{/Min-"m"}] new range lease replica {1 1 1} 1970-01-01 00:00:40.500000168 +0000 UTC 14.400000016s following replica {2 2 2} 1970-01-01 00:00:22.500000148 +0000 UTC 18.00000002s [physicalTime=1970-01-01 00:00:54.000000183 +0000 UTC]
I161112 09:06:10.650302 16397 storage/raft_transport.go:436 raft transport stream to node 2 established
I161112 09:06:10.676011 16448 storage/raft_transport.go:436 raft transport stream to node 4 established
I161112 09:06:10.698787 16449 storage/raft_transport.go:436 raft transport stream to node 3 established
I161112 09:06:10.707660 16485 storage/raft_transport.go:436 raft transport stream to node 5 established
I161112 09:06:10.958782 16461 storage/raft_transport.go:436 raft transport stream to node 5 established
I161112 09:06:11.271905 15316 storage/replica_proposal.go:377 [s1,r2/1:{"m"-/Max}] range [n1,s1,r2/1:{"m"-/Max}]: transferring raft leadership to replica ID 4
I161112 09:06:11.305072 15503 storage/replica_proposal.go:328 [s4,r2/4:{"m"-/Max}] new range lease replica {4 4 4} 1970-01-01 00:00:31.500000158 +0000 UTC 25.200000028s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 31.500000158s [physicalTime=1970-01-01 00:00:55.800000185 +0000 UTC]
I161112 09:06:11.604904 16537 storage/raft_transport.go:436 raft transport stream to node 4 established
I161112 09:06:11.612294 16538 storage/raft_transport.go:436 raft transport stream to node 4 established
W161112 09:06:12.221426 15469 storage/store.go:786 storeMu: mutex held by github.com/cockroachdb/cockroach/pkg/storage.(*Store).processRequestQueue for 113.396083ms (>100ms):
goroutine 15469 [running]:
runtime/debug.Stack(0x1c2a020, 0x1c2a060, 0x49)
/usr/local/go/src/runtime/debug/stack.go:24 +0x79
github.com/cockroachdb/cockroach/pkg/util/syncutil.ThresholdLogger.func1(0x6c24973)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:65 +0xe1
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc4227742e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:92 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*Store).processRequestQueue(0xc422774000, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3170 +0xca
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc42540a3c0, 0xc42a989710)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:241 +0x267
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:181 +0x33
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc42a989710, 0xc4288ce0e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
W161112 09:06:12.658494 16631 storage/store.go:786 storeMu: mutex held by github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica for 158.338564ms (>100ms):
goroutine 16631 [running]:
runtime/debug.Stack(0x1c26ea8, 0x1c26ee8, 0x43)
/usr/local/go/src/runtime/debug/stack.go:24 +0x79
github.com/cockroachdb/cockroach/pkg/util/syncutil.ThresholdLogger.func1(0x9700e04)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:65 +0xe1
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc4297ad7e0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:92 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica(0xc4297ad500, 0xc42a94d6a0, 0xc, 0x14, 0x0, 0x0, 0x0, 0xc42b52bc00)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1481 +0x1f6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).LookupReplica(0xc428b8a840, 0xc42a94d6a0, 0xc, 0x14, 0xc42a94d6a0, 0xd, 0x14, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:209 +0x1a6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).Send(0xc428b8a840, 0x7fa14eaa4578, 0xc42782cc40, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0xc42a6a0cc0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:152 +0x131
github.com/cockroachdb/cockroach/pkg/storage_test.(*multiTestContextKVTransport).SendNext.func1(0x7fa14eaa4578, 0xc42782cc40)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/client_test.go:471 +0x136
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc42956be60, 0x20c622c, 0x16, 0x1f5, 0x0, 0x0, 0xc42a94d9c0, 0x7fa14eaa4578, 0xc42782cc40)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:264 +0xdf
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:265 +0x234
I161112 09:06:12.757425 15285 storage/replica_command.go:3245 [replicate,s4,r2/4:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:4 store_id:4 replica_id:4 > replicas:<node_id:5 store_id:5 replica_id:5 > next_replica_id:6
I161112 09:06:13.007265 15465 storage/replica_proposal.go:377 [s3,r1/3:{/Min-"m"}] range [n3,s3,r1/3:{/Min-"m"}]: transferring raft leadership to replica ID 1
I161112 09:06:14.798486 15467 storage/replica_proposal.go:377 [s3,r1/3:{/Min-"m"}] range [n3,s3,r1/3:{/Min-"m"}]: transferring raft leadership to replica ID 1
I161112 09:06:15.119974 16706 storage/replica.go:2055 [s4,r2/4:{"m"-/Max}] proposing REMOVE_REPLICA {NodeID:1 StoreID:1 ReplicaID:1}: [{NodeID:5 StoreID:5 ReplicaID:5} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:15.252034 15285 storage/replica_command.go:3245 [replicate,s4,r2/4:{"m"-/Max}] change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:5 store_id:5 replica_id:5 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:4 store_id:4 replica_id:4 > next_replica_id:6
I161112 09:06:15.286049 16048 storage/store.go:2984 [s1,r2/1:{"m"-/Max}] added to replica GC queue (peer suggestion)
I161112 09:06:16.192079 15321 storage/replica.go:2102 [s1,r1/1:{/Min-"m"}] not quiescing: 3 pending commands
I161112 09:06:16.217392 15321 storage/replica.go:2102 [s1,r1/1:{/Min-"m"}] not quiescing: 5 pending commands
I161112 09:06:16.448456 16668 storage/replica.go:2055 [s4,r2/4:{"m"-/Max}] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:5 StoreID:5 ReplicaID:5} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:16.551041 16689 storage/raft_transport.go:436 raft transport stream to node 5 established
I161112 09:06:16.578834 15525 storage/replica.go:2055 [s4,r2/4:{"m"-/Max}] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:5 StoreID:5 ReplicaID:5} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:16.654878 15328 storage/replica.go:2102 [s1,r1/1:{/Min-"m"}] not quiescing: 5 pending commands
I161112 09:06:16.789513 15533 storage/replica.go:2055 [s4,r2/4:{"m"-/Max}] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:5 StoreID:5 ReplicaID:5} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:17.049126 16783 storage/raft_transport.go:436 raft transport stream to node 2 established
I161112 09:06:17.061086 16784 storage/raft_transport.go:436 raft transport stream to node 3 established
I161112 09:06:17.229724 15530 storage/replica.go:2055 [s4,r2/4:{"m"-/Max}] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:5 StoreID:5 ReplicaID:5} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:4}]
I161112 09:06:17.494267 16229 storage/store.go:2984 [s3,r2/3:{"m"-/Max}] added to replica GC queue (peer suggestion)
I161112 09:06:17.520481 16229 storage/store.go:2984 [s3,r2/3:{"m"-/Max}] added to replica GC queue (peer suggestion)
I161112 09:06:17.578846 16846 util/stop/stopper.go:468 quiescing; tasks left:
1 storage/queue.go:477
I161112 09:06:17.589785 16845 util/stop/stopper.go:468 quiescing; tasks left:
9 storage/client_test.go:501
1 storage/queue.go:477
I161112 09:06:17.589946 16847 util/stop/stopper.go:468 quiescing; tasks left:
1 storage/queue.go:477
I161112 09:06:17.590114 16848 util/stop/stopper.go:468 quiescing; tasks left:
2 storage/client_test.go:501
1 storage/intent_resolver.go:383
W161112 09:06:17.590276 16670 storage/replica.go:1803 [s4,r2/4:{"m"-/Max}] shutdown cancellation of command PushTxn [/Local/Range/"m"/RangeDescriptor,/Min)
W161112 09:06:17.593625 16817 storage/replica.go:1803 [s1,r1/1:{/Min-"m"}] shutdown cancellation of command [txn: cf094229], BeginTransaction [/System/NodeLiveness/4,/Min), ConditionalPut [/System/NodeLiveness/4,/Min), EndTransaction [/System/NodeLiveness/4,/Min)
I161112 09:06:17.593896 16848 util/stop/stopper.go:468 quiescing; tasks left:
1 storage/intent_resolver.go:383
1 storage/client_test.go:501
W161112 09:06:17.594044 16671 storage/replica.go:1803 [s1,r1/1:{/Min-"m"}] shutdown cancellation of command TruncateLog [/Min,/Min)
I161112 09:06:17.601123 16845 util/stop/stopper.go:468 quiescing; tasks left:
8 storage/client_test.go:501
1 storage/queue.go:477
E161112 09:06:17.671874 15430 storage/queue.go:568 [raftlog,s2,r1/2:{/Min-"m"}] result is ambiguous
E161112 09:06:17.672586 15230 storage/queue.go:568 [replicaGC,s1,r2/1:{"m"-/Max}] result is ambiguous
E161112 09:06:17.672781 15488 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
E161112 09:06:17.673723 15488 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
W161112 09:06:17.673878 16714 storage/replica.go:1803 [s4,r2/4:{"m"-/Max}] shutdown cancellation of command PushTxn [/Local/Range/"m"/RangeDescriptor,/Min)
I161112 09:06:17.674317 16848 util/stop/stopper.go:468 quiescing; tasks left:
1 storage/intent_resolver.go:383
W161112 09:06:17.675769 16868 storage/replica.go:1803 [s1,r1/1:{/Min-"m"}] shutdown cancellation of command [txn: c85bfcaa], BeginTransaction [/System/NodeLiveness/1,/Min), ConditionalPut [/System/NodeLiveness/1,/Min), EndTransaction [/System/NodeLiveness/1,/Min)
E161112 09:06:17.676339 15350 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
W161112 09:06:17.713750 16877 storage/replica.go:1803 [s1,r1/1:{/Min-"m"}] shutdown cancellation of command [txn: cb440817], BeginTransaction [/System/NodeLiveness/5,/Min), ConditionalPut [/System/NodeLiveness/5,/Min), EndTransaction [/System/NodeLiveness/5,/Min)
W161112 09:06:17.720163 16843 storage/replica.go:1803 [s1,r1/1:{/Min-"m"}] shutdown cancellation of command ResolveIntent [/Meta2/Max,/Min)
W161112 09:06:17.720757 16713 storage/intent_resolver.go:337 [n4,s4,r2/4:{"m"-/Max}]: failed to resolve intents: result is ambiguous
E161112 09:06:17.721244 15607 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
E161112 09:06:17.722203 15607 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
W161112 09:06:17.771031 16898 storage/replica.go:1803 [s1,r1/1:{/Min-"m"}] shutdown cancellation of command [txn: b029025a], BeginTransaction [/System/NodeLiveness/3,/Min), ConditionalPut [/System/NodeLiveness/3,/Min), EndTransaction [/System/NodeLiveness/3,/Min)
I161112 09:06:17.783160 16845 util/stop/stopper.go:468 quiescing; tasks left:
6 storage/client_test.go:501
1 storage/queue.go:477
W161112 09:06:17.793384 16867 storage/replica.go:1803 [s1,r1/1:{/Min-"m"}] shutdown cancellation of command [txn: 347c5fb7], BeginTransaction [/System/NodeLiveness/2,/Min), ConditionalPut [/System/NodeLiveness/2,/Min), EndTransaction [/System/NodeLiveness/2,/Min)
E161112 09:06:17.793645 15514 storage/queue.go:568 [replicaGC,s3,r2/3:{"m"-/Max}] result is ambiguous
E161112 09:06:17.793878 15350 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
E161112 09:06:17.794734 15350 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
E161112 09:06:17.794868 15488 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
E161112 09:06:17.794993 15491 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
E161112 09:06:17.795815 15491 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
E161112 09:06:17.795946 15607 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
E161112 09:06:17.833980 15380 storage/node_liveness.go:141 [hb] failed liveness heartbeat: result is ambiguous
E161112 09:06:17.836018 15380 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
I161112 09:06:17.860180 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
E161112 09:06:17.904834 15488 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
E161112 09:06:17.908913 15491 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
W161112 09:06:17.909534 15749 storage/store.go:2988 [s2] got error from range 1, replica {1 1 1}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:2 StoreID:2 ReplicaID:2}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161112 09:06:17.909748 15749 storage/store.go:2988 [s2] got error from range 1, replica {1 1 1}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:2 StoreID:2 ReplicaID:2}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161112 09:06:17.909951 15749 storage/store.go:2988 [s2] got error from range 1, replica {1 1 1}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:2 StoreID:2 ReplicaID:2}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161112 09:06:17.910148 15749 storage/store.go:2988 [s2] got error from range 1, replica {1 1 1}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:2 StoreID:2 ReplicaID:2}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161112 09:06:17.910350 15749 storage/store.go:2988 [s2] got error from range 1, replica {1 1 1}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:2 StoreID:2 ReplicaID:2}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
E161112 09:06:17.912286 16803 storage/store.go:2982 [s3,r2/3:{"m"-/Max}] unable to add to replica GC queue: queue stopped
I161112 09:06:17.914445 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
E161112 09:06:17.920634 16509 storage/store.go:2982 [s3,r2/3:{"m"-/Max}] unable to add to replica GC queue: queue stopped
W161112 09:06:17.921135 16837 storage/store.go:2988 [s5] got error from range 2, replica {2 2 2}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:5 StoreID:5 ReplicaID:5}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
W161112 09:06:17.935507 15749 storage/store.go:2988 [s2] got error from range 0, replica {1 1 0}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:2 StoreID:2 ReplicaID:0}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:0}
E161112 09:06:17.971974 15607 storage/node_liveness.go:141 [hb] failed liveness heartbeat: node unavailable; try another peer
I161112 09:06:17.975236 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
W161112 09:06:17.978120 16229 storage/store.go:2988 [s3] got error from range 1, replica {2 2 2}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
W161112 09:06:17.978412 16229 storage/raft_transport.go:477 no handler found for store 3 in response range_id:1 from_replica:<node_id:2 store_id:2 replica_id:2 > to_replica:<node_id:3 store_id:3 replica_id:3 > union:<error:<message:"storage/raft_transport.go:258: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}" transaction_restart:NONE origin_node:0 now:<wall_time:0 logical:0 > > >
W161112 09:06:17.978522 16445 storage/store.go:2988 [s4] got error from range 2, replica {2 2 2}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:4 StoreID:4 ReplicaID:4}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
W161112 09:06:17.978974 16445 storage/store.go:2988 [s4] got error from range 2, replica {2 2 2}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:4 StoreID:4 ReplicaID:4}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
I161112 09:06:17.983745 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161112 09:06:17.987501 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161112 09:06:17.989449 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161112 09:06:17.990507 15557 http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:51325->127.0.0.1:54106: use of closed network connection
I161112 09:06:17.990836 15453 http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:44427->127.0.0.1:38290: use of closed network connection
I161112 09:06:17.991151 15443 http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:49117->127.0.0.1:58611: use of closed network connection
I161112 09:06:17.991465 15352 http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:35233->127.0.0.1:40073: use of closed network connection
I161112 09:06:17.991700 15495 http2_client.go:1053 transport: http2Client.notifyError got notified that the client transport was broken EOF.
W161112 09:06:17.998558 16449 storage/raft_transport.go:442 raft transport stream to node 3 failed: EOF
W161112 09:06:17.998790 16448 storage/raft_transport.go:442 raft transport stream to node 4 failed: EOF
I161112 09:06:17.998979 15294 http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:60427->127.0.0.1:56528: use of closed network connection
I161112 09:06:17.999140 15258 http2_client.go:1053 transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161112 09:06:17.999543 15260 /go/src/google.golang.org/grpc/clientconn.go:667 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:60427: operation was canceled"; Reconnecting to {"127.0.0.1:60427" <nil>}
I161112 09:06:17.999646 15546 http2_client.go:1053 transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161112 09:06:18.000181 15548 /go/src/google.golang.org/grpc/clientconn.go:667 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:51325: operation was canceled"; Reconnecting to {"127.0.0.1:51325" <nil>}
I161112 09:06:18.000352 15548 /go/src/google.golang.org/grpc/clientconn.go:767 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
I161112 09:06:18.000491 15366 http2_client.go:1053 transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161112 09:06:18.000791 15368 /go/src/google.golang.org/grpc/clientconn.go:667 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:35233: operation was canceled"; Reconnecting to {"127.0.0.1:35233" <nil>}
I161112 09:06:18.000972 15368 /go/src/google.golang.org/grpc/clientconn.go:767 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
I161112 09:06:18.001171 15260 /go/src/google.golang.org/grpc/clientconn.go:767 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
W161112 09:06:18.001435 16199 storage/raft_transport.go:442 raft transport stream to node 3 failed: EOF
W161112 09:06:18.001610 15719 storage/raft_transport.go:442 raft transport stream to node 1 failed: EOF
W161112 09:06:18.001805 16538 storage/raft_transport.go:442 raft transport stream to node 4 failed: EOF
W161112 09:06:18.001961 16114 storage/raft_transport.go:442 raft transport stream to node 1 failed: EOF
I161112 09:06:18.002119 15497 /go/src/google.golang.org/grpc/clientconn.go:667 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:44427: getsockopt: connection refused"; Reconnecting to {"127.0.0.1:44427" <nil>}
I161112 09:06:18.002221 15136 http2_client.go:1053 transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161112 09:06:18.002662 15442 /go/src/google.golang.org/grpc/clientconn.go:667 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:49117: operation was canceled"; Reconnecting to {"127.0.0.1:49117" <nil>}
I161112 09:06:18.002839 15442 /go/src/google.golang.org/grpc/clientconn.go:767 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
W161112 09:06:18.002975 15823 storage/raft_transport.go:442 raft transport stream to node 1 failed: EOF
I161112 09:06:18.006438 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161112 09:06:18.006819 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161112 09:06:18.009765 15497 /go/src/google.golang.org/grpc/clientconn.go:767 grpc: addrConn.transportMonitor exits due to: context canceled
I161112 09:06:18.010053 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161112 09:06:18.010431 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161112 09:06:18.010802 16844 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
client_raft_test.go:1189: Failed to achieve proper replication within 10 seconds
```
|
non_process
|
eof storage raft transport go raft transport stream to node failed eof storage raft transport go raft transport stream to node failed eof go src google golang org grpc clientconn go grpc addrconn resettransport failed to create client transport connection error desc transport dial tcp getsockopt connection refused reconnecting to client go transport notifyerror got notified that the client transport was broken eof go src google golang org grpc clientconn go grpc addrconn resettransport failed to create client transport connection error desc transport dial tcp operation was canceled reconnecting to go src google golang org grpc clientconn go grpc addrconn transportmonitor exits due to grpc the connection is closing storage raft transport go raft transport stream to node failed eof util stop stopper go stop has been called stopping or quiescing all running tasks util stop stopper go stop has been called stopping or quiescing all running tasks go src google golang org grpc clientconn go grpc addrconn transportmonitor exits due to context canceled util stop stopper go stop has been called stopping or quiescing all running tasks util stop stopper go stop has been called stopping or quiescing all running tasks util stop stopper go stop has been called stopping or quiescing all running tasks client raft test go failed to achieve proper replication within seconds
| 0
|
3,351
| 6,486,694,393
|
IssuesEvent
|
2017-08-19 22:19:04
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
EthPrice data location is hardcoded in the library
|
apps-ethPrice status-inprocess type-enhancement
|
It should be in the application code (I think). Revisit this.
|
1.0
|
EthPrice data location is hardcoded in the library - It should be in the application code (I think). Revisit this.
|
process
|
ethprice data location is hardcoded in the library it should be in the application code i think revisit this
| 1
|
325,768
| 9,935,762,713
|
IssuesEvent
|
2019-07-02 17:22:18
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
Create an option to change featured image size in the single settings.
|
NEED FAST REVIEW [Priority: HIGH] enhancement
|
Create an option to change featured image width and height in the single settings so users can change the image size to suit their requirements.
Ref: https://secure.helpscout.net/conversation/882174191/71090?folderId=2322649
|
1.0
|
Create an option to change featured image size in the single settings. - Create an option to change featured image width and height in the single settings so users can change the image size to suit their requirements.
Ref: https://secure.helpscout.net/conversation/882174191/71090?folderId=2322649
|
non_process
|
create an option to change featured image size in the single settings create an option to change featured image width and height in single settings so users can change image size as per their requirement ref
| 0
|
17,678
| 23,512,040,994
|
IssuesEvent
|
2022-08-18 17:31:43
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
opened
|
Old IOP issue when having 2 instances of diffuse & sharpen module
|
priority: high scope: image processing bug: pending
|
**Describe the bug/issue**
An old IOP bug is back! @TurboGit: I know you will not like that one.
**To Reproduce**
1. Be sure to have 2 instances of diffuse & sharpen module on destination image
2. Copy development on another image (I test it with image that have only one instance of diffuse & sharpen)
3. Paste it to the destination image with 2 instances of diffuse & sharpen
4. See the error:

**Expected behavior**
No issue and image correctly displayed.
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : latest master
* Linux - Distro : Debian Sid
|
1.0
|
Old IOP issue when having 2 instances of diffuse & sharpen module - **Describe the bug/issue**
An old IOP bug is back! @TurboGit: I know you will not like that one.
**To Reproduce**
1. Be sure to have 2 instances of diffuse & sharpen module on destination image
2. Copy development on another image (I test it with image that have only one instance of diffuse & sharpen)
3. Paste it to the destination image with 2 instances of diffuse & sharpen
4. See the error:

**Expected behavior**
No issue and image correctly displayed.
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : latest master
* Linux - Distro : Debian Sid
|
process
|
old iop issue when having instances of diffuse sharpen module describe the bug issue an old iop bug is back turbogit i know you will not like that one to reproduce be sure to have instances of diffuse sharpen module on destination image copy development on another image i test it with image that have only one instance of diffuse sharpen paste it to the destination image with instances of diffuse sharpen see the error expected behavior no issue and image correctly displayed platform please fill as much information as possible in the list given below please state unknown where you do not know the answer and remove any sections that are not applicable darktable version latest master linux distro debian sid
| 1
|
5,303
| 8,121,949,168
|
IssuesEvent
|
2018-08-16 09:51:42
|
openvstorage/alba
|
https://api.github.com/repos/openvstorage/alba
|
reopened
|
Bad storage load distributions in asymmetrical setup
|
priority_minor process_wontfix type_enhancement
|
### Problem description
ASD load seems to be balanced around nodes instead of around asds.
Asymmetrical setups are filling up the asds on the nodes with the fewest asds first.
Screenshots:



### Setup
1 node with 16 asds
2 nodes with 8 asds each
The two smaller nodes filled up their asds completely and the asds on the bigger node were only (roughly) filled halfway.
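A toy model of the suspected behavior (plain Python, nothing to do with ALBA's actual placement code; node counts taken from this setup): picking a node uniformly and then an asd on it puts roughly twice the per-asd load on the 8-asd nodes compared with the 16-asd node, while picking uniformly over asds loads them evenly.
```python
import random

NODES = {"node-16": 16, "node-8a": 8, "node-8b": 8}  # asds per node

def place_node_uniform():
    node = random.choice(list(NODES))           # "balanced around nodes"
    return node, random.randrange(NODES[node])

def place_asd_uniform():
    asds = [(n, i) for n, k in NODES.items() for i in range(k)]
    return random.choice(asds)                  # "balanced around asds"

def mean_load_per_asd(place, writes=96_000):
    hits = {}
    for _ in range(writes):
        key = place()
        hits[key] = hits.get(key, 0) + 1
    return {n: sum(c for (m, _), c in hits.items() if m == n) / k
            for n, k in NODES.items()}

print("node-uniform:", mean_load_per_asd(place_node_uniform))  # 8-asd nodes ~2x
print("asd-uniform: ", mean_load_per_asd(place_asd_uniform))   # roughly equal
```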
### Packages
Alba version 1.3.3
|
1.0
|
Bad storage load distributions in asymmetrical setup - ### Problem description
ASD load seems to be balanced around nodes instead of around asds.
Asymmetrical setups are filling up the asds on the nodes with the fewest asds first.
Screenshots:



### Setup
1 node with 16 asds
2 nodes with 8 asds each
The two smaller nodes filled up their asds completely and the asds on the bigger node were only (roughly) filled halfway.
### Packages
Alba version 1.3.3
|
process
|
bad storage load distributions in asymmetrical setup problem description asd load seems to balanced around nodes instead of asds asymmetrical setups are filling up the asds on nodes with the least amount of asds first screenshots setup node with asds nodes with asds first the two smaller nodes filled up their asds completely and the asds on the bigger node were only roughly filled halfway packages alba version
| 1
|
15,868
| 20,036,382,411
|
IssuesEvent
|
2022-02-02 12:22:03
|
plazi/community
|
https://api.github.com/repos/plazi/community
|
opened
|
finding and processing: J. Roy. Soc. Western Australia 13: 2 (1927)
|
process request
|
What would it take to get this processed: J. Roy. Soc. Western Australia 13: 2 (1927)
Before we process it, let's see whether we can find it and, if so, what it would take to get it processed.
|
1.0
|
finding and processing: J. Roy. Soc. Western Australia 13: 2 (1927) - What would it take to get this processed: J. Roy. Soc. Western Australia 13: 2 (1927)
Before we process it, let's see whether we can find it and, if so, what it would take to get it processed.
|
process
|
finding and processing j roy soc western australia what would it take to get this processed j roy soc western australia before we process lets see whether we can find it and if what it would take to get it processed
| 1
|
5,349
| 8,179,349,310
|
IssuesEvent
|
2018-08-28 16:07:01
|
cypress-io/cypress-documentation
|
https://api.github.com/repos/cypress-io/cypress-documentation
|
closed
|
Add more detail to Contributing doc so users can easily contribute
|
process: internal docs
|
- [ ] Add section about testing the documentation (with Cypress)
- [ ] Explain process of adding a new page
- Adding to `sidebar.yml`
- Adding translation to `en.yml`
- Adding `{name}.md` file
- Adding hexo
|
1.0
|
Add more detail to Contributing doc so users can easily contribute - - [ ] Add section about testing the documentation (with Cypress)
- [ ] Explain process of adding a new page
- Adding to `sidebar.yml`
- Adding translation to `en.yml`
- Adding `{name}.md` file
- Adding hexo
|
process
|
add more detail to contributing doc so users can easily contribute add section about testing the documentation with cypress explain process of adding a new page adding to sidebar yml adding translation to en yml adding name md file adding hexo
| 1
|
9,769
| 12,750,392,998
|
IssuesEvent
|
2020-06-27 04:09:36
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
Request: Option to turn off conversion of math to MathML
|
math parsing postprocessing question
|
For a project I am involved with, it would be very useful to be able to leave math as LaTeX code so that it can be rendered by MathJax. A similar outcome can be achieved with the [Pandoc](https://pandoc.org/demos.html) command:
`pandoc math.text -s --mathml -o mathMathML.html`
but Pandoc does not do as good a job at converting the .tex file as LaTeXML does. (See e.g. [this blog post by Matthew Towers for a comparison](https://www.homepages.ucl.ac.uk/~ucahmto/elearning/latex/2019/05/06/accessibility-regulations.html).)
Ideally, too, it would be amazing if any `$..$` and `$$...$$` pairs were converted to `\( ... \)` and `\[ ... \]` respectively.
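A minimal sketch of that delimiter rewrite (illustrative Python, not part of LaTeXML; it assumes the dollars are well formed, unescaped and non-nested, which a real converter would have to verify with a proper TeX tokenizer):
```python
import re

def convert_math_delimiters(tex: str) -> str:
    # Display math first, so $$...$$ pairs are consumed before single $.
    tex = re.sub(r"\$\$(.+?)\$\$", r"\\[\1\\]", tex, flags=re.DOTALL)
    # Inline math: any remaining unescaped single-dollar pairs.
    tex = re.sub(r"(?<!\\)\$(.+?)(?<!\\)\$", r"\\(\1\\)", tex, flags=re.DOTALL)
    return tex

print(convert_math_delimiters(r"Euler: $e^{i\pi}+1=0$ and $$\int_0^1 x\,dx$$"))
# Euler: \(e^{i\pi}+1=0\) and \[\int_0^1 x\,dx\]
```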
|
1.0
|
Request: Option to turn off conversion of math to MathML - For a project I am involved with, it would be very useful to be able to leave math as LaTeX code so that it can be rendered by MathJax. A similar outcome can be achieved with the [Pandoc](https://pandoc.org/demos.html) command:
`pandoc math.text -s --mathml -o mathMathML.html`
but Pandoc does not do as good a job at converting the .tex file as LaTeXML does. (See e.g. [this blog post by Matthew Towers for a comparison](https://www.homepages.ucl.ac.uk/~ucahmto/elearning/latex/2019/05/06/accessibility-regulations.html).)
Ideally, too, it would be amazing if any `$..$` and `$$...$$` pairs were converted to `\( ... \)` and `\[ ... \]` respectively.
|
process
|
request option to turn off conversion of math to mathml for a project i am involved with it would be very useful to be able to leave math as latex code so that it can be rendered by mathjax a similar outcome can be achieved with the command pandoc math text s mathml o mathmathml html but pandoc does not do so good a job at converting the tex file as latexml see e g ideally too it would be amazing if any and pairs were converted to and respectively
| 1
|
144,008
| 5,534,073,369
|
IssuesEvent
|
2017-03-21 14:42:43
|
wordpress-mobile/WordPress-Aztec-iOS
|
https://api.github.com/repos/wordpress-mobile/WordPress-Aztec-iOS
|
closed
|
Crash when trying to delete character on single character content with blockquote
|
[Bug-Type] Crash / Blocker [Priority] High [Type] Bug
|
How to reproduce:
- Start the demo app
- Select the empty demo
- Tap on the blockquote selector
- Tap one character
- Tap on delete character
- Crash!
|
1.0
|
Crash when trying to delete character on single character content with blockquote - How to reproduce:
- Start the demo app
- Select the empty demo
- Tap on the blockquote selector
- Tap one character
- Tap on delete character
- Crash!
|
non_process
|
crash when trying to delete character on single character content with blockquote how to reproduce start the demo app select the empty demo tap on the blockquote selector tap one character tap on delete character crash
| 0
|
669,846
| 22,643,068,863
|
IssuesEvent
|
2022-07-01 05:35:25
|
AlphaWallet/alpha-wallet-ios
|
https://api.github.com/repos/AlphaWallet/alpha-wallet-ios
|
closed
|
Should show an error instead of crash when importing an invalid "raw" private key
|
Bug High Priority
|
Private keys for the secp256k1 curve must be smaller than the order n, which is `fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141` so these keys are invalid:
```
fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141
fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364142
```
While this is valid:
```
fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364140
```
Should show an error instead of crashing.
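For reference, validity here is a plain integer range check against the curve order; a minimal sketch in Python (not the AlphaWallet Swift code), using the example keys above:
```python
# secp256k1 group order n: a raw private key k is valid iff 1 <= k < n.
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def is_valid_raw_private_key(hex_key: str) -> bool:
    try:
        k = int(hex_key, 16)
    except ValueError:
        return False
    return 1 <= k < SECP256K1_N

for key in (
    "fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364140",  # n-1: valid
    "fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141",  # n:   invalid
    "fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364142",  # n+1: invalid
):
    print(key[-4:], is_valid_raw_private_key(key))
```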
Context: https://canary.discord.com/channels/548357107267928064/991499612617719848/991562068975161426
Related: https://github.com/AlphaWallet/alpha-wallet-android/issues/2689
|
1.0
|
Should show an error instead of crash when importing an invalid "raw" private key - Private keys for the secp256k1 curve must be smaller than the order n, which is `fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141` so these keys are invalid:
```
fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141
fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364142
```
While this is valid:
```
fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364140
```
Should show an error instead of crashing.
Context: https://canary.discord.com/channels/548357107267928064/991499612617719848/991562068975161426
Related: https://github.com/AlphaWallet/alpha-wallet-android/issues/2689
|
non_process
|
should show an error instead of crash when importing an invalid raw private key private keys for the curve must be smaller than the order n which is so these keys are invalid while this is valid should show an error instead of crash context related
| 0
|
55,956
| 6,496,714,597
|
IssuesEvent
|
2017-08-22 11:13:31
|
benchflow/benchflow
|
https://api.github.com/repos/benchflow/benchflow
|
opened
|
Synchronization of Dispatcher and Scheduler
|
benchflow-experiment-manager benchflow-test-manager enhancement
|
If you look in the dispatcher you see that it just checks that there is something in the ready queue and, if the running queue is empty, it sets the state to running, but it doesn't wait for the while loop in `handleStarting*` to exit. This means that in practice it is possible for the state to change to `RUNNING` before the loop in that method terminates, which would let the test/experiment continue its life cycle and cause wrong behavior.
A solution would be for the `Dispatcher` to take a lock on the `Scheduler` object so that the while loop for the `START` state has time to terminate (i.e. there is no race to set the new state), as in the sketch below. This would work since `handleStarting*` is `synchronized`.
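A minimal sketch of the proposed locking, in Python for brevity (the actual managers are JVM code; `Scheduler`, `Dispatcher` and the state names are simplified stand-ins): the dispatcher acquires the same monitor that guards the scheduler's starting loop, so the `RUNNING` transition cannot interleave with it.
```python
import threading

class Scheduler:
    def __init__(self):
        # Re-entrant lock standing in for the Java monitor that a
        # `synchronized` method acquires on `this`.
        self.lock = threading.RLock()
        self.pending = ["step-1", "step-2"]

    def handle_starting(self):
        # Equivalent of the synchronized handleStarting*: the whole
        # START-state loop runs while holding the scheduler's monitor.
        with self.lock:
            while self.pending:
                self.pending.pop(0)  # process one starting step

class Dispatcher:
    def __init__(self, scheduler):
        self.scheduler = scheduler
        self.state = "START"

    def dispatch(self):
        # Taking the scheduler's lock means this blocks while
        # handle_starting is mid-loop, so the state cannot flip to
        # RUNNING underneath it.
        with self.scheduler.lock:
            self.state = "RUNNING"

scheduler = Scheduler()
dispatcher = Dispatcher(scheduler)
worker = threading.Thread(target=scheduler.handle_starting)
worker.start()
dispatcher.dispatch()
worker.join()
print(dispatcher.state, scheduler.pending)  # RUNNING []
```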
|
1.0
|
Synchronization of Dispatcher and Scheduler - If you look in the dispatcher you see that it just checks that there is something in the ready queue and, if the running queue is empty, it sets the state to running, but it doesn't wait for the while loop in `handleStarting*` to exit. This means that in practice it is possible for the state to change to `RUNNING` before the loop in that method terminates, which would let the test/experiment continue its life cycle and cause wrong behavior.
A solution would be for the `Dispatcher` to take a lock on the `Scheduler` object so that the while loop for the `START` state has time to terminate (i.e. there is no race to set the new state). This would work since `handleStarting*` is `synchronized`.
|
non_process
|
synchronization of dispatcher and scheduler if you look in the dispatcher you see that it just checks that there is something in the ready queue and if the running queue is empty it sets the state to running but doesn t wait for the while loop in the handlestarting to exit this means in practice it is possible for the state to change to running before the loop in that method is terminated and thus would lead to test experiment continuing the life cycle causing wrong behavior a solution would be that the dispatcher takes a lock on the scheduler object so that the while loop for the start state has time to terminate e g there is no race to set the new state this would work since the handlestarting is synchronized
| 0
|
19,555
| 25,878,511,508
|
IssuesEvent
|
2022-12-14 09:39:12
|
mehta-lab/microDL
|
https://api.github.com/repos/mehta-lab/microDL
|
closed
|
Flat-field correction in gunpowder pipeline
|
preprocessing
|
Zarr-reader PR should enable reading and computing flat-field corrected images.
Currently done in 2 passes:
- Pass 1: read through each FOV, one at a time, and sum to progressively accentuate the flat-field aberration
- Pass 2: read through each FOV again, make a copy, and subtract/correct the copy for the aberration calculated in pass 1 (see the sketch below)
Should already be implemented, except that instead of making a copy of the zarr data it just corrects and saves the entire FOV to a numpy array to wait for tiling. So we can just shorten the current code and modify it to save to the copy of the array in the zarr store rather than a numpy array.
Flat-field corrected images will be stored in the same structure as the 'raw' data arrays, as an additional array. Ex:
ex.zarr
|-- Row_0
    |-- Pos_0
        |-- Col_0
            |-- arr_0     # 'raw' data
            |-- arr_1_ff  # flat_field corrected data
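A minimal two-pass sketch with numpy and zarr (illustrative, not microDL code; it assumes a mean-based flat-field estimate, correction by division, and the array names from the layout above, with FOV groups reachable at the top level for simplicity):
```python
import numpy as np
import zarr

store = zarr.open("ex.zarr", mode="a")
fov_groups = [g for _, g in store.groups()]  # one group per FOV (illustrative)

# Pass 1: sum the FOVs to progressively accentuate the flat-field aberration.
acc, n = None, 0
for fov in fov_groups:
    raw = fov["arr_0"][:].astype(np.float32)
    acc = raw if acc is None else acc + raw
    n += 1
flat = acc / n
flat /= flat.mean()  # normalize so correction preserves overall intensity

# Pass 2: correct a copy of each FOV and store it alongside the raw array.
for fov in fov_groups:
    raw = fov["arr_0"][:].astype(np.float32)
    corrected = raw / np.clip(flat, 1e-6, None)
    fov.create_dataset("arr_1_ff", data=corrected, overwrite=True)
```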
|
|
|
|
1.0
|
Flat-field correction in gunpowder pipeline - Zarr-reader PR should enable reading and computing flat-field corrected images.
Currently done in 2 passes:
- Pass 1: read through each FOV, one at a time, and sum to progressively accentuate the flat-field aberration
- Pass 2: read through each FOV again, make a copy, and subtract/correct the copy for the aberration calculated in pass 1
Should already be implemented, except that instead of making a copy of the zarr data it just corrects and saves the entire FOV to a numpy array to wait for tiling. So we can just shorten the current code and modify it to save to the copy of the array in the zarr store rather than a numpy array.
Flat-field corrected images will be stored in the same structure as the 'raw' data arrays, as an additional array. Ex:
ex.zarr
|-- Row_0
    |-- Pos_0
        |-- Col_0
            |-- arr_0     # 'raw' data
            |-- arr_1_ff  # flat_field corrected data
|
|
|
|
process
|
flat field correction in gunpowder pipeline zarr reader pr should enable reading and computing flat field corrected currently done in passes pass read through each fov one at a time and sum to progressively accentuate the flat field abberation pass read through each fov again make a copy subtract correct copy for abberation calculated for in pass should already be implemented except instead of making a copy of the zarr data it just corrects and saves the entire fov to a numpy to wait for tiling so we can just shorten the current code and modify it to save to the copy of the array in the zarr store rather than a numpy array flat field corrected images will be stored in the same structure as the raw data arrays as an additional array ex ex zarr row pos col arr raw data arr ff flat field corrected data data
| 1
|
280,678
| 24,323,097,494
|
IssuesEvent
|
2022-09-30 12:36:37
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
[CI] SnaphotsAndFileSettingsIT testRestoreWithRemovedFileSettings failing
|
:Core/Infra/Core >test-failure Team:Core/Infra
|
**Build scan:**
https://gradle-enterprise.elastic.co/s/5plxopim2uz7g/tests/:server:internalClusterTest/org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT/testRestoreWithRemovedFileSettings
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT.testRestoreWithRemovedFileSettings" -Dtests.seed=897B64625F82F06C -Dtests.locale=ar-BH -Dtests.timezone=Asia/Bahrain -Druntime.java=17`
**Applicable branches:**
main
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT&tests.test=testRestoreWithRemovedFileSettings
**Failure excerpt:**
```
java.lang.IllegalArgumentException: Failed to process request [org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest/unset] with errors: [[indices.recovery.max_bytes_per_sec] set as read-only by [file_settings]]
at org.elasticsearch.reservedstate.ActionWithReservedState.validateForReservedState(ActionWithReservedState.java:69)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.validateForReservedState(TransportMasterNodeAction.java:154)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:166)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:54)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:86)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
at org.elasticsearch.tasks.TaskManager.registerAndExecute(TaskManager.java:201)
at org.elasticsearch.client.internal.node.NodeClient.executeLocally(NodeClient.java:112)
at org.elasticsearch.client.internal.node.NodeClient.doExecute(NodeClient.java:90)
at org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:380)
at org.elasticsearch.client.internal.FilterClient.doExecute(FilterClient.java:57)
at org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:380)
at org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:366)
at org.elasticsearch.client.internal.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:667)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:34)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:48)
at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked(ElasticsearchAssertions.java:91)
at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked(ElasticsearchAssertions.java:87)
at org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT.testRestoreWithRemovedFileSettings(SnaphotsAndFileSettingsIT.java:189)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
```
|
1.0
|
[CI] SnaphotsAndFileSettingsIT testRestoreWithRemovedFileSettings failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/5plxopim2uz7g/tests/:server:internalClusterTest/org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT/testRestoreWithRemovedFileSettings
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT.testRestoreWithRemovedFileSettings" -Dtests.seed=897B64625F82F06C -Dtests.locale=ar-BH -Dtests.timezone=Asia/Bahrain -Druntime.java=17`
**Applicable branches:**
main
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT&tests.test=testRestoreWithRemovedFileSettings
**Failure excerpt:**
```
java.lang.IllegalArgumentException: Failed to process request [org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest/unset] with errors: [[indices.recovery.max_bytes_per_sec] set as read-only by [file_settings]]
at org.elasticsearch.reservedstate.ActionWithReservedState.validateForReservedState(ActionWithReservedState.java:69)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.validateForReservedState(TransportMasterNodeAction.java:154)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:166)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:54)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:86)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
at org.elasticsearch.tasks.TaskManager.registerAndExecute(TaskManager.java:201)
at org.elasticsearch.client.internal.node.NodeClient.executeLocally(NodeClient.java:112)
at org.elasticsearch.client.internal.node.NodeClient.doExecute(NodeClient.java:90)
at org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:380)
at org.elasticsearch.client.internal.FilterClient.doExecute(FilterClient.java:57)
at org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:380)
at org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:366)
at org.elasticsearch.client.internal.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:667)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:34)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:48)
at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked(ElasticsearchAssertions.java:91)
at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked(ElasticsearchAssertions.java:87)
at org.elasticsearch.reservedstate.service.SnaphotsAndFileSettingsIT.testRestoreWithRemovedFileSettings(SnaphotsAndFileSettingsIT.java:189)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
```
|
non_process
|
snaphotsandfilesettingsit testrestorewithremovedfilesettings failing build scan reproduction line gradlew server internalclustertest tests org elasticsearch reservedstate service snaphotsandfilesettingsit testrestorewithremovedfilesettings dtests seed dtests locale ar bh dtests timezone asia bahrain druntime java applicable branches main reproduces locally no failure history failure excerpt java lang illegalargumentexception failed to process request with errors set as read only by at org elasticsearch reservedstate actionwithreservedstate validateforreservedstate actionwithreservedstate java at org elasticsearch action support master transportmasternodeaction validateforreservedstate transportmasternodeaction java at org elasticsearch action support master transportmasternodeaction doexecute transportmasternodeaction java at org elasticsearch action support master transportmasternodeaction doexecute transportmasternodeaction java at org elasticsearch action support transportaction requestfilterchain proceed transportaction java at org elasticsearch action support transportaction execute transportaction java at org elasticsearch tasks taskmanager registerandexecute taskmanager java at org elasticsearch client internal node nodeclient executelocally nodeclient java at org elasticsearch client internal node nodeclient doexecute nodeclient java at org elasticsearch client internal support abstractclient execute abstractclient java at org elasticsearch client internal filterclient doexecute filterclient java at org elasticsearch client internal support abstractclient execute abstractclient java at org elasticsearch client internal support abstractclient execute abstractclient java at org elasticsearch client internal support abstractclient clusteradmin execute abstractclient java at org elasticsearch action actionrequestbuilder execute actionrequestbuilder java at org elasticsearch action actionrequestbuilder get actionrequestbuilder java at org elasticsearch test hamcrest elasticsearchassertions assertacked elasticsearchassertions java at org elasticsearch test hamcrest elasticsearchassertions assertacked elasticsearchassertions java at org elasticsearch reservedstate service snaphotsandfilesettingsit testrestorewithremovedfilesettings snaphotsandfilesettingsit java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch 
randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
| 0
|
21,386
| 4,707,788,763
|
IssuesEvent
|
2016-10-13 21:11:36
|
spring-cloud/spring-cloud-dataflow
|
https://api.github.com/repos/spring-cloud/spring-cloud-dataflow
|
opened
|
Add a maven plugin to parse through the latest release revisions from project page
|
documentation next iteration
|
As a developer, I'd like to add a maven plugin that reads the latest release versions from the project page, so I don't have to hard-code and manually keep track of links that are version bound.
For example, the following request includes the version details in the returned JSON, which we can parse to derive the version tokens and replace the placeholders in the reference guide.
> curl https://spring.io/project_metadata/spring-cloud-task-app-starters
```
{"id":"spring-cloud-task-app-starters","name":"Spring Cloud Task App Starters","repoUrl":"http://github.com/spring-cloud/spring-cloud-task-app-starters","siteUrl":" http://cloud.spring.io/spring-cloud-task-app-starters","category":"incubator","stackOverflowTags":"","projectReleases":[{"releaseStatus":"SNAPSHOT","refDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.2.BUILD-SNAPSHOT/reference/html","apiDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.2.BUILD-SNAPSHOT/api/","groupId":"org.springframework.cloud","artifactId":"spring-cloud-task-app-starters","repository":{"id":"spring-snapshots","name":"Spring Snapshots","url":"https://repo.spring.io/libs-snapshot","snapshotsEnabled":true},"snapshot":true,"generalAvailability":false,"preRelease":false,"versionDisplayName":"1.0.2","current":false,"version":"1.0.2.BUILD-SNAPSHOT"},{"releaseStatus":"GENERAL_AVAILABILITY","refDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.1.RELEASE/reference/html","apiDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.1.RELEASE/api/","groupId":"org.springframework.cloud","artifactId":"spring-cloud-task-app-starters","repository":null,"snapshot":false,"generalAvailability":true,"preRelease":false,"versionDisplayName":"1.0.1","current":false,"version":"1.0.1.RELEASE"}],"aggregator":false,"stackOverflowTagList":[]}%
```
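A sketch of the extraction step (illustrative Python, not the eventual maven plugin; the endpoint and JSON fields are exactly those shown above):
```python
# Fetch the project metadata and pull out the latest GA release, whose
# version string and doc links can then replace the docs' placeholders.
import json
from urllib.request import urlopen

URL = "https://spring.io/project_metadata/spring-cloud-task-app-starters"

with urlopen(URL) as resp:
    meta = json.load(resp)

ga_releases = [r for r in meta["projectReleases"]
               if r["releaseStatus"] == "GENERAL_AVAILABILITY"]
if ga_releases:
    latest = ga_releases[0]
    print(latest["version"], latest["refDocUrl"])  # e.g. 1.0.1.RELEASE
```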
**Acceptance:**
- links and other "versioned" references in the docs are automatically replaced with latest release versions at the build time
- verify the accuracy of latest release revisions in the generated docs
|
1.0
|
Add a maven plugin to parse through the latest release revisions from project page - As a developer, I'd like to add a maven plugin that reads the latest release versions from the project page, so I don't have to hard-code and manually keep track of links that are version bound.
For example, the following request includes the version details in the returned JSON, which we can parse to derive the version tokens and replace the placeholders in the reference guide.
> curl https://spring.io/project_metadata/spring-cloud-task-app-starters
```
{"id":"spring-cloud-task-app-starters","name":"Spring Cloud Task App Starters","repoUrl":"http://github.com/spring-cloud/spring-cloud-task-app-starters","siteUrl":" http://cloud.spring.io/spring-cloud-task-app-starters","category":"incubator","stackOverflowTags":"","projectReleases":[{"releaseStatus":"SNAPSHOT","refDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.2.BUILD-SNAPSHOT/reference/html","apiDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.2.BUILD-SNAPSHOT/api/","groupId":"org.springframework.cloud","artifactId":"spring-cloud-task-app-starters","repository":{"id":"spring-snapshots","name":"Spring Snapshots","url":"https://repo.spring.io/libs-snapshot","snapshotsEnabled":true},"snapshot":true,"generalAvailability":false,"preRelease":false,"versionDisplayName":"1.0.2","current":false,"version":"1.0.2.BUILD-SNAPSHOT"},{"releaseStatus":"GENERAL_AVAILABILITY","refDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.1.RELEASE/reference/html","apiDocUrl":"http://docs.spring.io/spring-cloud-task-app-starters/docs/1.0.1.RELEASE/api/","groupId":"org.springframework.cloud","artifactId":"spring-cloud-task-app-starters","repository":null,"snapshot":false,"generalAvailability":true,"preRelease":false,"versionDisplayName":"1.0.1","current":false,"version":"1.0.1.RELEASE"}],"aggregator":false,"stackOverflowTagList":[]}%
```
**Acceptance:**
- links and other "versioned" references in the docs are automatically replaced with latest release versions at the build time
- verify the accuracy of latest release revisions in the generated docs
|
non_process
|
add a maven plugin to parse through the latest release revisions from project page as a developer i d like to add a maven plugin that reads latest release versions from project page so i don t have hard code and manually keep track of links that are version bound for example the following request includes the version credentials in the returned json that we can parse through to derive the version tokens and replace the placeholders in the reference guide curl id spring cloud task app starters name spring cloud task app starters repourl aggregator false stackoverflowtaglist acceptance links and other versioned references in the docs are automatically replaced with latest release versions at the build time verify the accuracy of latest release revisions in the generated docs
| 0
|
17,523
| 23,330,720,721
|
IssuesEvent
|
2022-08-09 04:44:16
|
esmero/strawberry_runners
|
https://api.github.com/repos/esmero/strawberry_runners
|
closed
|
Make WACZ processor aware of Browsertrix-Crawler extraPages.jsonl
|
enhancement Datapackage / Frictionless Post processor Plugins
|
# What?
WACZ now keeps the main page and the extra pages in different indexes. Take all of them if present.
|
1.0
|
Make WACZ processor aware of Browsertrix-Crawler extraPages.jsonl - # What?
WACZ now keeps the main page and the extra pages in different indexes. Take all of them if present.
|
process
|
make wacz processor aware of browsertrix crawler extrapages jsonl what wacz now keeps main page and the extra pages in different indexes take all of them if present
| 1
|
216,417
| 16,761,076,226
|
IssuesEvent
|
2021-06-13 19:57:56
|
bounswe/2021SpringGroup3
|
https://api.github.com/repos/bounswe/2021SpringGroup3
|
closed
|
Implement Test: getProfile
|
Component: Junit-Testing Priority: Medium Status: Review Needed Type: Testing
|
Can you please implement the tests for the get-profile-by-id functionality, for both functions in Profile Service and Profile Controller?
|
2.0
|
Implement Test: getProfile - Can you please implement the tests for the get-profile-by-id functionality, for both functions in Profile Service and Profile Controller?
|
non_process
|
implement test getprofile can you please implement the tests for getting profile by id functionality for both functions in profile service and profile controller
| 0
|
12,047
| 14,738,820,869
|
IssuesEvent
|
2021-01-07 05:48:33
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
068- portland - Client with multiple customer/accounts
|
anc-ops anc-process anp-1 ant-bug ant-support has attachment
|
In GitLab by @kdjstudios on Jul 31, 2018, 08:34
**Submitted by:** "Lettice Ross" <lettice.ross@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-07-24-42522/conversation
**Server:** Internal
**Client/Site:** 068- portland
**Account:**
**Issue:**
I have a client that has two accounts with Answernet, and she was set up to have both accounts merged together on the portal for her to make payments. The account that the portal is set under is B01342.
This client has sent an email every month because she is unable to get in and pay without me resending the invite.
Is there someone that can fix this issue?
I have attached the client's email she sent me.
[attached_message_001__3_.txt](/uploads/0e0cf33bd61cd224b404e42fe10f4dd8/attached_message_001__3_.txt)
[attached_message_002.txt](/uploads/57d45531c8053bd2158b129cf9da651a/attached_message_002.txt)
[original_message__11_.html](/uploads/f780f7a090c13c9e070444c5d54475be/original_message__11_.html)
|
1.0
|
068- portland - Client with multiple customer/accounts - In GitLab by @kdjstudios on Jul 31, 2018, 08:34
**Submitted by:** "Lettice Ross" <lettice.ross@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-07-24-42522/conversation
**Server:** Internal
**Client/Site:** 068- portland
**Account:**
**Issue:**
I have a client that has two accounts with Answernet, and she was set up to have both accounts merged together on the portal for her to make payments. The account that the portal is set under is B01342.
This client has sent an email every month because she is unable to get in and pay without me resending the invite.
Is there someone that can fix this issue?
I have attached the client's email she sent me.
[attached_message_001__3_.txt](/uploads/0e0cf33bd61cd224b404e42fe10f4dd8/attached_message_001__3_.txt)
[attached_message_002.txt](/uploads/57d45531c8053bd2158b129cf9da651a/attached_message_002.txt)
[original_message__11_.html](/uploads/f780f7a090c13c9e070444c5d54475be/original_message__11_.html)
|
process
|
portland client with multiple customer accounts in gitlab by kdjstudios on jul submitted by lettice ross helpdesk server internal client site portland account issue i have a client that has two account with answernet and was set up to have both accounts merged together on the portal for her to make payments the account that the portal is set under is this client has sent an email every month because she is unable to get in a pay without me resending the invite is there someone that can fix this issue i have attached the clients email she sent me uploads attached message txt uploads attached message txt uploads original message html
| 1
|
20,410
| 27,067,212,243
|
IssuesEvent
|
2023-02-14 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 13 Feb 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Context Understanding in Computer Vision: A Survey
- **Authors:** Xuan Wang, Zhigang Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05011
- **Pdf link:** https://arxiv.org/pdf/2302.05011
- **Abstract**
Contextual information plays an important role in many computer vision tasks, such as object detection, video action detection, image classification, etc. Recognizing a single object or action out of context can sometimes be very challenging, and context information may help improve the understanding of a scene or an event greatly. Appearance context information, e.g., colors or shapes of the background of an object, can improve the recognition accuracy of the object in the scene. Semantic context (e.g. a keyboard on an empty desk vs. a keyboard next to a desktop computer) will improve accuracy and exclude unrelated events. Context information that is not in the image itself, such as the time or location at which an image was captured, can also help to decide whether a certain event or action should occur. Other types of context (e.g. 3D structure of a building) will also provide additional information to improve the accuracy. In this survey, different context information that has been used in computer vision tasks is reviewed. We categorize context into different types and different levels. We also review available machine learning models and image/video datasets that can employ context information. Furthermore, we compare context-based integration and context-free integration in mainly two classes of tasks: image-based and video-based. Finally, this survey is concluded by a set of promising future directions in context learning and utilization.
### BEST: BERT Pre-Training for Sign Language Recognition with Coupling Tokenization
- **Authors:** Weichao Zhao, Hezhen Hu, Wengang Zhou, Jiaxin Shi, Houqiang Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05075
- **Pdf link:** https://arxiv.org/pdf/2302.05075
- **Abstract**
In this work, we are dedicated to leveraging the BERT pre-training success and modeling the domain-specific statistics to fertilize the sign language recognition~(SLR) model. Considering the dominance of hand and body in sign language expression, we organize them as pose triplet units and feed them into the Transformer backbone in a frame-wise manner. Pre-training is performed via reconstructing the masked triplet unit from the corrupted input sequence, which learns the hierarchical correlation context cues among internal and external triplet units. Notably, different from the highly semantic word token in BERT, the pose unit is a low-level signal originally located in continuous space, which prevents the direct adoption of the BERT cross-entropy objective. To this end, we bridge this semantic gap via coupling tokenization of the triplet unit. It adaptively extracts the discrete pseudo label from the pose triplet unit, which represents the semantic gesture/body state. After pre-training, we fine-tune the pre-trained encoder on the downstream SLR task, jointly with the newly added task-specific layer. Extensive experiments are conducted to validate the effectiveness of our proposed method, achieving new state-of-the-art performance on all four benchmarks with a notable gain.
### Generalized Video Anomaly Event Detection: Systematic Taxonomy and Comparison of Deep Models
- **Authors:** Yang Liu, Dingkang Yang, Yan Wang, Jing Liu, Liang Song
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2302.05087
- **Pdf link:** https://arxiv.org/pdf/2302.05087
- **Abstract**
Video Anomaly Event Detection (VAED) is the core technology of intelligent surveillance systems aiming to temporally or spatially locate anomalous events in videos. With the penetration of deep learning, the recent advances in VAED have diverged various routes and achieved significant success. However, most existing reviews focus on traditional and unsupervised VAED methods, lacking attention to emerging weakly-supervised and fully-unsupervised routes. Therefore, this review extends the narrow VAED concept from unsupervised video anomaly detection to Generalized Video Anomaly Event Detection (GVAED), which provides a comprehensive survey that integrates recent works based on different assumptions and learning frameworks into an intuitive taxonomy and coordinates unsupervised, weakly-supervised, fully-unsupervised, and supervised VAED routes. To facilitate future researchers, this review collates and releases research resources such as datasets, available codes, programming tools, and literature. Moreover, this review quantitatively compares the model performance and analyzes the research challenges and possible trends for future work.
### Dual Memory Units with Uncertainty Regulation for Weakly Supervised Video Anomaly Detection
- **Authors:** Hang Zhou, Junqing Yu, Wei Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05160
- **Pdf link:** https://arxiv.org/pdf/2302.05160
- **Abstract**
Learning discriminative features for effectively separating abnormal events from normality is crucial for weakly supervised video anomaly detection (WS-VAD) tasks. Existing approaches, both video and segment-level label oriented, mainly focus on extracting representations for anomaly data while neglecting the implication of normal data. We observe that such a scheme is sub-optimal, i.e., for better distinguishing anomaly one needs to understand what is a normal state, and may yield a higher false alarm rate. To address this issue, we propose an Uncertainty Regulated Dual Memory Units (UR-DMU) model to learn both the representations of normal data and discriminative features of abnormal data. To be specific, inspired by the traditional global and local structure on graph convolutional networks, we introduce a Global and Local Multi-Head Self Attention (GL-MHSA) module for the Transformer network to obtain more expressive embeddings for capturing associations in videos. Then, we use two memory banks, one additional abnormal memory for tackling hard samples, to store and separate abnormal and normal prototypes and maximize the margins between the two representations. Finally, we propose an uncertainty learning scheme to learn the normal data latent space, that is robust to noise from camera switching, object changing, scene transforming, etc. Extensive experiments on XD-Violence and UCF-Crime datasets demonstrate that our method outperforms the state-of-the-art methods by a sizable margin.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Is multi-modal vision supervision beneficial to language?
- **Authors:** Avinash Madasu, Vasudev Lal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2302.05016
- **Pdf link:** https://arxiv.org/pdf/2302.05016
- **Abstract**
Vision (image and video) - Language (VL) pre-training is the recent popular paradigm that achieved state-of-the-art results on multi-modal tasks like image-retrieval, video-retrieval, visual question answering etc. These models are trained in an unsupervised way and greatly benefit from the complementary modality supervision. In this paper, we explore if the language representations trained using vision supervision perform better than vanilla language representations on Natural Language Understanding and commonsense reasoning benchmarks. We experiment with a diverse set of image-text models such as ALBEF, BLIP, METER and video-text models like ALPRO, Frozen-in-Time (FiT), VIOLET. We compare the performance of language representations of stand-alone text encoders of these models to the language representations of text encoders learnt through vision supervision. Our experiments suggest that vanilla language representations show superior performance on most of the tasks. These results shed light on the current drawbacks of the vision-language models.
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### CEN-HDR: Computationally Efficient neural Network for real-time High Dynamic Range imaging
- **Authors:** Steven Tel, Barthélémy Heyrman, Dominique Ginhac
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2302.05213
- **Pdf link:** https://arxiv.org/pdf/2302.05213
- **Abstract**
High dynamic range (HDR) imaging is still a challenging task in modern digital photography. Recent research proposes solutions that provide high-quality acquisition but at the cost of a very large number of operations and a slow inference time that prevent the implementation of these solutions on lightweight real-time systems. In this paper, we propose CEN-HDR, a new computationally efficient neural network by providing a novel architecture based on a light attention mechanism and sub-pixel convolution operations for real-time HDR imaging. We also provide an efficient training scheme by applying network compression using knowledge distillation. We performed extensive qualitative and quantitative comparisons to show that our approach produces competitive results in image quality while being faster than state-of-the-art solutions, allowing it to be practically deployed under real-time constraints. Experimental results show our method obtains a score of 43.04 mu-PSNR on the Kalantari2017 dataset with a framerate of 33 FPS using a Macbook M1 NPU.
## Keyword: RAW
### Unsupervised ore/waste classification on open-cut mine faces using close-range hyperspectral data
- **Authors:** Lloyd Windrim, Arman Melkumyan, Richard J. Murphy, Anna Chlingaryan, Raymond Leung
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.04936
- **Pdf link:** https://arxiv.org/pdf/2302.04936
- **Abstract**
The remote mapping of minerals and discrimination of ore and waste on surfaces are important tasks for geological applications such as those in mining. Such tasks have become possible using ground-based, close-range hyperspectral sensors which can remotely measure the reflectance properties of the environment with high spatial and spectral resolution. However, autonomous mapping of mineral spectra measured on an open-cut mine face remains a challenging problem due to the subtleness of differences in spectral absorption features between mineral and rock classes as well as variability in the illumination of the scene. An additional layer of difficulty arises when there is no annotated data available to train a supervised learning algorithm. A pipeline for unsupervised mapping of spectra on a mine face is proposed which draws from several recent advances in the hyperspectral machine learning literature. The proposed pipeline brings together unsupervised and self-supervised algorithms in a unified system to map minerals on a mine face without the need for human-annotated training data. The pipeline is evaluated with a hyperspectral image dataset of an open-cut mine face comprising mineral ore martite and non-mineralised shale. The combined system is shown to produce a superior map to its constituent algorithms, and the consistency of its mapping capability is demonstrated using data acquired at two different times of day.
### Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames
- **Authors:** Ondrej Biza, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Gamaleldin F. Elsayed, Aravindh Mahendran, Thomas Kipf
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.04973
- **Pdf link:** https://arxiv.org/pdf/2302.04973
- **Abstract**
Automatically discovering composable abstractions from raw perceptual data is a long-standing challenge in machine learning. Recent slot-based neural networks that learn about objects in a self-supervised manner have made exciting progress in this direction. However, they typically fall short at adequately capturing spatial symmetries present in the visual world, which leads to sample inefficiency, such as when entangling object appearance and pose. In this paper, we present a simple yet highly effective method for incorporating spatial symmetries via slot-centric reference frames. We incorporate equivariance to per-object pose transformations into the attention and generation mechanism of Slot Attention by translating, scaling, and rotating position encodings. These changes result in little computational overhead, are easy to implement, and can result in large gains in terms of data efficiency and overall improvements to object discovery. We evaluate our method on a wide range of synthetic object discovery benchmarks namely CLEVR, Tetrominoes, CLEVRTex, Objects Room and MultiShapeNet, and show promising improvements on the challenging real-world Waymo Open dataset.
### Is multi-modal vision supervision beneficial to language?
- **Authors:** Avinash Madasu, Vasudev Lal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2302.05016
- **Pdf link:** https://arxiv.org/pdf/2302.05016
- **Abstract**
Vision (image and video) - Language (VL) pre-training is the recent popular paradigm that achieved state-of-the-art results on multi-modal tasks like image-retrieval, video-retrieval, visual question answering etc. These models are trained in an unsupervised way and greatly benefit from the complementary modality supervision. In this paper, we explore if the language representations trained using vision supervision perform better than vanilla language representations on Natural Language Understanding and commonsense reasoning benchmarks. We experiment with a diverse set of image-text models such as ALBEF, BLIP, METER and video-text models like ALPRO, Frozen-in-Time (FiT), VIOLET. We compare the performance of language representations of stand-alone text encoders of these models to the language representations of text encoders learnt through vision supervision. Our experiments suggest that vanilla language representations show superior performance on most of the tasks. These results shed light on the current drawbacks of the vision-language models.
### TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation
- **Authors:** Hyesu Lim, Byeonggeun Kim, Jaegul Choo, Sungha Choi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05155
- **Pdf link:** https://arxiv.org/pdf/2302.05155
- **Abstract**
This paper proposes a novel batch normalization strategy for test-time adaptation. Recent test-time adaptation methods heavily rely on the modified batch normalization, i.e., transductive batch normalization (TBN), which calculates the mean and the variance from the current test batch rather than using the running mean and variance obtained from the source data, i.e., conventional batch normalization (CBN). Adopting TBN that employs test batch statistics mitigates the performance degradation caused by the domain shift. However, re-estimating normalization statistics using test data depends on impractical assumptions that a test batch should be large enough and be drawn from i.i.d. stream, and we observed that the previous methods with TBN show critical performance drop without the assumptions. In this paper, we identify that CBN and TBN are in a trade-off relationship and present a new test-time normalization (TTN) method that interpolates the statistics by adjusting the importance between CBN and TBN according to the domain-shift sensitivity of each BN layer. Our proposed TTN improves model robustness to shifted domains across a wide range of batch sizes and in various realistic evaluation scenarios. TTN is widely applicable to other test-time adaptation methods that rely on updating model parameters via backpropagation. We demonstrate that adopting TTN further improves their performance and achieves state-of-the-art performance in various standard benchmarks.
## Keyword: raw image
There is no result
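As an editorial aside, a minimal sketch of the kind of case-insensitive keyword filter a daily arxiv notifier might run over new submissions to produce a report like this one is shown below; the `Submission` shape and the helper names are illustrative assumptions, not the repository's actual code.
```python
from dataclasses import dataclass

@dataclass
class Submission:
    # Illustrative shape for one arxiv entry.
    title: str
    abstract: str
    arxiv_link: str

def matches(keyword: str, sub: Submission) -> bool:
    # Case-insensitive substring match over title and abstract,
    # mirroring the "Keyword: ..." sections of this report.
    needle = keyword.lower()
    return needle in sub.title.lower() or needle in sub.abstract.lower()

def group_by_keyword(keywords: list, subs: list) -> dict:
    # {keyword: [matching submissions]}; an empty list corresponds
    # to a "There is no result" section above.
    return {kw: [s for s in subs if matches(kw, s)] for kw in keywords}
```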
|
2.0
|
New submissions for Mon, 13 Feb 23 - ## Keyword: events
### Context Understanding in Computer Vision: A Survey
- **Authors:** Xuan Wang, Zhigang Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05011
- **Pdf link:** https://arxiv.org/pdf/2302.05011
- **Abstract**
 Contextual information plays an important role in many computer vision tasks, such as object detection, video action detection, image classification, etc. Recognizing a single object or action out of context could sometimes be very challenging, and context information may help improve the understanding of a scene or an event greatly. Appearance context information, e.g., colors or shapes of the background of an object, can improve the recognition accuracy of the object in the scene. Semantic context (e.g. a keyboard on an empty desk vs. a keyboard next to a desktop computer) will improve accuracy and exclude unrelated events. Context information that is not in the image itself, such as the time or location at which an image was captured, can also help to decide whether a certain event or action should occur. Other types of context (e.g. 3D structure of a building) will also provide additional information to improve the accuracy. In this survey, different context information that has been used in computer vision tasks is reviewed. We categorize context into different types and different levels. We also review available machine learning models and image/video datasets that can employ context information. Furthermore, we compare context based integration and context-free integration in mainly two classes of tasks: image-based and video-based. Finally, this survey is concluded by a set of promising future directions in context learning and utilization.
### BEST: BERT Pre-Training for Sign Language Recognition with Coupling Tokenization
- **Authors:** Weichao Zhao, Hezhen Hu, Wengang Zhou, Jiaxin Shi, Houqiang Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05075
- **Pdf link:** https://arxiv.org/pdf/2302.05075
- **Abstract**
In this work, we are dedicated to leveraging the BERT pre-training success and modeling the domain-specific statistics to fertilize the sign language recognition~(SLR) model. Considering the dominance of hand and body in sign language expression, we organize them as pose triplet units and feed them into the Transformer backbone in a frame-wise manner. Pre-training is performed via reconstructing the masked triplet unit from the corrupted input sequence, which learns the hierarchical correlation context cues among internal and external triplet units. Notably, different from the highly semantic word token in BERT, the pose unit is a low-level signal originally located in continuous space, which prevents the direct adoption of the BERT cross-entropy objective. To this end, we bridge this semantic gap via coupling tokenization of the triplet unit. It adaptively extracts the discrete pseudo label from the pose triplet unit, which represents the semantic gesture/body state. After pre-training, we fine-tune the pre-trained encoder on the downstream SLR task, jointly with the newly added task-specific layer. Extensive experiments are conducted to validate the effectiveness of our proposed method, achieving new state-of-the-art performance on all four benchmarks with a notable gain.
### Generalized Video Anomaly Event Detection: Systematic Taxonomy and Comparison of Deep Models
- **Authors:** Yang Liu, Dingkang Yang, Yan Wang, Jing Liu, Liang Song
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2302.05087
- **Pdf link:** https://arxiv.org/pdf/2302.05087
- **Abstract**
Video Anomaly Event Detection (VAED) is the core technology of intelligent surveillance systems aiming to temporally or spatially locate anomalous events in videos. With the penetration of deep learning, the recent advances in VAED have diverged various routes and achieved significant success. However, most existing reviews focus on traditional and unsupervised VAED methods, lacking attention to emerging weakly-supervised and fully-unsupervised routes. Therefore, this review extends the narrow VAED concept from unsupervised video anomaly detection to Generalized Video Anomaly Event Detection (GVAED), which provides a comprehensive survey that integrates recent works based on different assumptions and learning frameworks into an intuitive taxonomy and coordinates unsupervised, weakly-supervised, fully-unsupervised, and supervised VAED routes. To facilitate future researchers, this review collates and releases research resources such as datasets, available codes, programming tools, and literature. Moreover, this review quantitatively compares the model performance and analyzes the research challenges and possible trends for future work.
### Dual Memory Units with Uncertainty Regulation for Weakly Supervised Video Anomaly Detection
- **Authors:** Hang Zhou, Junqing Yu, Wei Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05160
- **Pdf link:** https://arxiv.org/pdf/2302.05160
- **Abstract**
Learning discriminative features for effectively separating abnormal events from normality is crucial for weakly supervised video anomaly detection (WS-VAD) tasks. Existing approaches, both video and segment-level label oriented, mainly focus on extracting representations for anomaly data while neglecting the implication of normal data. We observe that such a scheme is sub-optimal, i.e., for better distinguishing anomaly one needs to understand what is a normal state, and may yield a higher false alarm rate. To address this issue, we propose an Uncertainty Regulated Dual Memory Units (UR-DMU) model to learn both the representations of normal data and discriminative features of abnormal data. To be specific, inspired by the traditional global and local structure on graph convolutional networks, we introduce a Global and Local Multi-Head Self Attention (GL-MHSA) module for the Transformer network to obtain more expressive embeddings for capturing associations in videos. Then, we use two memory banks, one additional abnormal memory for tackling hard samples, to store and separate abnormal and normal prototypes and maximize the margins between the two representations. Finally, we propose an uncertainty learning scheme to learn the normal data latent space, that is robust to noise from camera switching, object changing, scene transforming, etc. Extensive experiments on XD-Violence and UCF-Crime datasets demonstrate that our method outperforms the state-of-the-art methods by a sizable margin.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Is multi-modal vision supervision beneficial to language?
- **Authors:** Avinash Madasu, Vasudev Lal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2302.05016
- **Pdf link:** https://arxiv.org/pdf/2302.05016
- **Abstract**
Vision (image and video) - Language (VL) pre-training is the recent popular paradigm that achieved state-of-the-art results on multi-modal tasks like image-retrieval, video-retrieval, visual question answering etc. These models are trained in an unsupervised way and greatly benefit from the complementary modality supervision. In this paper, we explore if the language representations trained using vision supervision perform better than vanilla language representations on Natural Language Understanding and commonsense reasoning benchmarks. We experiment with a diverse set of image-text models such as ALBEF, BLIP, METER and video-text models like ALPRO, Frozen-in-Time (FiT), VIOLET. We compare the performance of language representations of stand-alone text encoders of these models to the language representations of text encoders learnt through vision supervision. Our experiments suggest that vanilla language representations show superior performance on most of the tasks. These results shed light on the current drawbacks of the vision-language models.
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### CEN-HDR: Computationally Efficient neural Network for real-time High Dynamic Range imaging
- **Authors:** Steven Tel, Barthélémy Heyrman, Dominique Ginhac
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2302.05213
- **Pdf link:** https://arxiv.org/pdf/2302.05213
- **Abstract**
High dynamic range (HDR) imaging is still a challenging task in modern digital photography. Recent research proposes solutions that provide high-quality acquisition but at the cost of a very large number of operations and a slow inference time that prevent the implementation of these solutions on lightweight real-time systems. In this paper, we propose CEN-HDR, a new computationally efficient neural network by providing a novel architecture based on a light attention mechanism and sub-pixel convolution operations for real-time HDR imaging. We also provide an efficient training scheme by applying network compression using knowledge distillation. We performed extensive qualitative and quantitative comparisons to show that our approach produces competitive results in image quality while being faster than state-of-the-art solutions, allowing it to be practically deployed under real-time constraints. Experimental results show our method obtains a score of 43.04 mu-PSNR on the Kalantari2017 dataset with a framerate of 33 FPS using a Macbook M1 NPU.
## Keyword: RAW
### Unsupervised ore/waste classification on open-cut mine faces using close-range hyperspectral data
- **Authors:** Lloyd Windrim, Arman Melkumyan, Richard J. Murphy, Anna Chlingaryan, Raymond Leung
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.04936
- **Pdf link:** https://arxiv.org/pdf/2302.04936
- **Abstract**
The remote mapping of minerals and discrimination of ore and waste on surfaces are important tasks for geological applications such as those in mining. Such tasks have become possible using ground-based, close-range hyperspectral sensors which can remotely measure the reflectance properties of the environment with high spatial and spectral resolution. However, autonomous mapping of mineral spectra measured on an open-cut mine face remains a challenging problem due to the subtleness of differences in spectral absorption features between mineral and rock classes as well as variability in the illumination of the scene. An additional layer of difficulty arises when there is no annotated data available to train a supervised learning algorithm. A pipeline for unsupervised mapping of spectra on a mine face is proposed which draws from several recent advances in the hyperspectral machine learning literature. The proposed pipeline brings together unsupervised and self-supervised algorithms in a unified system to map minerals on a mine face without the need for human-annotated training data. The pipeline is evaluated with a hyperspectral image dataset of an open-cut mine face comprising mineral ore martite and non-mineralised shale. The combined system is shown to produce a superior map to its constituent algorithms, and the consistency of its mapping capability is demonstrated using data acquired at two different times of day.
### Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames
- **Authors:** Ondrej Biza, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Gamaleldin F. Elsayed, Aravindh Mahendran, Thomas Kipf
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.04973
- **Pdf link:** https://arxiv.org/pdf/2302.04973
- **Abstract**
Automatically discovering composable abstractions from raw perceptual data is a long-standing challenge in machine learning. Recent slot-based neural networks that learn about objects in a self-supervised manner have made exciting progress in this direction. However, they typically fall short at adequately capturing spatial symmetries present in the visual world, which leads to sample inefficiency, such as when entangling object appearance and pose. In this paper, we present a simple yet highly effective method for incorporating spatial symmetries via slot-centric reference frames. We incorporate equivariance to per-object pose transformations into the attention and generation mechanism of Slot Attention by translating, scaling, and rotating position encodings. These changes result in little computational overhead, are easy to implement, and can result in large gains in terms of data efficiency and overall improvements to object discovery. We evaluate our method on a wide range of synthetic object discovery benchmarks namely CLEVR, Tetrominoes, CLEVRTex, Objects Room and MultiShapeNet, and show promising improvements on the challenging real-world Waymo Open dataset.
### Is multi-modal vision supervision beneficial to language?
- **Authors:** Avinash Madasu, Vasudev Lal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2302.05016
- **Pdf link:** https://arxiv.org/pdf/2302.05016
- **Abstract**
Vision (image and video) - Language (VL) pre-training is the recent popular paradigm that achieved state-of-the-art results on multi-modal tasks like image-retrieval, video-retrieval, visual question answering etc. These models are trained in an unsupervised way and greatly benefit from the complementary modality supervision. In this paper, we explore if the language representations trained using vision supervision perform better than vanilla language representations on Natural Language Understanding and commonsense reasoning benchmarks. We experiment with a diverse set of image-text models such as ALBEF, BLIP, METER and video-text models like ALPRO, Frozen-in-Time (FiT), VIOLET. We compare the performance of language representations of stand-alone text encoders of these models to the language representations of text encoders learnt through vision supervision. Our experiments suggest that vanilla language representations show superior performance on most of the tasks. These results shed light on the current drawbacks of the vision-language models.
### TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation
- **Authors:** Hyesu Lim, Byeonggeun Kim, Jaegul Choo, Sungha Choi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.05155
- **Pdf link:** https://arxiv.org/pdf/2302.05155
- **Abstract**
This paper proposes a novel batch normalization strategy for test-time adaptation. Recent test-time adaptation methods heavily rely on the modified batch normalization, i.e., transductive batch normalization (TBN), which calculates the mean and the variance from the current test batch rather than using the running mean and variance obtained from the source data, i.e., conventional batch normalization (CBN). Adopting TBN that employs test batch statistics mitigates the performance degradation caused by the domain shift. However, re-estimating normalization statistics using test data depends on impractical assumptions that a test batch should be large enough and be drawn from i.i.d. stream, and we observed that the previous methods with TBN show critical performance drop without the assumptions. In this paper, we identify that CBN and TBN are in a trade-off relationship and present a new test-time normalization (TTN) method that interpolates the statistics by adjusting the importance between CBN and TBN according to the domain-shift sensitivity of each BN layer. Our proposed TTN improves model robustness to shifted domains across a wide range of batch sizes and in various realistic evaluation scenarios. TTN is widely applicable to other test-time adaptation methods that rely on updating model parameters via backpropagation. We demonstrate that adopting TTN further improves their performance and achieves state-of-the-art performance in various standard benchmarks.
## Keyword: raw image
There is no result
|
process
|
new submissions for mon feb keyword events context understanding in computer vision a survey authors xuan wang zhigang zhu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract contextual information plays an important role in many computer vision tasks such as object detection video action detection image classification etc recognizing a single object or action out of context could be sometimes very challenging and context information may help improve the understanding of a scene or an event greatly appearance context information e g colors or shapes of the background of an object can improve the recognition accuracy of the object in the scene semantic context e g a keyboard on an empty desk vs a keyboard next to a desktop computer will improve accuracy and exclude unrelated events context information that are not in the image itself such as the time or location of an images captured can also help to decide whether certain event or action should occur other types of context e g structure of a building will also provide additional information to improve the accuracy in this survey different context information that has been used in computer vision tasks is reviewed we categorize context into different types and different levels we also review available machine learning models and image video datasets that can employ context information furthermore we compare context based integration and context free integration in mainly two classes of tasks image based and video based finally this survey is concluded by a set of promising future directions in context learning and utilization best bert pre training for sign language recognition with coupling tokenization authors weichao zhao hezhen hu wengang zhou jiaxin shi houqiang li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this work we are dedicated to leveraging the bert pre training success and modeling the domain specific statistics to fertilize the sign language recognition slr model considering the dominance of hand and body in sign language expression we organize them as pose triplet units and feed them into the transformer backbone in a frame wise manner pre training is performed via reconstructing the masked triplet unit from the corrupted input sequence which learns the hierarchical correlation context cues among internal and external triplet units notably different from the highly semantic word token in bert the pose unit is a low level signal originally located in continuous space which prevents the direct adoption of the bert cross entropy objective to this end we bridge this semantic gap via coupling tokenization of the triplet unit it adaptively extracts the discrete pseudo label from the pose triplet unit which represents the semantic gesture body state after pre training we fine tune the pre trained encoder on the downstream slr task jointly with the newly added task specific layer extensive experiments are conducted to validate the effectiveness of our proposed method achieving new state of the art performance on all four benchmarks with a notable gain generalized video anomaly event detection systematic taxonomy and comparison of deep models authors yang liu dingkang yang yan wang jing liu liang song subjects computer vision and pattern recognition cs cv multimedia cs mm arxiv link pdf link abstract video anomaly event detection vaed is the core technology of intelligent surveillance systems aiming to temporally or spatially locate anomalous events in videos 
with the penetration of deep learning the recent advances in vaed have diverged various routes and achieved significant success however most existing reviews focus on traditional and unsupervised vaed methods lacking attention to emerging weakly supervised and fully unsupervised routes therefore this review extends the narrow vaed concept from unsupervised video anomaly detection to generalized video anomaly event detection gvaed which provides a comprehensive survey that integrates recent works based on different assumptions and learning frameworks into an intuitive taxonomy and coordinates unsupervised weakly supervised fully unsupervised and supervised vaed routes to facilitate future researchers this review collates and releases research resources such as datasets available codes programming tools and literature moreover this review quantitatively compares the model performance and analyzes the research challenges and possible trends for future work dual memory units with uncertainty regulation for weakly supervised video anomaly detection authors hang zhou junqing yu wei yang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract learning discriminative features for effectively separating abnormal events from normality is crucial for weakly supervised video anomaly detection ws vad tasks existing approaches both video and segment level label oriented mainly focus on extracting representations for anomaly data while neglecting the implication of normal data we observe that such a scheme is sub optimal i e for better distinguishing anomaly one needs to understand what is a normal state and may yield a higher false alarm rate to address this issue we propose an uncertainty regulated dual memory units ur dmu model to learn both the representations of normal data and discriminative features of abnormal data to be specific inspired by the traditional global and local structure on graph convolutional networks we introduce a global and local multi head self attention gl mhsa module for the transformer network to obtain more expressive embeddings for capturing associations in videos then we use two memory banks one additional abnormal memory for tackling hard samples to store and separate abnormal and normal prototypes and maximize the margins between the two representations finally we propose an uncertainty learning scheme to learn the normal data latent space that is robust to noise from camera switching object changing scene transforming etc extensive experiments on xd violence and ucf crime datasets demonstrate that our method outperforms the state of the art methods by a sizable margin keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb is multi modal vision supervision beneficial to language authors avinash madasu vasudev lal subjects computer vision and pattern recognition cs cv artificial intelligence cs ai computation and language cs cl arxiv link pdf link abstract vision image and video language vl pre training is the recent popular paradigm that achieved state of the art results on multi modal tasks like image retrieval video retrieval visual question answering etc these models are trained in an unsupervised way and greatly benefit from the complementary modality supervision in this paper we explore if the language representations trained using vision supervision perform better than vanilla language representations on natural 
language understanding and commonsense reasoning benchmarks we experiment with a diverse set of image text models such as albef blip meter and video text models like alpro frozen in time fit violet we compare the performance of language representations of stand alone text encoders of these models to the language representations of text encoders learnt through vision supervision our experiments suggest that vanilla language representations show superior performance on most of the tasks these results shed light on the current drawbacks of the vision language models keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression cen hdr computationally efficient neural network for real time high dynamic range imaging authors steven tel barthélémy heyrman dominique ginhac subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract high dynamic range hdr imaging is still a challenging task in modern digital photography recent research proposes solutions that provide high quality acquisition but at the cost of a very large number of operations and a slow inference time that prevent the implementation of these solutions on lightweight real time systems in this paper we propose cen hdr a new computationally efficient neural network by providing a novel architecture based on a light attention mechanism and sub pixel convolution operations for real time hdr imaging we also provide an efficient training scheme by applying network compression using knowledge distillation we performed extensive qualitative and quantitative comparisons to show that our approach produces competitive results in image quality while being faster than state of the art solutions allowing it to be practically deployed under real time constraints experimental results show our method obtains a score of mu psnr on the dataset with a framerate of fps using a macbook npu keyword raw unsupervised ore waste classification on open cut mine faces using close range hyperspectral data authors lloyd windrim arman melkumyan richard j murphy anna chlingaryan raymond leung subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the remote mapping of minerals and discrimination of ore and waste on surfaces are important tasks for geological applications such as those in mining such tasks have become possible using ground based close range hyperspectral sensors which can remotely measure the reflectance properties of the environment with high spatial and spectral resolution however autonomous mapping of mineral spectra measured on an open cut mine face remains a challenging problem due to the subtleness of differences in spectral absorption features between mineral and rock classes as well as variability in the illumination of the scene an additional layer of difficulty arises when there is no annotated data available to train a supervised learning algorithm a pipeline for unsupervised mapping of spectra on a mine face is proposed which draws from several recent advances in the hyperspectral machine learning literature the proposed pipeline brings together unsupervised and self supervised algorithms in a unified system to map minerals on a mine face without the need for human annotated training data the pipeline is evaluated with a hyperspectral image dataset of an open cut mine face comprising mineral ore martite and non mineralised shale the combined system is shown to 
produce a superior map to its constituent algorithms and the consistency of its mapping capability is demonstrated using data acquired at two different times of day invariant slot attention object discovery with slot centric reference frames authors ondrej biza sjoerd van steenkiste mehdi s m sajjadi gamaleldin f elsayed aravindh mahendran thomas kipf subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract automatically discovering composable abstractions from raw perceptual data is a long standing challenge in machine learning recent slot based neural networks that learn about objects in a self supervised manner have made exciting progress in this direction however they typically fall short at adequately capturing spatial symmetries present in the visual world which leads to sample inefficiency such as when entangling object appearance and pose in this paper we present a simple yet highly effective method for incorporating spatial symmetries via slot centric reference frames we incorporate equivariance to per object pose transformations into the attention and generation mechanism of slot attention by translating scaling and rotating position encodings these changes result in little computational overhead are easy to implement and can result in large gains in terms of data efficiency and overall improvements to object discovery we evaluate our method on a wide range of synthetic object discovery benchmarks namely clevr tetrominoes clevrtex objects room and multishapenet and show promising improvements on the challenging real world waymo open dataset is multi modal vision supervision beneficial to language authors avinash madasu vasudev lal subjects computer vision and pattern recognition cs cv artificial intelligence cs ai computation and language cs cl arxiv link pdf link abstract vision image and video language vl pre training is the recent popular paradigm that achieved state of the art results on multi modal tasks like image retrieval video retrieval visual question answering etc these models are trained in an unsupervised way and greatly benefit from the complementary modality supervision in this paper we explore if the language representations trained using vision supervision perform better than vanilla language representations on natural language understanding and commonsense reasoning benchmarks we experiment with a diverse set of image text models such as albef blip meter and video text models like alpro frozen in time fit violet we compare the performance of language representations of stand alone text encoders of these models to the language representations of text encoders learnt through vision supervision our experiments suggest that vanilla language representations show superior performance on most of the tasks these results shed light on the current drawbacks of the vision language models ttn a domain shift aware batch normalization in test time adaptation authors hyesu lim byeonggeun kim jaegul choo sungha choi subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper proposes a novel batch normalization strategy for test time adaptation recent test time adaptation methods heavily rely on the modified batch normalization i e transductive batch normalization tbn which calculates the mean and the variance from the current test batch rather than using the running mean and variance obtained from the source data i e conventional batch normalization cbn adopting 
tbn that employs test batch statistics mitigates the performance degradation caused by the domain shift however re estimating normalization statistics using test data depends on impractical assumptions that a test batch should be large enough and be drawn from i i d stream and we observed that the previous methods with tbn show critical performance drop without the assumptions in this paper we identify that cbn and tbn are in a trade off relationship and present a new test time normalization ttn method that interpolates the statistics by adjusting the importance between cbn and tbn according to the domain shift sensitivity of each bn layer our proposed ttn improves model robustness to shifted domains across a wide range of batch sizes and in various realistic evaluation scenarios ttn is widely applicable to other test time adaptation methods that rely on updating model parameters via backpropagation we demonstrate that adopting ttn further improves their performance and achieves state of the art performance in various standard benchmarks keyword raw image there is no result
| 1
|
82,425
| 15,646,559,276
|
IssuesEvent
|
2021-03-23 01:12:27
|
jgeraigery/linux
|
https://api.github.com/repos/jgeraigery/linux
|
opened
|
CVE-2020-11565 (Medium) detected in linuxv5.2
|
security vulnerability
|
## CVE-2020-11565 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/mm/mempolicy.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/mm/mempolicy.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** An issue was discovered in the Linux kernel through 5.6.2. mpol_parse_str in mm/mempolicy.c has a stack-based out-of-bounds write because an empty nodelist is mishandled during mount option parsing, aka CID-aa9f7d5172fa. NOTE: Someone in the security community disagrees that this is a vulnerability because the issue "is a bug in parsing mount options which can only be specified by a privileged user, so triggering the bug does not grant any powers not already held.".
<p>Publish Date: 2020-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11565>CVE-2020-11565</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: High
  - Availability Impact: High
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-11565">https://www.linuxkernelcves.com/cves/CVE-2020-11565</a></p>
<p>Release Date: 2020-06-10</p>
<p>Fix Resolution: v5.7-rc1,v3.16.83,v4.14.176,v4.19.115,v4.4.219,v4.9.219,v5.4.31,v5.5.16,v5.6.3</p>
</p>
</details>
<p></p>
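The fix-resolution list above gives the first patched tag on each stable branch. As a hedged illustration (not part of the advisory), the sketch below compares a running kernel version against that list; the parsing helper is an assumption made for clarity.
```python
# First fixed release per stable branch, copied from the advisory above
# (leading "v" and the "-rc1" suffix dropped for numeric comparison).
FIXED = ["3.16.83", "4.4.219", "4.9.219", "4.14.176",
         "4.19.115", "5.4.31", "5.5.16", "5.6.3", "5.7"]

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_patched(running: str) -> bool:
    # Look for the fixed release on the same major.minor branch and
    # check the running kernel is at or past it; unknown branches are
    # treated as unpatched, which is the conservative answer.
    run = parse(running)
    for fix in map(parse, FIXED):
        if fix[:2] == run[:2]:
            return run >= fix
    return False

print(is_patched("5.4.30"))   # False: one release before the 5.4.31 fix
print(is_patched("4.19.120")) # True
```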
|
True
|
CVE-2020-11565 (Medium) detected in linuxv5.2 - ## CVE-2020-11565 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/mm/mempolicy.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/mm/mempolicy.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** An issue was discovered in the Linux kernel through 5.6.2. mpol_parse_str in mm/mempolicy.c has a stack-based out-of-bounds write because an empty nodelist is mishandled during mount option parsing, aka CID-aa9f7d5172fa. NOTE: Someone in the security community disagrees that this is a vulnerability because the issue "is a bug in parsing mount options which can only be specified by a privileged user, so triggering the bug does not grant any powers not already held.".
<p>Publish Date: 2020-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11565>CVE-2020-11565</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: High
  - Availability Impact: High
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-11565">https://www.linuxkernelcves.com/cves/CVE-2020-11565</a></p>
<p>Release Date: 2020-06-10</p>
<p>Fix Resolution: v5.7-rc1,v3.16.83,v4.14.176,v4.19.115,v4.4.219,v4.9.219,v5.4.31,v5.5.16,v5.6.3</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href vulnerable source files linux mm mempolicy c linux mm mempolicy c vulnerability details disputed an issue was discovered in the linux kernel through mpol parse str in mm mempolicy c has a stack based out of bounds write because an empty nodelist is mishandled during mount option parsing aka cid note someone in the security community disagrees that this is a vulnerability because the issue is a bug in parsing mount options which can only be specified by a privileged user so triggering the bug does not grant any powers not already held publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
1,218
| 3,749,872,078
|
IssuesEvent
|
2016-03-11 02:24:09
|
mapbox/mapbox-gl-js
|
https://api.github.com/repos/mapbox/mapbox-gl-js
|
closed
|
Automate creating GitHub release and publishing npm package on CI server
|
testing & release process
|
Ideal workflow:

* Do the git stuff and push a `vX.Y.Z` tag
* CI takes over:
  * Runs tests
  * Pushes to CDN
  * Pushes to npm
  * Creates a release on GitHub with content from CHANGELOG.md (see the sketch below)
  * Bonus: update prior releases where CHANGELOG.md has been edited

CI would need additional permissions:

* npm token with access to mapbox-gl-js
* GitHub token with access to mapbox-gl-js releases
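A minimal sketch of the release-creation step follows, assuming the CI job exposes the pushed tag as `CI_TAG` and a token as `GITHUB_TOKEN`; the changelog parser assumes one `## <version>` heading per release, which is a simplification, not a guarantee about CHANGELOG.md's actual format.
```python
import json
import os
import re
import urllib.request

def changelog_section(tag: str, path: str = "CHANGELOG.md") -> str:
    # Grab the text between this tag's "## <version>" heading and the
    # next "## " heading (or end of file).
    text = open(path, encoding="utf-8").read()
    version = re.escape(tag.lstrip("v"))
    match = re.search(rf"^## {version}.*?\n(.*?)(?=^## |\Z)", text, re.S | re.M)
    return match.group(1).strip() if match else ""

def create_release(tag: str, body: str) -> None:
    # POST /repos/{owner}/{repo}/releases is the documented GitHub API
    # endpoint for creating a release from an existing tag.
    req = urllib.request.Request(
        "https://api.github.com/repos/mapbox/mapbox-gl-js/releases",
        data=json.dumps({"tag_name": tag, "name": tag, "body": body}).encode(),
        headers={"Authorization": "token " + os.environ["GITHUB_TOKEN"],
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    tag = os.environ["CI_TAG"]  # e.g. "v0.15.0", set by the tagged CI build
    create_release(tag, changelog_section(tag))
```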
|
1.0
|
Automate creating GitHub release and publishing npm package on CI server - Ideal workflow:

* Do the git stuff and push a `vX.Y.Z` tag
* CI takes over:
  * Runs tests
  * Pushes to CDN
  * Pushes to npm
  * Creates a release on GitHub with content from CHANGELOG.md
  * Bonus: update prior releases where CHANGELOG.md has been edited

CI would need additional permissions:

* npm token with access to mapbox-gl-js
* GitHub token with access to mapbox-gl-js releases
|
process
|
automate creating github release and publishing npm package on ci server ideal workflow do the git stuff and push a vx y z tag ci takes over runs tests pushes to cdn pushes to npm creates a release on github with content from changelog md bonus update prior releases where changelog md has been edited ci would need additional permissions npm token with access to mapbox gl js github token with access to mapbox gl js releases
| 1
|
5,321
| 8,136,186,842
|
IssuesEvent
|
2018-08-20 07:30:24
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Metadata overwrites event data
|
:Processors libbeat
|
- Version: 6.3.2
- Operating System: Ubuntu 16.04
- Discuss Forum URL: https://discuss.elastic.co/t/filebeat-should-metadata-overwrite-event-data/143919
- Steps to Reproduce:
Prepare a log file with:
```
{ "fieldA": "A1", "fieldB": "B1", "host": "somehostname" }
```
Filebeat input configuration:
```
- type: log
  enabled: true
  paths:
    - /var/log/filebeat-test/bug.log
  fields_under_root: true
  json:
    keys_under_root: true
    overwrite_keys: true
    add_error_key: true
```
Resulting output:
```json
{
  "@timestamp": "2018-08-13T19:38:44.658Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.3.2"
  },
  "prospector": {
    "type": "log"
  },
  "offset": 0,
  "fieldA": "A1",
  "fieldB": "B1",
  "host": { # <-- should have "host": "somehostname",
    "name": "master1"
  },
  "source": "/var/log/filebeat-test/bug.log",
  "input": {
    "type": "log"
  },
  "beat": {
    "name": "master1",
    "hostname": "master1",
    "version": "6.3.2"
  }
}
```
The problem here is that `host` was overwritten by `host.name` from this PR: https://github.com/elastic/beats/pull/7051, and the documentation states that:
```
overwrite_keys
If keys_under_root and this setting are enabled, then the values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
```
https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-input-log.html#filebeat-input-log-config-json
I've been digging a little through the code, and the issue seems to be the order in which these actions are performed: https://github.com/tsg/beats/blob/6.3/libbeat/publisher/pipeline/processor.go#L32-L41
Beat and host metadata are added at the end, right before the global processors. They should perhaps be added before the client processors, or fields that are already in the event should be left untouched when the metadata is added. The purpose was to avoid any processor removing beats data (https://github.com/elastic/beats/pull/5149#issuecomment-328833290), but it has the undesired side effect of overwriting event data.
This was also discussed here: https://discuss.elastic.co/t/logstash-errors-after-upgrading-to-filebeat-6-3-0/135984/19
Btw, if we agree on one of the two alternatives (moving the addition of the beat fields a few steps earlier, or avoiding the overwrite), I can work on a PR.
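For what it's worth, the "avoid overwrite" alternative boils down to merging metadata only into keys the event does not already define. A rough sketch in Go, using plain maps to stand in for libbeat's event type (illustrative only, not the actual libbeat code):
```go
// Sketch of the "avoid overwrite" alternative: metadata only fills keys
// the event does not already define. Plain maps stand in for libbeat's
// event type; this is illustrative, not the actual libbeat code.
package main

import "fmt"

func addFieldsIfMissing(event, meta map[string]interface{}) {
	for key, value := range meta {
		if _, exists := event[key]; !exists {
			event[key] = value // fill the gap, never clobber event data
		}
	}
}

func main() {
	// Fields decoded from the JSON log line, keys_under_root style.
	event := map[string]interface{}{
		"fieldA": "A1",
		"host":   "somehostname",
	}
	// Metadata the pipeline would normally stamp on at the end.
	meta := map[string]interface{}{
		"host": map[string]interface{}{"name": "master1"},
		"beat": map[string]interface{}{"name": "master1", "version": "6.3.2"},
	}
	addFieldsIfMissing(event, meta)
	fmt.Println(event["host"]) // prints "somehostname", as overwrite_keys promises
}
```
Moving the metadata step before the client processors would achieve the same outcome for this case, at the cost of the metadata again being visible to (and removable by) those processors.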
|
1.0
|
Metadata overwrites event data - - Version: 6.3.2
- Operating System: Ubuntu 16.04
- Discuss Forum URL: https://discuss.elastic.co/t/filebeat-should-metadata-overwrite-event-data/143919
- Steps to Reproduce:
Prepare a log file with:
```
{ "fieldA": "A1", "fieldB": "B1", "host": "somehostname" }
```
Filebeat input configuration:
```
- type: log
enabled: true
paths:
- /var/log/filebeat-test/bug.log
fields_under_root: true
json:
keys_under_root: true
overwrite_keys: true
add_error_key: true
```
Resulting output:
```json
{
"@timestamp": "2018-08-13T19:38:44.658Z",
"@metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.3.2"
},
"prospector": {
"type": "log"
},
"offset": 0,
"fieldA": "A1",
"fieldB": "B1",
"host": { # <-- should have "host": "somehostname",
"name": "master1"
},
"source": "/var/log/filebeat-test/bug.log",
"input": {
"type": "log"
},
"beat": {
"name": "master1",
"hostname": "master1",
"version": "6.3.2"
}
}
```
The problem here is that `host` was overwritten by `host.name`, introduced in this PR: https://github.com/elastic/beats/pull/7051, even though the documentation states that:
```
overwrite_keys
If keys_under_root and this setting are enabled, then the values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
```
https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-input-log.html#filebeat-input-log-config-json
I've been digging a little through the code, and the issue seems to be the order in which these actions are performed: https://github.com/tsg/beats/blob/6.3/libbeat/publisher/pipeline/processor.go#L32-L41
Beat and host metadata are added at the end, right before the global processors. They should perhaps be added before the client processors, or fields that are already in the event should be left untouched when the metadata is added. The purpose was to avoid any processor removing beats data (https://github.com/elastic/beats/pull/5149#issuecomment-328833290), but it has the undesired side effect of overwriting event data.
This was also discussed here: https://discuss.elastic.co/t/logstash-errors-after-upgrading-to-filebeat-6-3-0/135984/19
Btw, if we agree on one of the two alternatives (moving the addition of the beat fields a few steps earlier, or avoiding the overwrite), I can work on a PR.
|
process
|
metadata overwrites event data version operating system ubuntu discuss forum url steps to reproduce prepare a log file with fielda fieldb host somehostname filebeat input configuration type log enabled true paths var log filebeat test bug log fields under root true json keys under root true overwrite keys true add error key true resulting output json timestamp metadata beat filebeat type doc version prospector type log offset fielda fieldb host should have host somehostname name source var log filebeat test bug log input type log beat name hostname version the problem here is that host was overwritten by host name from this pr and documentation states that overwrite keys if keys under root and this setting are enabled then the values from the decoded json object overwrite the fields that filebeat normally adds type source offset etc in case of conflicts i ve been digging a little around the code and the issue seems to be in the order in which these actions are performed beats and host metadata are added at the end right before global processors maybe it should be added before the client processors or avoid overwriting fields that are already in the event when adding the fields the purpose was to avoid any processor removing beats data but with the undesired effect of overwriting event data this was also discussed here btw if we agree on one of the two alternatives move the adding beat field a few steps earlier or avoid overwrite i can work on a pr
| 1
|