Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
86,842 | 15,755,881,479 | IssuesEvent | 2021-03-31 02:33:00 | turkdevops/node | https://api.github.com/repos/turkdevops/node | opened | WS-2020-0163 (Medium) detected in marked-0.3.19.js, marked-0.3.19.tgz | security vulnerability | ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>marked-0.3.19.js</b>, <b>marked-0.3.19.tgz</b></p></summary>
<p>
<details><summary><b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: node/deps/npm/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: node/deps/npm/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
</details>
<details><summary><b>marked-0.3.19.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.3.19.tgz">https://registry.npmjs.org/marked/-/marked-0.3.19.tgz</a></p>
<p>Path to dependency file: node/deps/npm/package.json</p>
<p>Path to vulnerable library: node/deps/npm/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>archived-io.js-v0.10</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (ReDoS). rules.js has multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
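To make the ReDoS mechanism described above concrete, here is a minimal, purely illustrative Python sketch of catastrophic backtracking. The pattern is a hypothetical, deliberately pathological one, not the actual expression from marked's rules.js; it only shows how a backtracking regex engine can be driven into exponential runtime by a short crafted input.

```python
import re
import time

# Hypothetical pattern for illustration only -- NOT taken from marked's rules.js.
# Nested quantifiers like (a+)+ force a backtracking engine to retry an
# exponential number of ways to split the input once the final '$' fails.
pattern = re.compile(r'^(a+)+$')

for n in range(16, 25, 2):
    text = 'a' * n + '!'                 # trailing '!' guarantees the match fails
    start = time.perf_counter()
    pattern.match(text)                  # work grows roughly as 2**n
    print(f'n={n:2d}  {time.perf_counter() - start:.3f}s')
```

Each extra pair of characters roughly quadruples the runtime, which is why a modest payload can stall a server-side markdown renderer.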
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
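For reference, the 5.9 shown above can be reproduced from the listed metrics with the CVSS v3.0 base-score formula. The sketch below is a minimal Python rendering of that arithmetic; the weights come from the public CVSS v3.0 specification, not from this report.

```python
import math

# Vector implied by the metrics above: AV:N / AC:H / PR:N / UI:N / S:U / C:N / I:N / A:H
av, ac, pr, ui = 0.85, 0.44, 0.85, 0.85      # Network, High, None, None
c, i, a = 0.0, 0.0, 0.56                     # None, None, High

isc_base = 1 - (1 - c) * (1 - i) * (1 - a)   # 0.56
impact = 6.42 * isc_base                     # scope is Unchanged
exploitability = 8.22 * av * ac * pr * ui

def roundup(x):
    """CVSS 'round up to one decimal place'."""
    return math.ceil(x * 10) / 10

base_score = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base_score)                            # -> 5.9, matching the score above
```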
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0163 (Medium) detected in marked-0.3.19.js, marked-0.3.19.tgz - ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>marked-0.3.19.js</b>, <b>marked-0.3.19.tgz</b></p></summary>
<p>
<details><summary><b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: node/deps/npm/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: node/deps/npm/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
</details>
<details><summary><b>marked-0.3.19.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.3.19.tgz">https://registry.npmjs.org/marked/-/marked-0.3.19.tgz</a></p>
<p>Path to dependency file: node/deps/npm/package.json</p>
<p>Path to vulnerable library: node/deps/npm/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>archived-io.js-v0.10</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (ReDoS). rules.js has multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | ws medium detected in marked js marked tgz ws medium severity vulnerability vulnerable libraries marked js marked tgz marked js a markdown parser built for speed library home page a href path to dependency file node deps npm node modules marked www demo html path to vulnerable library node deps npm node modules marked www lib marked js dependency hierarchy x marked js vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file node deps npm package json path to vulnerable library node deps npm node modules marked package json dependency hierarchy x marked tgz vulnerable library found in base branch archived io js vulnerability details marked before is vulnerable to regular expression denial of service redos rules js have multiple unused capture groups which can lead to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked step up your open source security game with whitesource | 0 |
14,422 | 17,470,618,563 | IssuesEvent | 2021-08-07 04:05:50 | knowledge-for-good/knowledgeforgood | https://api.github.com/repos/knowledge-for-good/knowledgeforgood | closed | How do you find consensus on the best roadmap for learning something? | question discussion-requested brainstorming learning process paused make-into-story | If I build a course or resources for learning a topic, what happens if someone thinks it's not a good resource?
What if they think a different "roadmap for learning" is better? | 1.0 | How do you find consensus on the best roadmap for learning something? - If I build a course or resources for learning a topic, what happens if someone thinks it's not a good resource?
What if they think a different "roadmap for learning" is better? | process | how do you find consensus on the best roadmap for learning something if i build a course or resources for learning a topic what happens if someone thinks it s not a good resource what if they think a different roadmap for learning is better | 1 |
35,875 | 8,026,923,390 | IssuesEvent | 2018-07-27 07:05:30 | mozilla/addons-frontend | https://api.github.com/repos/mozilla/addons-frontend | closed | Update to react-router-4 | component: code quality priority: p3 triaged | This will be a fair bit of work but filing here as an issue on our radar, so we can close the Greenkeeper PRs and track it here. | 1.0 | Update to react-router-4 - This will be a fair bit of work but filing here as an issue on our radar, so we can close the Greenkeeper PRs and track it here. | non_process | update to react router this will be a fair bit of work but filing here as an issue on our radar so we can close the greenkeeper prs and track it here | 0 |
11,907 | 14,698,926,841 | IssuesEvent | 2021-01-04 07:34:22 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | opened | [SB] Issue in Instruction text field | Bug P1 Process: Dev | Steps:-
1. Configure an instruction step for the Questionnaire
2. Add the text in different lines and click on Done button and verify
A/R:- All the entered texts are displaying in a single line after Saving
E/R:- Entered text should be displayed as configured
Instance:- DEV
| 1.0 | [SB] Issue in Instruction text field - Steps:-
1. Configure an instruction step for the Questionnaire
2. Add the text in different lines and click on Done button and verify
A/R:- All the entered texts are displaying in a single line after Saving
E/R:- Entered text should be displayed as configured
Instance:- DEV
| process | issue in instruction text field steps configure an instruction step for the questionnaire add the text in different lines and click on done button and verify a r all the entered texts are displaying in a single line after saving e r entered text should be displayed as configured instance dev | 1 |
41,514 | 16,769,984,498 | IssuesEvent | 2021-06-14 13:46:20 | gradido/gradido | https://api.github.com/repos/gradido/gradido | opened | 🔧 [Refactor] Compression of vendor.js | refactor service: frontend | ## 🔧 Refactor ticket
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the problem is.-->
Apparently vendor.js is not compressed and very big (2mb). We need to reduce and compress this. | 1.0 | 🔧 [Refactor] Compression of vendor.js - ## 🔧 Refactor ticket
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the problem is.-->
Apparently vendor.js is not compressed and very big (2mb). We need to reduce and compress this. | non_process | 🔧 compression of vendor js 🔧 refactor ticket apparently vendor js is not compressed and very big we need to reduce and compress this | 0 |
350,312 | 31,878,566,867 | IssuesEvent | 2023-09-16 05:03:52 | istio/istio | https://api.github.com/repos/istio/istio | closed | Usability: Base chart name can be an issue for local helm chart repositories | kind/docs area/test and release area/environments area/user experience lifecycle/stale | The chart `istio/base` makes sense in the context of istio helm repositories. However some organizations choose to setup their helm repos to hold upstream helm chart for different purposes.
If an organization wants the helm chart in their repositories, this can cause a name conflict, or leave the name too generic to convey what the package is about.
E.g.: `my-upstream-repo/base` would not make sense.
An alternative could be adding an `istio-` prefix to the base helm chart name.
[ ] Ambient
[X] Docs
[X] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[X] User Experience
[X] Developer Infrastructure
**Affected features (please put an X in all that apply)**
[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane
| 1.0 | Usability: Base chart name can be an issue for local helm chart repositories - The chart `istio/base` makes sense in the context of istio helm repositories. However some organizations choose to setup their helm repos to hold upstream helm chart for different purposes.
If an organization wants the helm chart in their repositories, this can cause a name conflict, or leave the name too generic to convey what the package is about.
E.g.: `my-upstream-repo/base` would not make sense.
An alternative could be adding an `istio-` prefix to the base helm chart name.
[ ] Ambient
[X] Docs
[X] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[X] User Experience
[X] Developer Infrastructure
**Affected features (please put an X in all that apply)**
[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane
| non_process | usability base chart name can be an issue for local helm chart repositories the chart istio base makes sense in the context of istio helm repositories however some organizations choose to setup their helm repos to hold upstream helm chart for different purposes if a organization wants the helm chart in their repositories this can cause a name conflict or inaccuracy of the name to define what this package is about e g my upstream repo base would not make sense an alternative could be adding istio prefix to the base helm chart name ambient docs installation networking performance and scalability extensions and telemetry security test and release user experience developer infrastructure affected features please put an x in all that apply multi cluster virtual machine multi control plane | 0 |
63,775 | 26,514,040,744 | IssuesEvent | 2023-01-18 19:20:39 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | closed | Project: Historic Traffic Impact Analysis Tracking | Type: Data Service: Apps Need: 2-Should Have Epic Workgroup: TDS Project Index Product: TDS Portal Project: Historic TIA Data | We're enhancing our Traffic Impact Analysis (TIA) application to support the tracking of historical TIA cases.
The Transportation Engineering and Transportation Development Services groups have expressed a need to see TIA historic case data and TIA mitigation data that existed prior to the TIA Mod application in the TDS Portal application. The engineers and reviewers need a place to reference those old cases and mitigations to be able to conduct research that can allow them to utilize fiscal monies set aside for transportation mitigation projects.
## Background
The project is a high priority for the department as we need to track $15-$20 million in TIA Mitigation Data. This tracking will allow us to determine which roadway improvements can be done as we will have fiscal resources to be able to support those initiatives.
## Scope & Deliverables
- We are importing historic TIA cases into the TIA mod for the TDS reviewers to reference the captured dates their team needs to track for reporting purposes.
- We are doing data (spreadsheet) clean up in order to have a single source of TIA Mitigation data with spatial coordinates and TIA Mitigation Fee Paid Dates (as close as possible)
- Constraints: we are being requested to pull the TIA Memo dates from the PDF memos. However the PDFs aren't all Doc to PDF, some are just scanned in and converted to PDFs.
- We have thought of searching in the AMANDA DB to see if there is a date field that would suffice
- We are importing historical TIA mitigation data into the system to be able to map it spatially for use in the Street Impact Fee Program as well.
- TIA Mitigation page that allows TED group to manage the TIA Fiscal and allow users to search and find TIA Fiscal Memos
## Desired Outcomes
This project will be successful if we are able to give users all their historical TIA cases and TIA mitigations at their fingertips. The TED group managing historical TIA fiscal will have a place to point ATD users to query locations (tabular search) for TIA locations with fiscal.
- Historic TIA cases being queried in the system like the other TIA cases being managed
- TIA Mitigation page that allows TED group to manage the TIA Fiscal and allow users to search and find TIA Fiscal Memos
## Timebox
We are thinking this may take up several sprints depending on the data clean up portion for both TIA fiscal and TIA mitigations. Perhaps the 2021 calendar year to finish?
## Concepts and references
### Sources
- [Original Spreadsheets folder](https://drive.google.com/drive/folders/1I7lSLv3fR-nS9crk8fZBVqJrGi_rNYv3?usp=sharing)
<!-- + Image (No Header) (No Header) + -->

---
This Github issue represents a project of Austin Transportation's [Data & Technology Services](https://austinmobility.io/) team. Project status is documented regularly in the comments below.
<!-- Don't forget to:
- Add a new "Project:" label here: https://github.com/cityofaustin/atd-data-tech/labels. Use the hex code #3D3D3D.
- Add a project evaluation here: https://atd.knack.com/dts#project-evaluation/
-->
| 1.0 | Project: Historic Traffic Impact Analysis Tracking - We're enhancing our Traffic Impact Analysis (TIA) application to support the tracking of historical TIA cases.
The Transportation Engineering and Transportation Development Services groups have expressed a need to see TIA historic case data and TIA mitigation data that existed prior to the TIA Mod application in the TDS Portal application. The engineers and reviewers need a place to reference those old cases and mitigations to be able to conduct research that can allow them to utilize fiscal monies set aside for transportation mitigation projects.
## Background
The project is a high priority for the department as we need to track $15-$20 million in TIA Mitigation Data. This tracking will allow us to determine which roadway improvements can be done as we will have fiscal resources to be able to support those initiatives.
## Scope & Deliverables
- We are importing historic TIA cases into the TIA mod for the TDS reviewers to reference the captured dates their team needs to track for reporting purposes.
- We are doing data (spreadsheet) clean up in order to have a single source of TIA Mitigation data with spatial coordinates and TIA Mitigation Fee Paid Dates (as close as possible)
- Constraints: we are being requested to pull the TIA Memo dates from the PDF memos. However the PDFs aren't all Doc to PDF, some are just scanned in and converted to PDFs.
- We have thought of searching in the AMANDA DB to see if there is a date field that would suffice
- We are importing historical TIA mitigation data into the system to be able to map it spatially for use in the Street Impact Fee Program as well.
- TIA Mitigation page that allows TED group to manage the TIA Fiscal and allow users to search and find TIA Fiscal Memos
## Desired Outcomes
This project will be successful if we are able to give users all their historical TIA cases and TIA mitigations at their fingertips. The TED group managing historical TIA fiscal will have a place to point ATD users to query locations (tabular search) for TIA locations with fiscal.
- Historic TIA cases being queried in the system like the other TIA cases being managed
- TIA Mitigation page that allows TED group to manage the TIA Fiscal and allow users to search and find TIA Fiscal Memos
## Timebox
We are thinking this may take up several sprints depending on the data clean up portion for both TIA fiscal and TIA mitigations. Perhaps the 2021 calendar year to finish?
## Concepts and references
### Sources
- [Original Spreadsheets folder](https://drive.google.com/drive/folders/1I7lSLv3fR-nS9crk8fZBVqJrGi_rNYv3?usp=sharing)
<!-- + Image (No Header) (No Header) + -->

---
This Github issue represents a project of Austin Transportation's [Data & Technology Services](https://austinmobility.io/) team. Project status is documented regularly in the comments below.
<!-- Don't forget to:
- Add a new "Project:" label here: https://github.com/cityofaustin/atd-data-tech/labels. Use the hex code #3D3D3D.
- Add a project evaluation here: https://atd.knack.com/dts#project-evaluation/
-->
| non_process | project historic traffic impact analysis tracking we re enhancing our traffic impact analysis tia application to support the tracking of historical tia cases the transportation engineering and transportation development services groups have expressed a need to see tia historic case data and tia mitigation data that existed prior to the tia mod application in the tds portal application the engineers and reviewers need a place to reference those old cases and mitigations to be able to conduct research that can allow them to utilize fiscal monies set aside for transportation mitigation projects background the project is a high priority for the department as we need to track million in tia mitigation data this tracking will allow us to determine which roadway improvements can be done as we will have fiscal resources to be able to support those initiatives scope deliverables we are importing historic tia cases into the tia mod for the tds reviewers to reference the captured dates their team needs to track for reporting purposes we are doing data spreadsheet clean up in order to have a single source of tia mitigation data with spatial coordinates and tia mitigation fee paid dates as close as possible constraints we are being requested to pull the tia memo dates from the pdf memos however the pdfs aren t all doc to pdf some are just scanned in and converted to pdfs we have thought of searching in the amanda db to see if there is a date field that would suffice we are importing historical tia mitigation data into the system to be able to map it spatially for use in the street impact fee program as well tia mitigation page that allows ted group to manage the tia fiscal and allow users to search and find tia fiscal memos desired outcomes this project will be successful if we are able to allow users the ability to have all their historical tia cases and tia mitigations at their finger tips the ted group managing historical tia fiscal will have a place to point atd users to query locations tabular search for tia locations with fiscal historic tia cases being queried in the system like the other tia cases being managed tia mitigation page that allows ted group to manage the tia fiscal and allow users to search and find tia fiscal memos timebox we are thinking this may take up several sprints depending on the data clean up portion for both tia fiscal and tia mitigations perhaps the calendar year to finish concepts and references sources this github issue represents a project of austin transportation s team project status is documented regularly in the comments below don t forget to add a new project label here use the hex code add a project evaluation here | 0 |
16,358 | 21,035,809,494 | IssuesEvent | 2022-03-31 07:43:58 | paul-buerkner/brms | https://api.github.com/repos/paul-buerkner/brms | closed | Speed up `summary` function: problem of latent variables being summarized | efficiency post-processing | Hello,
I have a `brms fit` of a latent factor model with 20000 subject ids and 140162 observations. Model looks like this:
```
Family: MV(gaussian, gaussian, gaussian, gaussian, gaussian, gaussian, gaussian, gaussian, bernoulli)
Links: mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = logit
Formula: x1 ~ 0 + mi(F1) + (1 | ID)
x2 ~ 0 + mi(F1) + (1 | ID)
x3 ~ 0 + mi(F1) + (1 | ID)
x4 ~ 0 + mi(F2) + (1 | ID)
x5 ~ 0 + mi(F2) + (1 | ID)
x6 ~ 0 + mi(F2) + (1 | ID)
F1 | mi() ~ 0 + arma(time = t, gr = ID, p = 0, q = 4, cov = FALSE)
F2 | mi() ~ 0 + arma(time = t, gr = ID, p = 0, q = 4, cov = FALSE)
yBinary ~ mi(F1) + mi(F2) + (1 | ID)
Data: data0 (Number of observations: 140162)
Draws: 1 chains, each with iter = 500; warmup = 0; thin = 1;
total post-warmup draws = 500
```
From the formulas above you can see that I treat latent variables (`F1`, `F2`) as missing values with `mi()`. Source where I got this idea is here: https://github.com/paul-buerkner/brms/issues/304. These latent variables are needed to describe my variables [`x1`, `x2`, ...,, `x6`]. Latent factor estimates are not interesting to me, I am only interested in [`x1`, `x2`, ...,, `x6`] and `yBinary` estimates.
However, in `summary.brmsfit()` function an object is created that stores all the information about variables estimates in each iteration, including latent factor values for each observation and iteration. In my case, due to two latent variables treated as missing values I have 2x140162 more parameters:
1. `Ymi_F1[1]` to `Ymi_F1[140162]` for factor `F1` ,
2. `Ymi_F2[1]` to `Ymi_F2[140162]` for factor `F2`.
Storing and handling these ~280k additional parameters really slows down the `summary` function: if I drop every variable with a missing latent factor inside the `summary.brmsfit()` function (they are unnecessary to the `summary` output), `summary` takes only several minutes instead of 9-10 hours.
So apparently, the latent variables are accidentally summarized, and removing them really speeds up the `summary` function.
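To make the scaling argument concrete, here is a small, purely illustrative Python sketch (it does not call brms; column names such as `Ymi_F1[...]` only mimic the Stan parameter names mentioned above, and the counts are scaled down). It shows how per-parameter summary work is dominated by the nuisance columns once they outnumber the parameters of interest by orders of magnitude:

```python
import time
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_nuisance = 500, 20_000                        # scaled down from ~280k
draws = {f"Ymi_F1[{k}]": rng.normal(size=n_draws) for k in range(n_nuisance)}
draws["bsp_x1_miF1"] = rng.normal(size=n_draws)          # hypothetical names for the
draws["sd_ID__x1_Intercept"] = rng.normal(size=n_draws)  # few parameters of interest

def summarise(cols):
    # stand-in for the per-parameter work (mean, sd, quantiles, ESS, Rhat, ...)
    return {name: (v.mean(), v.std(), *np.quantile(v, [0.025, 0.975]))
            for name, v in cols.items()}

for label, cols in [("all parameters", draws),
                    ("without Ymi_* columns",
                     {k: v for k, v in draws.items() if not k.startswith("Ymi_")})]:
    t0 = time.perf_counter()
    summarise(cols)
    print(f"{label}: {len(cols)} columns, {time.perf_counter() - t0:.2f}s")
```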
| 1.0 | Speed up `summary` function: problem of latent variables being summarized - Hello,
I have a `brms fit` of a latent factor model with 20000 subject ids and 140162 observations. Model looks like this:
```
Family: MV(gaussian, gaussian, gaussian, gaussian, gaussian, gaussian, gaussian, gaussian, bernoulli)
Links: mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = identity; sigma = identity
mu = logit
Formula: x1 ~ 0 + mi(F1) + (1 | ID)
x2 ~ 0 + mi(F1) + (1 | ID)
x3 ~ 0 + mi(F1) + (1 | ID)
x4 ~ 0 + mi(F2) + (1 | ID)
x5 ~ 0 + mi(F2) + (1 | ID)
x6 ~ 0 + mi(F2) + (1 | ID)
F1 | mi() ~ 0 + arma(time = t, gr = ID, p = 0, q = 4, cov = FALSE)
F2 | mi() ~ 0 + arma(time = t, gr = ID, p = 0, q = 4, cov = FALSE)
yBinary ~ mi(F1) + mi(F2) + (1 | ID)
Data: data0 (Number of observations: 140162)
Draws: 1 chains, each with iter = 500; warmup = 0; thin = 1;
total post-warmup draws = 500
```
From the formulas above you can see that I treat latent variables (`F1`, `F2`) as missing values with `mi()`. Source where I got this idea is here: https://github.com/paul-buerkner/brms/issues/304. These latent variables are needed to describe my variables [`x1`, `x2`, ...,, `x6`]. Latent factor estimates are not interesting to me, I am only interested in [`x1`, `x2`, ...,, `x6`] and `yBinary` estimates.
However, in `summary.brmsfit()` function an object is created that stores all the information about variables estimates in each iteration, including latent factor values for each observation and iteration. In my case, due to two latent variables treated as missing values I have 2x140162 more parameters:
1. `Ymi_F1[1]` to `Ymi_F1[140162]` for factor `F1` ,
2. `Ymi_F2[1]` to `Ymi_F2[140162]` for factor `F2`.
Storing and handling these ~280k additional parameters really slows down the `summary` function: if I drop every variable with a missing latent factor inside the `summary.brmsfit()` function (they are unnecessary to the `summary` output), `summary` takes only several minutes instead of 9-10 hours.
So apparently, the latent variables are accidentally summarized, and removing them really speeds up the `summary` function.
| process | speed up summary function problem of latent variables being summarized hello i have a brms fit of a latent factor model with subject ids and observations model looks like this family mv gaussian gaussian gaussian gaussian gaussian gaussian gaussian gaussian bernoulli links mu identity sigma identity mu identity sigma identity mu identity sigma identity mu identity sigma identity mu identity sigma identity mu identity sigma identity mu identity sigma identity mu identity sigma identity mu logit formula mi id mi id mi id mi id mi id mi id mi arma time t gr id p q cov false mi arma time t gr id p q cov false ybinary mi mi id data number of observations draws chains each with iter warmup thin total post warmup draws from the formulas above you can see that i treat latent variables as missing values with mi source where i got this idea is here these latent variables are needed to describe my variables latent factor estimates are not interesting to me i am only interested in and ybinary estimates however in summary brmsfit function an object is created that stores all the information about variables estimates in each iteration including latent factor values for each observation and iteration in my case due to two latent variables treated as missing values i have more parameters ymi to ymi for factor ymi to ymi for factor storing and handling these additional parameters really slows down summary function if i drop every variable with missing latent factor inside summary brmsfit function they are unnecessary to summary function summary takes only several minutes instead of hours so apparently the latent variables are accidentally summarized for some reason and removing them really speeds up summary function | 1 |
18,538 | 24,554,447,675 | IssuesEvent | 2022-10-12 14:51:55 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Android] Participants are navigating to sign in screen in the below scenario | Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev | **Steps:**
1. Sign in and complete the passcode process
2. After navigating to study list screen , minimize the app and again open the app
3. Click on 'Sign in again' in passcode screen
4. Click on 'Ok' button
5. Click on 'Sign up' link in sign in screen
6. Try to enter the value in 'Email' field and Verify
**AR:** Participants are navigating to the sign in screen
**ER:** Participants should remain in the sign up screen
https://user-images.githubusercontent.com/86007179/167865416-ee977877-7f3a-4b6d-a011-ab3860f21ab9.mp4
[Android] Participants are navigating to sign in screen in the below scenario - **Steps:**
1. Sign in and complete the passcode process
2. After navigating to study list screen , minimize the app and again open the app
3. Click on 'Sign in again' in passcode screen
4. Click on 'Ok' button
5. Click on 'Sign up' link in sign in screen
6. Try to enter the value in 'Email' field and Verify
**AR:** Participants are navigating to the sign in screen
**ER:** Participants should remain in the sign up screen
https://user-images.githubusercontent.com/86007179/167865416-ee977877-7f3a-4b6d-a011-ab3860f21ab9.mp4
| process | participant s are navigating to sign in screen in the below scenario steps sign in and complete the passcode process after navigating to study list screen minimize the app and again open the app click on sign in again in passcode screen click on ok button click on sign up link in sign in screen try to enter the value in email field and verify ar participant s are navigating to sign in screen er participant s should remain in the sign up screen | 1 |
423,936 | 12,304,026,085 | IssuesEvent | 2020-05-11 19:45:19 | Lev-Echad/levechad-backend | https://api.github.com/repos/Lev-Echad/levechad-backend | closed | Volunteer certificate images with long names have cut-off text | High Priority bug volunteer-certificate | 
If a person's name is too long, this is how the certificate will end up looking (notice all text being cut off from the right).
This also happens in shorter names, I chose this very long name to make the issue clear and easy to spot.
Consider:
* First/last name character limit (maybe a combined limit)
* Reduce font size in tags
* Make it so long names run over the left part of the tag instead of changing the entire text's starting position
If a person's name is too long, this is how the certificate will end up looking (notice all text being cut off from the right).
This also happens in shorter names, I chose this very long name to make the issue clear and easy to spot.
Consider:
* First/last name character limit (maybe a combined limit)
* Reduce font size in tags
* Make it so long names run over the left part of the tag instead of changing the entire text's starting position
6,529 | 9,622,072,789 | IssuesEvent | 2019-05-14 12:14:33 | google/go-cloud | https://api.github.com/repos/google/go-cloud | closed | samples: make samples its own module | process | We currently have `samples/appengine` as a separate module.
We should instead make `samples/` a module, with all the samples just commands within that module. This will be needed as part of #886, since samples import all our providers and thus can't be part of the core module (if we want to not force some providers as dependencies on users of core) | 1.0 | samples: make samples its own module - We currently have `samples/appengine` as a separate module.
We should instead make `samples/` a module, with all the samples just commands within that module. This will be needed as part of #886, since samples import all our providers and thus can't be part of the core module (if we want to not force some providers as dependencies on users of core) | process | samples make samples its own module we currently have samples appengine as a separate module we should instead make samples a module with all the samples just commands within that module this will be needed as part of since samples import all our providers and thus can t be part of the core module if we want to not force some providers as dependencies on users of core | 1 |
427,301 | 12,393,963,760 | IssuesEvent | 2020-05-20 16:11:53 | googleapis/elixir-google-api | https://api.github.com/repos/googleapis/elixir-google-api | closed | Synthesis failed for OSConfig | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate OSConfig. :broken_heart:
Here's the output from running `synth.py`:
```
: failed to remove deps/parse_trans/ebin/parse_trans.app: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_mod.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_codegen.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/ct_expand.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/exprecs.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_pp.beam: Permission denied
warning: failed to remove deps/parse_trans/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/parse_trans/hex_metadata.config: Permission denied
warning: failed to remove deps/parse_trans/README.md: Permission denied
warning: failed to remove deps/parse_trans/rebar.config: Permission denied
warning: failed to remove deps/parse_trans/include/codegen.hrl: Permission denied
warning: failed to remove deps/parse_trans/include/exprecs.hrl: Permission denied
warning: failed to remove deps/parse_trans/.fetch: Permission denied
warning: failed to remove deps/parse_trans/.hex: Permission denied
warning: failed to remove deps/idna/LICENSE: Permission denied
warning: failed to remove deps/idna/rebar.lock: Permission denied
warning: failed to remove deps/idna/src/idna.erl: Permission denied
warning: failed to remove deps/idna/src/idna_logger.hrl: Permission denied
warning: failed to remove deps/idna/src/idna_ucs.erl: Permission denied
warning: failed to remove deps/idna/src/punycode.erl: Permission denied
warning: failed to remove deps/idna/src/idna_table.erl: Permission denied
warning: failed to remove deps/idna/src/idna_context.erl: Permission denied
warning: failed to remove deps/idna/src/idna.app.src: Permission denied
warning: failed to remove deps/idna/src/idna_mapping.erl: Permission denied
warning: failed to remove deps/idna/src/idna_data.erl: Permission denied
warning: failed to remove deps/idna/src/idna_bidi.erl: Permission denied
warning: failed to remove deps/idna/ebin/idna_mapping.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_context.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_bidi.beam: Permission denied
warning: failed to remove deps/idna/ebin/punycode.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_table.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_data.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_ucs.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna.app: Permission denied
warning: failed to remove deps/idna/ebin/idna.beam: Permission denied
warning: failed to remove deps/idna/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/idna/hex_metadata.config: Permission denied
warning: failed to remove deps/idna/README.md: Permission denied
warning: failed to remove deps/idna/rebar.config: Permission denied
warning: failed to remove deps/idna/.fetch: Permission denied
warning: failed to remove deps/idna/rebar.config.script: Permission denied
warning: failed to remove deps/idna/.hex: Permission denied
warning: failed to remove deps/hackney/MAINTAINERS: Permission denied
warning: failed to remove deps/hackney/LICENSE: Permission denied
warning: failed to remove deps/hackney/rebar.lock: Permission denied
warning: failed to remove deps/hackney/src/hackney_ssl.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_response.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_cookie.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_url.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.app.src: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool_handler.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_trace.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_multipart.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers_new.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_util.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_socks5.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_request.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_app.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_internal.hrl: Permission denied
warning: failed to remove deps/hackney/src/hackney_date.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_manager.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_bstr.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_sup.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_local_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_stream.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_metrics.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_methods.hrl: Permission denied
warning: failed to remove deps/hackney/NOTICE: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_trace.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool_handler.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_url.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_manager.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_metrics.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_stream.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_sup.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_multipart.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_socks5.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_app.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_response.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.app: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers_new.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_cookie.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_request.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_util.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_date.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_ssl.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_bstr.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_tcp.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_local_tcp.beam: Permission denied
warning: failed to remove deps/hackney/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/hackney/hex_metadata.config: Permission denied
warning: failed to remove deps/hackney/README.md: Permission denied
warning: failed to remove deps/hackney/rebar.config: Permission denied
warning: failed to remove deps/hackney/include/hackney.hrl: Permission denied
warning: failed to remove deps/hackney/include/hackney_lib.hrl: Permission denied
warning: failed to remove deps/hackney/.fetch: Permission denied
warning: failed to remove deps/hackney/.hex: Permission denied
warning: failed to remove deps/hackney/NEWS.md: Permission denied
Removing __pycache__/
Removing specifications/gdd/OSConfig-v1.json
Traceback (most recent call last):
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 559, in _inner_main
sys.exit(EXIT_CODE_SKIPPED)
SystemExit: 28
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 611, in _inner_main
executor.check_call(["git", "clean", "-fdx"], cwd=working_repo_path)
File "/tmpfs/src/github/synthtool/autosynth/executor.py", line 29, in check_call
subprocess.check_call(command, **args)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'clean', '-fdx']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge/c19bc74d-4d42-4305-94c3-323bf90f3ebc).
| 1.0 | Synthesis failed for OSConfig - Hello! Autosynth couldn't regenerate OSConfig. :broken_heart:
Here's the output from running `synth.py`:
```
: failed to remove deps/parse_trans/ebin/parse_trans.app: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_mod.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_codegen.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/ct_expand.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/exprecs.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_pp.beam: Permission denied
warning: failed to remove deps/parse_trans/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/parse_trans/hex_metadata.config: Permission denied
warning: failed to remove deps/parse_trans/README.md: Permission denied
warning: failed to remove deps/parse_trans/rebar.config: Permission denied
warning: failed to remove deps/parse_trans/include/codegen.hrl: Permission denied
warning: failed to remove deps/parse_trans/include/exprecs.hrl: Permission denied
warning: failed to remove deps/parse_trans/.fetch: Permission denied
warning: failed to remove deps/parse_trans/.hex: Permission denied
warning: failed to remove deps/idna/LICENSE: Permission denied
warning: failed to remove deps/idna/rebar.lock: Permission denied
warning: failed to remove deps/idna/src/idna.erl: Permission denied
warning: failed to remove deps/idna/src/idna_logger.hrl: Permission denied
warning: failed to remove deps/idna/src/idna_ucs.erl: Permission denied
warning: failed to remove deps/idna/src/punycode.erl: Permission denied
warning: failed to remove deps/idna/src/idna_table.erl: Permission denied
warning: failed to remove deps/idna/src/idna_context.erl: Permission denied
warning: failed to remove deps/idna/src/idna.app.src: Permission denied
warning: failed to remove deps/idna/src/idna_mapping.erl: Permission denied
warning: failed to remove deps/idna/src/idna_data.erl: Permission denied
warning: failed to remove deps/idna/src/idna_bidi.erl: Permission denied
warning: failed to remove deps/idna/ebin/idna_mapping.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_context.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_bidi.beam: Permission denied
warning: failed to remove deps/idna/ebin/punycode.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_table.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_data.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_ucs.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna.app: Permission denied
warning: failed to remove deps/idna/ebin/idna.beam: Permission denied
warning: failed to remove deps/idna/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/idna/hex_metadata.config: Permission denied
warning: failed to remove deps/idna/README.md: Permission denied
warning: failed to remove deps/idna/rebar.config: Permission denied
warning: failed to remove deps/idna/.fetch: Permission denied
warning: failed to remove deps/idna/rebar.config.script: Permission denied
warning: failed to remove deps/idna/.hex: Permission denied
warning: failed to remove deps/hackney/MAINTAINERS: Permission denied
warning: failed to remove deps/hackney/LICENSE: Permission denied
warning: failed to remove deps/hackney/rebar.lock: Permission denied
warning: failed to remove deps/hackney/src/hackney_ssl.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_response.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_cookie.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_url.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.app.src: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool_handler.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_trace.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_multipart.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers_new.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_util.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_socks5.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_request.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_app.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_internal.hrl: Permission denied
warning: failed to remove deps/hackney/src/hackney_date.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_manager.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_bstr.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_sup.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_local_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_stream.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_metrics.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_methods.hrl: Permission denied
warning: failed to remove deps/hackney/NOTICE: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_trace.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool_handler.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_url.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_manager.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_metrics.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_stream.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_sup.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_multipart.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_socks5.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_app.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_response.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.app: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers_new.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_cookie.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_request.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_util.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_date.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_ssl.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_bstr.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_tcp.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_local_tcp.beam: Permission denied
warning: failed to remove deps/hackney/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/hackney/hex_metadata.config: Permission denied
warning: failed to remove deps/hackney/README.md: Permission denied
warning: failed to remove deps/hackney/rebar.config: Permission denied
warning: failed to remove deps/hackney/include/hackney.hrl: Permission denied
warning: failed to remove deps/hackney/include/hackney_lib.hrl: Permission denied
warning: failed to remove deps/hackney/.fetch: Permission denied
warning: failed to remove deps/hackney/.hex: Permission denied
warning: failed to remove deps/hackney/NEWS.md: Permission denied
Removing __pycache__/
Removing specifications/gdd/OSConfig-v1.json
Traceback (most recent call last):
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 559, in _inner_main
sys.exit(EXIT_CODE_SKIPPED)
SystemExit: 28
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 611, in _inner_main
executor.check_call(["git", "clean", "-fdx"], cwd=working_repo_path)
File "/tmpfs/src/github/synthtool/autosynth/executor.py", line 29, in check_call
subprocess.check_call(command, **args)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'clean', '-fdx']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge/c19bc74d-4d42-4305-94c3-323bf90f3ebc).
| non_process | synthesis failed for osconfig hello autosynth couldn t regenerate osconfig broken heart here s the output from running synth py failed to remove deps parse trans ebin parse trans app permission denied warning failed to remove deps parse trans ebin parse trans mod beam permission denied warning failed to remove deps parse trans ebin parse trans codegen beam permission denied warning failed to remove deps parse trans ebin ct expand beam permission denied warning failed to remove deps parse trans ebin parse trans beam permission denied warning failed to remove deps parse trans ebin exprecs beam permission denied warning failed to remove deps parse trans ebin parse trans pp beam permission denied warning failed to remove deps parse trans erlcinfo permission denied warning failed to remove deps parse trans hex metadata config permission denied warning failed to remove deps parse trans readme md permission denied warning failed to remove deps parse trans rebar config permission denied warning failed to remove deps parse trans include codegen hrl permission denied warning failed to remove deps parse trans include exprecs hrl permission denied warning failed to remove deps parse trans fetch permission denied warning failed to remove deps parse trans hex permission denied warning failed to remove deps idna license permission denied warning failed to remove deps idna rebar lock permission denied warning failed to remove deps idna src idna erl permission denied warning failed to remove deps idna src idna logger hrl permission denied warning failed to remove deps idna src idna ucs erl permission denied warning failed to remove deps idna src punycode erl permission denied warning failed to remove deps idna src idna table erl permission denied warning failed to remove deps idna src idna context erl permission denied warning failed to remove deps idna src idna app src permission denied warning failed to remove deps idna src idna mapping erl permission denied warning failed to remove deps idna src idna data erl permission denied warning failed to remove deps idna src idna bidi erl permission denied warning failed to remove deps idna ebin idna mapping beam permission denied warning failed to remove deps idna ebin idna context beam permission denied warning failed to remove deps idna ebin idna bidi beam permission denied warning failed to remove deps idna ebin punycode beam permission denied warning failed to remove deps idna ebin idna table beam permission denied warning failed to remove deps idna ebin idna data beam permission denied warning failed to remove deps idna ebin idna ucs beam permission denied warning failed to remove deps idna ebin idna app permission denied warning failed to remove deps idna ebin idna beam permission denied warning failed to remove deps idna erlcinfo permission denied warning failed to remove deps idna hex metadata config permission denied warning failed to remove deps idna readme md permission denied warning failed to remove deps idna rebar config permission denied warning failed to remove deps idna fetch permission denied warning failed to remove deps idna rebar config script permission denied warning failed to remove deps idna hex permission denied warning failed to remove deps hackney maintainers permission denied warning failed to remove deps hackney license permission denied warning failed to remove deps hackney rebar lock permission denied warning failed to remove deps hackney src hackney ssl erl permission denied warning failed to remove deps hackney 
src hackney response erl permission denied warning failed to remove deps hackney src hackney tcp erl permission denied warning failed to remove deps hackney src hackney http erl permission denied warning failed to remove deps hackney src hackney cookie erl permission denied warning failed to remove deps hackney src hackney url erl permission denied warning failed to remove deps hackney src hackney headers erl permission denied warning failed to remove deps hackney src hackney app src permission denied warning failed to remove deps hackney src hackney pool handler erl permission denied warning failed to remove deps hackney src hackney trace erl permission denied warning failed to remove deps hackney src hackney multipart erl permission denied warning failed to remove deps hackney src hackney headers new erl permission denied warning failed to remove deps hackney src hackney http connect erl permission denied warning failed to remove deps hackney src hackney util erl permission denied warning failed to remove deps hackney src hackney erl permission denied warning failed to remove deps hackney src hackney request erl permission denied warning failed to remove deps hackney src hackney app erl permission denied warning failed to remove deps hackney src hackney internal hrl permission denied warning failed to remove deps hackney src hackney date erl permission denied warning failed to remove deps hackney src hackney manager erl permission denied warning failed to remove deps hackney src hackney connect erl permission denied warning failed to remove deps hackney src hackney bstr erl permission denied warning failed to remove deps hackney src hackney sup erl permission denied warning failed to remove deps hackney src hackney erl permission denied warning failed to remove deps hackney src hackney local tcp erl permission denied warning failed to remove deps hackney src hackney stream erl permission denied warning failed to remove deps hackney src hackney pool erl permission denied warning failed to remove deps hackney src hackney metrics erl permission denied warning failed to remove deps hackney src hackney methods hrl permission denied warning failed to remove deps hackney notice permission denied warning failed to remove deps hackney ebin hackney pool beam permission denied warning failed to remove deps hackney ebin hackney trace beam permission denied warning failed to remove deps hackney ebin hackney pool handler beam permission denied warning failed to remove deps hackney ebin hackney beam permission denied warning failed to remove deps hackney ebin hackney headers beam permission denied warning failed to remove deps hackney ebin hackney url beam permission denied warning failed to remove deps hackney ebin hackney manager beam permission denied warning failed to remove deps hackney ebin hackney metrics beam permission denied warning failed to remove deps hackney ebin hackney stream beam permission denied warning failed to remove deps hackney ebin hackney sup beam permission denied warning failed to remove deps hackney ebin hackney multipart beam permission denied warning failed to remove deps hackney ebin hackney http beam permission denied warning failed to remove deps hackney ebin hackney beam permission denied warning failed to remove deps hackney ebin hackney app beam permission denied warning failed to remove deps hackney ebin hackney http connect beam permission denied warning failed to remove deps hackney ebin hackney response beam permission denied warning failed to remove deps 
hackney ebin hackney app permission denied warning failed to remove deps hackney ebin hackney headers new beam permission denied warning failed to remove deps hackney ebin hackney cookie beam permission denied warning failed to remove deps hackney ebin hackney request beam permission denied warning failed to remove deps hackney ebin hackney util beam permission denied warning failed to remove deps hackney ebin hackney connect beam permission denied warning failed to remove deps hackney ebin hackney date beam permission denied warning failed to remove deps hackney ebin hackney ssl beam permission denied warning failed to remove deps hackney ebin hackney bstr beam permission denied warning failed to remove deps hackney ebin hackney tcp beam permission denied warning failed to remove deps hackney ebin hackney local tcp beam permission denied warning failed to remove deps hackney erlcinfo permission denied warning failed to remove deps hackney hex metadata config permission denied warning failed to remove deps hackney readme md permission denied warning failed to remove deps hackney rebar config permission denied warning failed to remove deps hackney include hackney hrl permission denied warning failed to remove deps hackney include hackney lib hrl permission denied warning failed to remove deps hackney fetch permission denied warning failed to remove deps hackney hex permission denied warning failed to remove deps hackney news md permission denied removing pycache removing specifications gdd osconfig json traceback most recent call last file tmpfs src github synthtool autosynth synth py line in inner main sys exit exit code skipped systemexit during handling of the above exception another exception occurred traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main executor check call cwd working repo path file tmpfs src github synthtool autosynth executor py line in check call subprocess check call command args file home kbuilder pyenv versions lib subprocess py line in check call raise calledprocesserror retcode cmd subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log | 0 |
196,454 | 22,441,831,719 | IssuesEvent | 2022-06-21 02:13:55 | artsking/frameworks_base_10.0.0-r33 | https://api.github.com/repos/artsking/frameworks_base_10.0.0-r33 | reopened | CVE-2020-0401 (High) detected in baseandroid-10.0.0_r34 | security vulnerability | ## CVE-2020-0401 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r34</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/frameworks_base_10.0.0-r33/commit/5015614aa927c7fed4a79eac4f67e86fc8d25f62">5015614aa927c7fed4a79eac4f67e86fc8d25f62</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/services/core/java/com/android/server/pm/PackageManagerService.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In setInstallerPackageName of PackageManagerService.java, there is a missing permission check. This could lead to local escalation of privilege and granting spurious permissions with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-8.0 Android-8.1 Android-9 Android-10 Android-11Android ID: A-150857253
<p>Publish Date: 2020-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-0401>CVE-2020-0401</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://source.android.com/security/bulletin/2020-09-01">https://source.android.com/security/bulletin/2020-09-01</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: android-8.0.0_r50,android-8.1.0_r80,android-9.0.0_r60,android-10.0.0_r46</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-0401 (High) detected in baseandroid-10.0.0_r34 - ## CVE-2020-0401 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r34</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/frameworks_base_10.0.0-r33/commit/5015614aa927c7fed4a79eac4f67e86fc8d25f62">5015614aa927c7fed4a79eac4f67e86fc8d25f62</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/services/core/java/com/android/server/pm/PackageManagerService.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In setInstallerPackageName of PackageManagerService.java, there is a missing permission check. This could lead to local escalation of privilege and granting spurious permissions with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-8.0 Android-8.1 Android-9 Android-10 Android-11Android ID: A-150857253
<p>Publish Date: 2020-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-0401>CVE-2020-0401</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://source.android.com/security/bulletin/2020-09-01">https://source.android.com/security/bulletin/2020-09-01</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: android-8.0.0_r50,android-8.1.0_r80,android-9.0.0_r60,android-10.0.0_r46</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in baseandroid cve high severity vulnerability vulnerable library baseandroid android framework classes and services library home page a href found in head commit a href found in base branch master vulnerable source files services core java com android server pm packagemanagerservice java vulnerability details in setinstallerpackagename of packagemanagerservice java there is a missing permission check this could lead to local escalation of privilege and granting spurious permissions with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android android android android step up your open source security game with whitesource | 0 |
3,452 | 6,542,647,946 | IssuesEvent | 2017-09-02 10:26:27 | pwittchen/ReactiveNetwork | https://api.github.com/repos/pwittchen/ReactiveNetwork | opened | Relase 0.12.1 (RxJava2.x) | release process RxJava2.x | **Initial release notes**:
Fixed memory leak in `PreLollipopNetworkObservingStrategy` during disposing of an `Observable` - issue #219.
**Things to do**:
TBD. | 1.0 | Relase 0.12.1 (RxJava2.x) - **Initial release notes**:
Fixed memory leak in `PreLollipopNetworkObservingStrategy` during disposing of an `Observable` - issue #219.
**Things to do**:
TBD. | process | relase x initial release notes fixed memory leak in prelollipopnetworkobservingstrategy during disposing of an observable issue things to do tbd | 1 |
47,243 | 5,872,399,941 | IssuesEvent | 2017-05-15 11:24:01 | EenmaalAndermaal/EenmaalAndermaal | https://api.github.com/repos/EenmaalAndermaal/EenmaalAndermaal | closed | Zoekveld | area: koper prioriteit: 3 task tester: Wouter | # User story
Als ik koper wil ik een zoekoptie gebruiken om een specifiek voorwerp te zoeken
# geschatte tijd
0.5 uur
# Definition of done
- [ ] veld om te typen | 1.0 | Zoekveld - # User story
Als ik koper wil ik een zoekoptie gebruiken om een specifiek voorwerp te zoeken
# geschatte tijd
0.5 uur
# Definition of done
- [ ] veld om te typen | non_process | zoekveld user story als ik koper wil ik een zoekoptie gebruiken om een specifiek voorwerp te zoeken geschatte tijd uur definition of done veld om te typen | 0 |
10,750 | 13,542,562,738 | IssuesEvent | 2020-09-16 17:32:36 | googleapis/nodejs-storage | https://api.github.com/repos/googleapis/nodejs-storage | closed | Sample tests failing: "Permission denied on Cloud KMS key" | api: storage priority: p1 type: process | Example build: https://source.cloud.google.com/results/invocations/7ddfa1bd-9f06-4048-adbe-e9298d2f69cf/targets/cloud-devrel%2Fclient-libraries%2Fnodejs%2Fpresubmit%2Fgoogleapis%2Fnodejs-storage%2Fnode10%2Fsamples-test/log
```
{ Error: Permission denied on Cloud KMS key. Please ensure that your Cloud Storage service account has been authorized to use this key.
at new ApiError (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:59:15)
at Util.parseHttpRespBody (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:194:38)
at Util.handleResp (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:135:117)
at retryRequest (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:434:22)
at onResponse (/tmpfs/src/github/nodejs-storage/node_modules/retry-request/index.js:206:7)
at res.text.then.text (/tmpfs/src/github/nodejs-storage/node_modules/teeny-request/build/src/index.js:219:13)
at process._tickCallback (internal/process/next_tick.js:68:7)
code: 403,
errors:
[ { message:
'Permission denied on Cloud KMS key. Please ensure that your Cloud Storage service account has been authorized to use this key.',
domain: 'global',
reason: 'forbidden' } ]
```
@JustinBeckwith @jkwlui This seems to be failing on all PRs. It sounds like the service account needs more roles. | 1.0 | Sample tests failing: "Permission denied on Cloud KMS key" - Example build: https://source.cloud.google.com/results/invocations/7ddfa1bd-9f06-4048-adbe-e9298d2f69cf/targets/cloud-devrel%2Fclient-libraries%2Fnodejs%2Fpresubmit%2Fgoogleapis%2Fnodejs-storage%2Fnode10%2Fsamples-test/log
```
{ Error: Permission denied on Cloud KMS key. Please ensure that your Cloud Storage service account has been authorized to use this key.
at new ApiError (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:59:15)
at Util.parseHttpRespBody (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:194:38)
at Util.handleResp (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:135:117)
at retryRequest (/tmpfs/src/github/nodejs-storage/node_modules/@google-cloud/common/build/src/util.js:434:22)
at onResponse (/tmpfs/src/github/nodejs-storage/node_modules/retry-request/index.js:206:7)
at res.text.then.text (/tmpfs/src/github/nodejs-storage/node_modules/teeny-request/build/src/index.js:219:13)
at process._tickCallback (internal/process/next_tick.js:68:7)
code: 403,
errors:
[ { message:
'Permission denied on Cloud KMS key. Please ensure that your Cloud Storage service account has been authorized to use this key.',
domain: 'global',
reason: 'forbidden' } ]
```
@JustinBeckwith @jkwlui This seems to be failing on all PRs. It sounds like the service account needs more roles. | process | sample tests failing permission denied on cloud kms key example build error permission denied on cloud kms key please ensure that your cloud storage service account has been authorized to use this key at new apierror tmpfs src github nodejs storage node modules google cloud common build src util js at util parsehttprespbody tmpfs src github nodejs storage node modules google cloud common build src util js at util handleresp tmpfs src github nodejs storage node modules google cloud common build src util js at retryrequest tmpfs src github nodejs storage node modules google cloud common build src util js at onresponse tmpfs src github nodejs storage node modules retry request index js at res text then text tmpfs src github nodejs storage node modules teeny request build src index js at process tickcallback internal process next tick js code errors message permission denied on cloud kms key please ensure that your cloud storage service account has been authorized to use this key domain global reason forbidden justinbeckwith jkwlui this seems to be failing on all prs it sounds like the service account needs more roles | 1 |
818,856 | 30,708,455,602 | IssuesEvent | 2023-07-27 08:07:29 | vatesfr/xen-orchestra | https://api.github.com/repos/vatesfr/xen-orchestra | closed | Netbox plugin: synchronising migrated VMs deletes information instead of changing it | type: bug :bug: Priority 2: plan and do :green_circle: | **Describe the bug**
If you migrate a VM from one cluster to the other and then sync all changes to netbox, the plugin recreates the VM in Netbox (#6038) and **deletes** the "old" instance of the VM. **This also deletes every piece of information manually entered.**
**To Reproduce**
1. create a VM in XOA
2. sync to Netbox
3. migrate VM to other cluster/pool
4. sync both the old and new cluster/pool to Netbox
5. check changelog → see old duplicate VM was deleted.
**Expected behavior**
The VM should not be deleted but edited based on the UUID field when the migration is done, so the parent pool (or, as it's called in Netbox, cluster) is changed to be the new pool. I expect the UUID to actually be unique and Netbox to reflect the actual state XOA is in when it's synchronised.
| 1.0 | Netbox plugin: synchronising migrated VMs deletes information instead of changing it - **Describe the bug**
If you migrate a VM from one cluster to the other and then sync all changes to netbox, the plugin recreates the VM in Netbox (#6038) and **deletes** the "old" instance of the VM. **This also deletes every piece of information manually entered.**
**To Reproduce**
1. create a VM in XOA
2. sync to Netbox
3. migrate VM to other cluster/pool
4. sync both the old and new cluster/pool to Netbox
5. check changelog → see old duplicate VM was deleted.
**Expected behavior**
The VM should not be deleted but edited based on the UUID field when the migration is done, so the parent pool (or, as it's called in Netbox, cluster) is changed to be the new pool. I expect the UUID to actually be unique and Netbox to reflect the actual state XOA is in when it's synchronised.
| non_process | netbox plugin synchronising migrated vms deletes information instead of changing it describe the bug if you migrate a vm from one cluster to the other and then sync all changes to netbox the plugin recreates the vm in netbox and deletes the old instance of the vm this also deletes every piece of information manually entered to reproduce create a vm in xoa sync to netbox migrate vm to other cluster pool sync both the old and new cluster pool to netbox check changelog see old duplicate vm was deleted expected behavior the vm should not be deleted but edited based on the uuid field when the migration is done so the parent pool or as it s called in netbox cluster is changed to be the new pool i expect the uuid to actually be unique and netbox to reflect the actual state xoa is in when it s synchronised | 0
19,536 | 25,849,617,703 | IssuesEvent | 2022-12-13 09:29:04 | medic/cht-core | https://api.github.com/repos/medic/cht-core | closed | Release 4.1.0 | Type: Internal process | # Planning - Product Manager
- [x] Create a GH Milestone for the release. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining.
- [x] Add all the issues to be worked on to the Milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes.
- [x] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo
- [x] Assign an engineer as Release Engineer for this release.
# Development - Release Engineer
When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [x] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [x] Write an update in the weekly [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) summarising development and acceptance testing progress and identifying any blockers (the [milestone-status](https://github.com/medic/support-scripts/tree/master/milestone-status) script can be used to get a breakdown of the issues). The release Engineer is to update this every week until the version is released.
# Releasing - Release Engineer
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [x] Create a new release branch from `master` named `<major>.<minor>.x` in `cht-core`. Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [x] Announce the start of release testing on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*Release testing has started for {{version}} of {{product}}*
To get a sneak peak at this upcoming release, you can install `<major>.<minor>.<patch>-beta.1` on your testing environment. We suggest you test your forms and workflows with this release candidate version and raise any issues that you experience. This helps to to discover any potential regressions that wouldn't otherwise be caught during release testing.
Keep an eye on the forum for the release announcement in the next couple of weeks!
```
- [x] Add release notes to the [Core Framework Releases](https://docs.communityhealthtoolkit.org/core/releases/) page:
- [x] Create a new document for the release in the [releases folder](https://github.com/medic/cht-docs/tree/main/content/en/core/releases).
- [x] Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions.
- [x] Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes) to export the issues into our release note format.
- [x] Manually document any known migration steps and known issues.
- [x] Provide description, screenshots, videos, and anything else to help communicate particularly important changes.
- [x] Document any required or recommended upgrades to our other products (eg: cht-conf, cht-gateway, cht-android).
- [x] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions) and update the EOL date and status of previous releases. Also add a link in the `Release Notes` section to the new release page.
- [x] Assign the PR to:
- The Director of Technology
- An SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient
- [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds_4/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [ ] Execute the scalability testing suite on the final build and download the scalability results on S3 at medic-e2e/scalability/$TAG_NAME. Add the release `.jtl` file to `cht-core/tests/scalability/previous_results`. More info in the [scalability documentation](https://github.com/medic/cht-core/blob/master/tests/scalability/README.md).
- [ ] Upgrade the `demo-cht.dev` instance to this version.
- [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*We're excited to announce the release of {{version}} of {{product}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the [release notes]({{url}}) for full details.
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our [software support documentation](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions).
Check out our [roadmap](https://github.com/orgs/medic/projects/112) to see what we're working on next.
```
- [ ] Add one last update to the [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) and use this meeting to lead an internal release retrospective covering what went well and areas to improve for next time.
- [ ] Mark this issue "done" and close the Milestone.
| 1.0 | Release 4.1.0 - # Planning - Product Manager
- [x] Create a GH Milestone for the release. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining.
- [x] Add all the issues to be worked on to the Milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes.
- [x] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo
- [x] Assign an engineer as Release Engineer for this release.
# Development - Release Engineer
When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [x] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [x] Write an update in the weekly [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) summarising development and acceptance testing progress and identifying any blockers (the [milestone-status](https://github.com/medic/support-scripts/tree/master/milestone-status) script can be used to get a breakdown of the issues). The release Engineer is to update this every week until the version is released.
# Releasing - Release Engineer
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [x] Create a new release branch from `master` named `<major>.<minor>.x` in `cht-core`. Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [x] Announce the start of release testing on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*Release testing has started for {{version}} of {{product}}*
To get a sneak peak at this upcoming release, you can install `<major>.<minor>.<patch>-beta.1` on your testing environment. We suggest you test your forms and workflows with this release candidate version and raise any issues that you experience. This helps to to discover any potential regressions that wouldn't otherwise be caught during release testing.
Keep an eye on the forum for the release announcement in the next couple of weeks!
```
- [x] Add release notes to the [Core Framework Releases](https://docs.communityhealthtoolkit.org/core/releases/) page:
- [x] Create a new document for the release in the [releases folder](https://github.com/medic/cht-docs/tree/main/content/en/core/releases).
- [x] Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions.
- [x] Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes) to export the issues into our release note format.
- [x] Manually document any known migration steps and known issues.
- [x] Provide description, screenshots, videos, and anything else to help communicate particularly important changes.
- [x] Document any required or recommended upgrades to our other products (eg: cht-conf, cht-gateway, cht-android).
- [x] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions) and update the EOL date and status of previous releases. Also add a link in the `Release Notes` section to the new release page.
- [x] Assign the PR to:
- The Director of Technology
- An SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient
- [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds_4/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [ ] Execute the scalability testing suite on the final build and download the scalability results on S3 at medic-e2e/scalability/$TAG_NAME. Add the release `.jtl` file to `cht-core/tests/scalability/previous_results`. More info in the [scalability documentation](https://github.com/medic/cht-core/blob/master/tests/scalability/README.md).
- [ ] Upgrade the `demo-cht.dev` instance to this version.
- [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*We're excited to announce the release of {{version}} of {{product}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the [release notes]({{url}}) for full details.
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our [software support documentation](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions).
Check out our [roadmap](https://github.com/orgs/medic/projects/112) to see what we're working on next.
```
- [ ] Add one last update to the [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) and use this meeting to lead an internal release retrospective covering what went well and areas to improve for next time.
- [ ] Mark this issue "done" and close the Milestone.
| process | release planning product manager create a gh milestone for the release we use so if there are breaking changes increment the major otherwise if there are new features increment the minor otherwise increment the service pack breaking changes in our case relate to updated software requirements egs couchdb node minimum browser versions broken backwards compatibility in an api or a major visual update that requires user retraining add all the issues to be worked on to the milestone ideally each minor release will have one or two features a handful of improvements and plenty of bug fixes identify any features and improvements in the release that need end user documentation beyond eng team documentation improvements and create corresponding issues in the cht docs repo assign an engineer as release engineer for this release development release engineer when development is ready to begin one of the engineers should be nominated as a release engineer they will be responsible for making sure the following tasks are completed though not necessarily completing them set the version number in package json and package lock json and submit a pr the easiest way to do this is to use npm no git tag version version raise a new issue called update dependencies for with a description that links to this should be done early in the release cycle so find a volunteer to take this on and assign it to them write an update in the weekly summarising development and acceptance testing progress and identifying any blockers the script can be used to get a breakdown of the issues the release engineer is to update this every week until the version is released releasing release engineer once all issues have passed acceptance testing and have been merged into master release testing can begin create a new release branch from master named x in cht core post a message to development using this template core devs i ve just created the x release branch please be aware that any further changes intended for this release will have to be merged to master then backported thanks build a beta named beta by pushing a git tag and when ci completes successfully notify the qa team that it s ready for release testing announce the start of release testing on the under the product releases category using this template release testing has started for version of product to get a sneak peak at this upcoming release you can install beta on your testing environment we suggest you test your forms and workflows with this release candidate version and raise any issues that you experience this helps to to discover any potential regressions that wouldn t otherwise be caught during release testing keep an eye on the forum for the release announcement in the next couple of weeks add release notes to the page create a new document for the release in the ensure all issues are in the gh milestone that they re correctly labelled in particular they have the right type ui ux if they change the ui and breaking change if appropriate and have human readable descriptions use to export the issues into our release note format manually document any known migration steps and known issues provide description screenshots videos and anything else to help communicate particularly important changes document any required or recommended upgrades to our other products eg cht conf cht gateway cht android add the release to the and update the eol date and status of previous releases also add a link in the release notes section to the new release page assign the pr to 
the director of technology an sre to review and confirm the documentation on upgrade instructions and breaking changes is sufficient until release testing passes make sure regressions are fixed in master cherry pick them into the release branch and release another beta create a release in github from the release branch so it shows up under the with the naming convention this will create the git tag automatically link to the release notes in the description of the release confirm the release build completes successfully and the new release is available on the make sure that the document has new entry with id medic medic execute the scalability testing suite on the final build and download the scalability results on at medic scalability tag name add the release jtl file to cht core tests scalability previous results more info in the upgrade the demo cht dev instance to this version announce the release on the under the product releases category using this template we re excited to announce the release of version of product new features include key features we ve also implemented loads of other improvements and fixed a heap of bugs read the url for full details following our support policy versions versions are no longer supported projects running these versions should start planning to upgrade in the near future for more details read our check out our to see what we re working on next add one last update to the and use this meeting to lead an internal release retrospective covering what went well and areas to improve for next time mark this issue done and close the milestone | 1 |
22,758 | 32,079,374,220 | IssuesEvent | 2023-09-25 13:04:13 | MuttiD/ElectroShop | https://api.github.com/repos/MuttiD/ElectroShop | opened | [SITE OWNER STORY]: <Create Checkout Page and Payment Processing> | payment processing site owner story | As a **site owner**, I want to streamline the checkout process to minimize cart abandonment and maximize conversions.
| 1.0 | [SITE OWNER STORY]: <Create Checkout Page and Payment Processing> - As a **site owner**, I want to streamline the checkout process to minimize cart abandonment and maximize conversions.
| process | as a site owner i want to streamline the checkout process to minimize cart abandonment and maximize conversions | 1 |
139,423 | 11,267,920,674 | IssuesEvent | 2020-01-14 04:08:33 | prysmaticlabs/prysm | https://api.github.com/repos/prysmaticlabs/prysm | closed | Efficient calculation of proposer index | Testnet | Since the v0.9 spec, proposer selection changed based on per slot shuffling of the active validator, it's no longer based on committee selection. This invalidated the usage of committee cache to compute proposer.
Given we cache ordered active validator indices and its shuffling each time. This is not a bad problem to solve. We could construct a cache of proposer indices at the start of each epoch. This could be a list of length `SLOTS_PER_EPOCH` that contains the proposer index for each slot at the list index `slot % SLOTS_PER_EPOCH`
This is feasible because the selection of proposers depends on the following:
* Active validators is fixed for the duration of the epoch
* Seed which we know an epoch in advance
* The effective balance of the active validators, they only updated during epoch transition and are therefore stable for the entirety of the epoch.
Note, we can not compute proposer index full epoch in advance because of the dependence on the effective balances | 1.0 | Efficient calculation of proposer index - Since the v0.9 spec, proposer selection changed based on per slot shuffling of the active validator, it's no longer based on committee selection. This invalidated the usage of committee cache to compute proposer.
Given we cache ordered active validator indices and its shuffling each time. This is not a bad problem to solve. We could construct a cache of proposer indices at the start of each epoch. This could be a list of length `SLOTS_PER_EPOCH` that contains the proposer index for each slot at the list index `slot % SLOTS_PER_EPOCH`
This is feasible because the selection of proposers depends on the following:
* Active validators is fixed for the duration of the epoch
* Seed which we know an epoch in advance
* The effective balance of the active validators, they only updated during epoch transition and are therefore stable for the entirety of the epoch.
Note, we can not compute proposer index full epoch in advance because of the dependence on the effective balances | non_process | efficient calculation of proposer index since the spec proposer selection changed based on per slot shuffling of the active validator it s no longer based on committee selection this invalidated the usage of committee cache to compute proposer given we cache ordered active validator indices and its shuffling each time this is not a bad problem to solve we could construct a cache of proposer indices at the start of each epoch this could be a list of length slots per epoch that contains the proposer index for each slot at the list index slot slots per epoch this is feasible because the selection of proposers depends on the following active validators is fixed for the duration of the epoch seed which we know an epoch in advance the effective balance of the active validators they only updated during epoch transition and are therefore stable for the entirety of the epoch note we can not compute proposer index full epoch in advance because of the dependence on the effective balances | 0 |
3,590 | 6,622,075,508 | IssuesEvent | 2017-09-21 21:45:21 | cptechinc/soft-6-ecomm | https://api.github.com/repos/cptechinc/soft-6-ecomm | closed | Contact Page Google Maps | Processwire | Have the source url come from processwire that way you won't have to change it when we port this to another customer or when if the same customer moves | 1.0 | Contact Page Google Maps - Have the source url come from processwire that way you won't have to change it when we port this to another customer or when if the same customer moves | process | contact page google maps have the source url come from processwire that way you won t have to change it when we port this to another customer or when if the same customer moves | 1 |
321,986 | 27,570,571,029 | IssuesEvent | 2023-03-08 08:58:01 | riparias/early-warning-webapp | https://api.github.com/repos/riparias/early-warning-webapp | closed | Find a way to let users know about changes/new features | enhancement in progress missing tests | # Use case 1
New species are added to the list.
If I have an alert with Species = All, no problem, I get notified about new obs within my alert. But if I specified a subset of species, it's not easy for me to get to know about the addition of other species which I could be interested as well.
# Use case 2
A species has been replaced with a new one.
This happened yesterday. It could happen again, although not likely.
If I have an alert for that species only, or a subset of species including it, I will not know why I don't get any observation anymore. I would first think it's just that no obs are made, but that's not the case.
# Use case 3
Addition of extra features, improvements of the user experience.
New features and improvements are coming up regularly, that's great. However, they are not communicated at the moment.
# Possible solutions
Some solutions popping up in my mind:
- A News page with a chronological list of improvements/changes communicating to people our efforts to follow up bugs, feature requests, etc.. This could be enough for Use case 3. Tweeting about it can help as well.
- Send mails to all users to inform them about addition of new species/features. This is more "disturbing" but I think is necessary for solving use cases 1 and 2. Notice that the use cases 1 and 2 will occur at year frequency or even less in the future.
@niconoe, @peterdesmet: what do you think about it? | 1.0 | Find a way to let users know about changes/new features - # Use case 1
New species are added to the list.
If I have an alert with Species = All, no problem, I get notified about new obs within my alert. But if I specified a subset of species, it's not easy for me to get to know about the addition of other species which I could be interested as well.
# Use case 2
A species has been replaced with a new one.
This happened yesterday. It could happen again, although not likely.
If I have an alert for that species only, or a subset of species including it, I will not know why I don't get any observation anymore. I would first think it's just that no obs are made, but that's not the case.
# Use case 3
Addition of extra features, improvements of the user experience.
New features and improvements are coming up regularly, that's great. However, they are not communicated at the moment.
# Possible solutions
Some solutions popping up in my mind:
- A News page with a chronological list of improvements/changes communicating to people our efforts to follow up bugs, feature requests, etc.. This could be enough for Use case 3. Tweeting about it can help as well.
- Send mails to all users to inform them about addition of new species/features. This is more "disturbing" but I think is necessary for solving use cases 1 and 2. Notice that the use cases 1 and 2 will occur at year frequency or even less in the future.
@niconoe, @peterdesmet: what do you think about it? | non_process | find a way to let users know about changes new features use case new species are added to the list if i have an alert with species all no problem i get notified about new obs within my alert but if i specified a subset of species it s not easy for me to get to know about the addition of other species which i could be interested as well use case a species has been replaced with a new one this happened yesterday it could happen again although not likely if i have an alert for that species only or a subset of species including it i will not know why i don t get any observation anymore i would first think it s just that no obs are made but that s not the case use case addition of extra features improvements of the user experience new features and improvements are coming up regularly that s great however they are not communicated at the moment possible solutions some solutions popping up in my mind a news page with a chronological list of improvements changes communicating to people our efforts to follow up bugs feature requests etc this could be enough for use case tweeting about it can help as well send mails to all users to inform them about addition of new species features this is more disturbing but i think is necessary for solving use cases and notice that the use cases and will occur at year frequency or even less in the future niconoe peterdesmet what do you think about it | 0 |
26,221 | 11,276,321,874 | IssuesEvent | 2020-01-14 22:54:18 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Spaces API in Dev Tools fails to successfully POST or DELETE space | Team:Security triage_needed | **Kibana version:**
7.5.1 in Elasticsearch Service
**Elasticsearch version:**
7.5.1
**Server OS version:**
Elasticsearch Service
**Browser version:**
Chrome
**Browser OS version:**
Mac
**Original install method (e.g. download page, yum, from source, etc.):**
Elasticsearch Service
**Describe the bug:**
Spaces deployed with the Spaces API are failing to load correctly into Kibana.
**Steps to reproduce:**
1. Go to Dev tools
2. Paste in a spaces json blob to deploy new space. I'm using the POST method with the default blob provided by the Create Spaces API page https://www.elastic.co/guide/en/kibana/current/spaces-api-post.html
3. API returns successful response, however, the space fails to show up in Kibana.
4. Using GET /api/spaces/space/ shows successful POST, but when toggling spaces using the space avatar, only the default space shows up
5. Checking the role, this role has permission to view this new space, even though technically the space does not exist in Kibana.
6. When I try: DELETE /api/spaces/space/space-name the Dev Tool responds stating:
{
"error": "no handler found for uri [/api/spaces/space/space-name?pretty=true] and method [DELETE]"
}
7. Why can I deploy a space and update it, but I can't delete and it doesn't technically exist in the Kibana GUI?
**Expected behavior:**
I expect to deploy a new space using a POST method, configure the role to access said space, and be able to use. This is not the case. I should also be able to DELETE /api/spaces/space/space-name, but that is not the case.
**Screenshots (if relevant):**
**Errors in browser console (if relevant):**
**Provide logs and/or server output (if relevant):**
**Any additional context:**
| True | Spaces API in Dev Tools fails to successfully POST or DELETE space - **Kibana version:**
7.5.1 in Elasticsearch Service
**Elasticsearch version:**
7.5.1
**Server OS version:**
Elasticsearch Service
**Browser version:**
Chrome
**Browser OS version:**
Mac
**Original install method (e.g. download page, yum, from source, etc.):**
Elasticsearch Service
**Describe the bug:**
Spaces deployed with the Spaces API are failing to load correctly into Kibana.
**Steps to reproduce:**
1. Go to Dev tools
2. Paste in a spaces json blob to deploy new space. I'm using the POST method with the default blob provided by the Create Spaces API page https://www.elastic.co/guide/en/kibana/current/spaces-api-post.html
3. API returns successful response, however, the space fails to show up in Kibana.
4. Using GET /api/spaces/space/ shows successful POST, but when toggling spaces using the space avatar, only the default space shows up
5. Checking the role, this role has permission to view this new space, even though technically the space does not exist in Kibana.
6. When I try: DELETE /api/spaces/space/space-name the Dev Tool responds stating:
{
"error": "no handler found for uri [/api/spaces/space/space-name?pretty=true] and method [DELETE]"
}
7. Why can I deploy a space and update it, but I can't delete and it doesn't technically exist in the Kibana GUI?
**Expected behavior:**
I expect to deploy a new space using a POST method, configure the role to access said space, and be able to use. This is not the case. I should also be able to DELETE /api/spaces/space/space-name, but that is not the case.
**Screenshots (if relevant):**
**Errors in browser console (if relevant):**
**Provide logs and/or server output (if relevant):**
**Any additional context:**
| non_process | spaces api in dev tools fails to successfully post or delete space kibana version in elasticsearch service elasticsearch version server os version elasticsearch service browser version chrome browser os version mac original install method e g download page yum from source etc elasticsearch service describe the bug spaces deployed with the spaces api are failing to load correctly into kibana steps to reproduce go to dev tools paste in a spaces json blob to deploy new space i m using the post method with the default blob provided by the create spaces api page api returns successful response however the space fails to show up in kibana using get api spaces space shows successful post but when toggling spaces using the space avatar only the default space shows up checking the role this role has permission to view this new space even though technically the space does not exist in kibana when i try delete api spaces space space name the dev tool responds stating error no handler found for uri and method why can i deploy a space and update it but i can t delete and it doesn t technically exist in the kibana gui expected behavior i expect to deploy a new space using a post method configure the role to access said space and be able to use this is not the case i should also be able to delete api spaces space space name but that is not the case screenshots if relevant errors in browser console if relevant provide logs and or server output if relevant any additional context | 0 |
12,554 | 14,977,195,861 | IssuesEvent | 2021-01-28 09:10:35 | Jeffail/benthos | https://api.github.com/repos/Jeffail/benthos | opened | Add multiple import path support to the protobuf processor | enhancement processors | Currently you can only specify a single import path in the `protobuf` processor, I don't think there's any particular reason why we can't expand that to allow multiple import paths. | 1.0 | Add multiple import path support to the protobuf processor - Currently you can only specify a single import path in the `protobuf` processor, I don't think there's any particular reason why we can't expand that to allow multiple import paths. | process | add multiple import path support to the protobuf processor currently you can only specify a single import path in the protobuf processor i don t think there s any particular reason why we can t expand that to allow multiple import paths | 1 |
693,820 | 23,791,421,890 | IssuesEvent | 2022-09-02 14:52:24 | CredentialEngine/CredentialRegistry | https://api.github.com/repos/CredentialEngine/CredentialRegistry | closed | The Florida staging registry (FDEO) doesn't seem to be operational | High Priority | @science @excelsior @edgarf
We thought the Florida staging registry (FDEO) was set up in June.
I did some checks today and the community is not recognized.


The production schemas endpoint only shows the ce-registry
https://credentialengineregistry.org/schemas/info

We need this to be operational to prove to Florida that it is ready to use. Or am I missing something?
| 1.0 | The Florida staging registry (FDEO) doesn't seem to be operational - @science @excelsior @edgarf
We thought the Florida staging registry (FDEO) was set up in June.
I did some checks today and the community is not recognized.


The production schemas endpoint only shows the ce-registry
https://credentialengineregistry.org/schemas/info

We need this to be operational to prove to Florida that it is ready to use. Or am I missing something?
| non_process | the florida staging registry fdeo doesn t seem to be operational science excelsior edgarf we thought the florida staging registry fdeo was set up in june i did some checks today and the community is not recognized the production schemas endpoint only shows the ce registry we need this to be operational to prove to florida that it is ready to use or i am missing something | 0 |
21,650 | 30,084,105,478 | IssuesEvent | 2023-06-29 07:16:08 | ovh/public-cloud-roadmap | https://api.github.com/repos/ovh/public-cloud-roadmap | closed | Poland (WAW1) region | Geo Data Processing | Making the Data Processing Service available in our Polish Datacenter.
Notes/Existing workaround :
The featureset and pricing model will be the same as existing regions.
Polish customers can already benefit from the service in other regions. | 1.0 | Poland (WAW1) region - Making the Data Processing Service available in our Polish Datacenter.
Notes/Existing workaround :
The featureset and pricing model will be the same as existing regions.
Polish customer can already benefit from the service in other regions. | process | poland region making the data processing service available in our polish datacenter notes existing workaround the featureset and pricing model will be the same as existing regions polish customer can already benefit from the service in other regions | 1 |
788,948 | 27,774,474,549 | IssuesEvent | 2023-03-16 16:19:57 | AY2223S2-CS2113-T12-4/tp | https://api.github.com/repos/AY2223S2-CS2113-T12-4/tp | opened | [Task] UserGuide | type.Task priority.Low | **Describe the Task**
Create preliminary user guide for v1.0
**To Reproduce**
Steps to reproduce the behavior:
1. Access http://AY2223S2-CS2113-T12-4.github.io/tp/
2. Select user guide
**Expected behavior**
User guide should be shown to instruct new users on using the application during v1.0
| 1.0 | [Task] UserGuide - **Describe the Task**
Create preliminary user guide for v1.0
**To Reproduce**
Steps to reproduce the behavior:
1. Access http://AY2223S2-CS2113-T12-4.github.io/tp/
2. Select user guide
**Expected behavior**
User guide should be shown to instruct new users on using the application during v1.0
| non_process | userguide describe the task create preliminary user guide for to reproduce steps to reproduce the behavior access select user guide expected behavior user guide should me shown to instruct new users on using the application during | 0 |
1,876 | 11,014,648,693 | IssuesEvent | 2019-12-04 23:18:06 | dfernandezm/moneycol | https://api.github.com/repos/dfernandezm/moneycol | closed | Terraform automation | automation myiac | - [ ] Setup buckets in GCP object storage for terraform state through myiac (gcloud client?)
- [ ] Backup terraform state file in GCP cloud storage
- [ ] Myiac: Deploy the backend skeleton with GraphQL endpoint/status endpoint (it must have been Dockerized first) | 1.0 | Terraform automation - - [ ] Setup buckets in GCP object storage for terraform state through myiac (gcloud client?)
- [ ] Backup terraform state file in GCP cloud storage
- [ ] Myiac: Deploy the backend skeleton with GraphQL endpoint/status endpoint (it must have been Dockerized first) | non_process | terraform automation setup buckets in gcp object storage for terraform state through myiac gcloud client backup terraform state file in gcp cloud storage myiac deploy the backend skeleton with graphql endpoint status endpoint it must have been dockerized first | 0 |
106,739 | 23,275,719,830 | IssuesEvent | 2022-08-05 06:57:23 | VirtusLab/akka-serialization-helper | https://api.github.com/repos/VirtusLab/akka-serialization-helper | closed | Add real dumping of persistence-schema to `examples/` | code quality docs | As for now, compiling the example application: `examples/akka-cluster-app` with `dump-persistence-schema-plugin` enabled does not produce any dump. We should add a working example for dumping persistence schema.
Still to be decided, which option to choose:
a) Add persistence schema to `examples/akka-cluster-app` (modify app's logic)
b) Leave `examples/akka-cluster-app` unchanged and add another example app under `examples` - just for this need | 1.0 | Add real dumping of persistence-schema to `examples/` - As for now, compiling the example application: `examples/akka-cluster-app` with `dump-persistence-schema-plugin` enabled does not produce any dump. We should add a working example for dumping persistence schema.
Still to be decided, which option to choose:
a) Add persistence schema to `examples/akka-cluster-app` (modify app's logic)
b) Leave `examples/akka-cluster-app` unchanged and add another example app under `examples` - just for this need | non_process | add real dumping of persistence schema to examples as for now compiling the example application examples akka cluster app with dump persistence schema plugin enabled does not produce any dump we should add a working example for dumping persistence schema still to be decided which option to choose a add persistence schema to examples akka cluster app modify app s logic b leave examples akka cluster app unchanged and add another example app under examples just for this need | 0 |
11,634 | 14,493,529,692 | IssuesEvent | 2020-12-11 08:39:24 | panther-labs/panther | https://api.github.com/repos/panther-labs/panther | closed | It should be possible to hide from the Data Explorer view tables of Custom schemas that are deleted | epic p1 team:data processing | ### Description
Right now, when deleting a custom schema, its tables remain visible in the Data Explorer view.
### Designs
TBD
### Acceptance Criteria
- Users can have the option to hide from the Data explorer view the Custom tables that don't have a corresponding schema defined
- The contents of the tables will still be searchable through Indicator search
- Alerts that have fired for those tables will still have access to that data. | 1.0 | It should be possible to hide from the Data Explorer view tables of Custom schemas that are deleted - ### Description
Right now, when deleting a custom schema, its tables remain visible in the Data Explorer view.
### Designs
TBD
### Acceptance Criteria
- Users can have the option to hide from the Data explorer view the Custom tables that don't have a corresponding schema defined
- The contents of the tables will still be searchable through Indicator search
- Alerts that have fired for that tables will still have access to that data. | process | it should be possible to hide from the data explorer view tables of custom schemas that are deleted description right now when deleting a custom schema designs tbd acceptance criteria users can have the option to hide from the data explorer view the custom tables that don t have a corresponding schema defined the contents of the tables will still be searcheable through indicator search alerts that have fired for that tables will still have access to that data | 1 |
181,447 | 14,020,586,132 | IssuesEvent | 2020-10-29 19:55:04 | ISISScientificComputing/autoreduce | https://api.github.com/repos/ISISScientificComputing/autoreduce | opened | DataArchiveCreator prevents changes to settings.py | :key: Testing | Issue raised by: [developer]
### What?
This is pretty convoluted to explain so I will try to bullet point it
- `queue_processors/autoreduction_processor/test_settings.py` has 2 settings `ceph_directory` and `scripts_directory`. (As of right now there is also a comment above them that shows the settings used for the dev node and probably prod)
- In `test_end_to_end.py` the class `DataArchiveCreator` is used to setup a fake data archive for storing datafiles/scripts/etc.
- The `DataArchiveCreator` will only set up a directory that matches the current value in `test_settings.py`. This means 2 different things:
* If those settings are ever changed `test_end_to_end.py` will break, as the `DataArchiveCreator` is not using the setting but is hardcoded to create a directory matching the setting as it exists right now.
* The production and development builds would technically fail these 2 end to end test cases
- The docstring for `DataArchiveCreator` also claims to produce an archive identical to the ISIS data archive. It doesn't.
### Where?
Settings at `queue_processors/autoreduction_processor/test_settings.py`
Archive creator at `utils/data_archive/data_archive_creator.py`
tests at `systemtests/test_end_to_end.py`
### How?
A need to change the settings revealed this.
### Reproducible?
[Yes]
Change the setting and these 2 tests will fail.
### How to test the issue is resolved
The DataArchiveCreator should create a data archive based on the 2 settings from the settings file, not hard coded location. The end to end tests should then be able to pass on any environment (including dev and prod in theory)
| 1.0 | DataArchiveCreator prevents changes to settings.py - Issue raised by: [developer]
### What?
This is pretty convoluted to explain so I will try to bullet point it
- `queue_processors/autoreduction_processor/test_settings.py` has 2 settings `ceph_directory` and `scripts_directory`. (As of right now there is also a comment above them that shows the settings used for the dev node and probably prod)
- In `test_end_to_end.py` the class `DataArchiveCreator` is used to setup a fake data archive for storing datafiles/scripts/etc.
- The `DataArchiveCreator` will only set up a directory that matches the current value in `test_settings.py`. This means 2 different things:
* If those settings are ever changed `test_end_to_end.py` will break, as the `DataArchiveCreator` is not using the setting but is hardcoded to create a directory matching the setting as it exists right now.
* The production and development builds would technically fail these 2 end to end test cases
- The docstring for `DataArchiveCreator` also claims to produce an archive identical to the ISIS data archive. It doesn't.
### Where?
Settings at `queue_processors/autoreduction_processor/test_settings.py`
Archive creator at `utils/data_archive/data_archive_creator.py`
tests at `systemtests/test_end_to_end.py`
### How?
A need to change the settings revealed this.
### Reproducible?
[Yes]
Change the setting and these 2 tests will fail.
### How to test the issue is resolved
The DataArchiveCreator should create a data archive based on the 2 settings from the settings file, not hard coded location. The end to end tests should then be able to pass on any environment (including dev and prod in theory)
| non_process | dataarchivecreator prevents changes to settings py issue raised by what this is pretty convoluted to explain so i will try to bullet point it queue processors autoreduction processor test settings py has settings ceph directory and scripts directory as of right now there is also a comment above them that shows the settings used for the dev node and probably prod in test end to end py the class dataarchivecreator is used to setup a fake data archive for storing datafiles scripts etc the dataarchivecreator will only set up a directory that matches the current value in test settings py this means different things if those settings are ever changed test end to end py will break as the dataarchivecreator is not using the setting but is hardcoded to create a directory matching the setting as it exists right now the production and development builds would technically fail these end to end test cases the docstring for dataarchivecreator also claims to produce an archive identical to isis data archive it doesn t where settings at queue processors autoreduction processor test settings py archive creator at utils data archive data archive creator py tests at systemtests test end to end py how a need to change the settings revealed this reproducible change the setting and these tests will fail how to test the issue is resolved the dataarchivecreator should create a data archive based on the settings from the settings file not hard coded location the end to end tests should then be able to pass on any environment including dev and prod in theory | 0 |
130,542 | 5,117,710,287 | IssuesEvent | 2017-01-07 19:46:58 | benvenutti/simpleDSP | https://api.github.com/repos/benvenutti/simpleDSP | closed | Sinusoids generation | priority: medium status: in progress type: feature | Implement the generation of sinusoids and complex sinusoids. This issue relates to assignments from week number 2. | 1.0 | Sinusoids generation - Implement the generation of sinusoids and complex sinusoids. This issue relates to assignments from week number 2. | non_process | sinusoids generation implement the generation of sinusoids and complex sinusoids this issue relates to assignments from week number | 0 |
243,209 | 20,370,233,529 | IssuesEvent | 2022-02-21 10:29:42 | PolicyEngine/openfisca-us | https://api.github.com/repos/PolicyEngine/openfisca-us | opened | Investigate GainsTax $200 discrepancy in unit test | testing | GainsTax unit test 2 differs from tax-calc by $162. | 1.0 | Investigate GainsTax $200 discrepancy in unit test - GainsTax unit test 2 differs from tax-calc by $162. | non_process | investigate gainstax discrepancy in unit test gainstax unit test differs from tax calc by | 0 |
679 | 3,151,375,021 | IssuesEvent | 2015-09-16 07:41:44 | rg3/youtube-dl | https://api.github.com/repos/rg3/youtube-dl | closed | mp4 recode broken | postprocessors | Recoding the video to mp4 using ffmpeg doesn't work:
$ youtube-dl --recode-video=mp4 http://www.youtube.com/watch?v=iiFWoXQPOJc
[youtube] Setting language
[youtube] iiFWoXQPOJc: Downloading video webpage
[youtube] iiFWoXQPOJc: Downloading video info webpage
[youtube] iiFWoXQPOJc: Extracting video information
[download] Destination: Work Done by Isothermic Process-iiFWoXQPOJc.flv
[download] 100.0% of 21.94MiB at 864.23KiB/s ETA 00:00
[ffmpeg] Converting video from flv to mp4, Destination: Work Done by Isothermic Process-iiFWoXQPOJc.mp4
ERROR: The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
| 1.0 | mp4 recode broken - Recoding the video to mp4 using ffmpeg doesn't work:
$ youtube-dl --recode-video=mp4 http://www.youtube.com/watch?v=iiFWoXQPOJc
[youtube] Setting language
[youtube] iiFWoXQPOJc: Downloading video webpage
[youtube] iiFWoXQPOJc: Downloading video info webpage
[youtube] iiFWoXQPOJc: Extracting video information
[download] Destination: Work Done by Isothermic Process-iiFWoXQPOJc.flv
[download] 100.0% of 21.94MiB at 864.23KiB/s ETA 00:00
[ffmpeg] Converting video from flv to mp4, Destination: Work Done by Isothermic Process-iiFWoXQPOJc.mp4
ERROR: The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
| process | recode broken recoding the video to using ffmpeg doesn t work youtube dl recode video setting language iifwoxqpojc downloading video webpage iifwoxqpojc downloading video info webpage iifwoxqpojc extracting video information destination work done by isothermic process iifwoxqpojc flv of at s eta converting video from flv to destination work done by isothermic process iifwoxqpojc error the encoder aac is experimental but experimental codecs are not enabled add strict if you want to use it | 1 |
351,680 | 10,522,044,314 | IssuesEvent | 2019-09-30 07:48:24 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.ixxx.com - site is not usable | browser-firefox-mobile engine-gecko priority-normal | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.ixxx.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: site doesn't open
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/9/b474e472-0c9b-4ee9-98e1-9aff5fdbe1be.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190913215619</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with โค๏ธ_ | 1.0 | www.ixxx.com - site is not usable - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.ixxx.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: site doesn't open
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/9/b474e472-0c9b-4ee9-98e1-9aff5fdbe1be.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190913215619</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with โค๏ธ_ | non_process | site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description site doesn t open steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with โค๏ธ | 0 |
98,063 | 8,674,295,960 | IssuesEvent | 2018-11-30 06:58:54 | humera987/FXLabs-Test-Automation | https://api.github.com/repos/humera987/FXLabs-Test-Automation | reopened | FXLabs Testing 30 : ApiV1DataRecordsGetQueryParamPageEmptyValue | FXLabs Testing 30 | Project : FXLabs Testing 30
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=Y2YxN2JhM2QtNTcwYy00Y2ZlLWE3YjEtODRjNGEyNGU1YTlh; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:42:43 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/data-records?page=
Request :
Response :
{
"timestamp" : "2018-11-30T06:42:43.481+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/data-records"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot --- | 1.0 | FXLabs Testing 30 : ApiV1DataRecordsGetQueryParamPageEmptyValue - Project : FXLabs Testing 30
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=Y2YxN2JhM2QtNTcwYy00Y2ZlLWE3YjEtODRjNGEyNGU1YTlh; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:42:43 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/data-records?page=
Request :
Response :
{
"timestamp" : "2018-11-30T06:42:43.481+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/data-records"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot --- | non_process | fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api data records logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot | 0 |
21,241 | 28,364,815,752 | IssuesEvent | 2023-04-12 13:16:01 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | [MLv2] Add a utility to group columns by tables | .metabase-lib .Team/QueryProcessor :hammer_and_wrench: | The FE often needs columns to be grouped by tables they're coming from (the most prominent use-case is column pickers). We'd like to keep that logic inside MLv2, so it can also encapsulate rules that e.g. custom columns show up under the source table.
Here's a format I have in mind, please feel free to leave comments/feedback:
```ts
const columns = ML.orderableColumns(query);
const groups = ML.groupColumnsByTable(query, columns);
// I suppose groupColumnsByTable could return opaque objects
// that can be passed to display-info method,
// and return something like the following in the end
/**
[
{ display_name: "Orders", is_source_table: true, columns: [ ... ] },
{ display_name: "Products", is_joined_table: true, columns: [ ... ] },
]
*/
```
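If it helps the discussion, here is a rough sketch of how a column picker on the FE side might consume the grouping, continuing from the snippet above; `ML.displayInfo` and `ML.getColumnsInGroup` are hypothetical accessor names used only for illustration, not an agreed interface:
```ts
// Hypothetical consumer of the proposed API; the accessor names are placeholders.
const columns = ML.orderableColumns(query);
const groups = ML.groupColumnsByTable(query, columns);

const pickerSections = groups.map((group) => {
  const info = ML.displayInfo(query, group);
  return {
    // fall back to a reserved "no table" group when the source table is unknown
    name: info.display_name ?? "(no table)",
    items: ML.getColumnsInGroup(group).map(
      (column) => ML.displayInfo(query, column).display_name,
    ),
  };
});
```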
**Notes**
* calculated columns (custom expressions, aggregations, breakouts) show up at the top of the source table's column list
* could be an MLv1 bug or a limitation, but after a few query stages are added, we lose track of a source table. In that case, we don't show any table at all; that means we might need a reserved "no table" kind of group
* please feel free to use `/question/:id/notebook` UI as a reference | 1.0 | [MLv2] Add a utility to group columns by tables - The FE often needs columns to be grouped by tables they're coming from (the most prominent use-case is column pickers). We'd like to keep that logic inside MLv2, so it can also encapsulate rules that e.g. custom columns show up under the source table.
Here's a format I have in mind, please feel free to leave comments/feedback:
```ts
const columns = ML.orderableColumns(query);
const groups = ML.groupColumnsByTable(query, columns);
// I suppose groupColumnsByTable could return opaque objects
// that can be passed to display-info method,
// and return something like the following in the end
/**
[
{ display_name: "Orders", is_source_table: true, columns: [ ... ] },
{ display_name: "Products", is_joined_table: true, columns: [ ... ] },
]
*/
```
**Notes**
* calculated columns (custom expressions, aggregations, breakouts) show up at the top of the source table's column list
* could be an MLv1 bug or a limitation, but after a few query stages are added, we lose track of a source table. In that case, we don't show any table at all; that means we might need a reserved "no table" kind of group
* please feel free to use `/question/:id/notebook` UI as a reference | process | add a utility to group columns by tables the fe often needs columns to be grouped by tables they re coming from the most prominent use case is column pickers we d like to keep that logic inside so it can also encapsulate rules that e g custom columns show up under the source table here s a format i have in mind please feel free to leave comments feedback ts const columns ml orderablecolumns query const groups ml groupcolumnsbytable query columns i suppose groupcolumnsbytable could return opaque objects that can be passed to display info method and return something like the following in the end display name orders is source table true columns display name products is joined table true columns notes calculated columns custom expressions aggregations breakouts show up at the top of the source table s column list could be an bug or a limitation but after a few query stages are added we lose track of a source table in that case we don t show any table at all that means we might need a reserved no table kind of group please feel free to use question id notebook ui as a reference | 1 |
19,641 | 26,005,269,584 | IssuesEvent | 2022-12-20 18:43:53 | daoanhhuy26012001/pacific-hotel | https://api.github.com/repos/daoanhhuy26012001/pacific-hotel | closed | create about us | in-process | - [x] video
- [x] click pause
**card**
- [x] images
- [x] title
- [x] text
- [x] button
| 1.0 | create about us - - [x] video
- [x] click pause
**card**
- [x] images
- [x] title
- [x] text
- [x] button
| process | create about us video click pause card images title text button | 1 |
338,860 | 24,602,011,355 | IssuesEvent | 2022-10-14 13:16:21 | kubewarden/kubewarden.io | https://api.github.com/repos/kubewarden/kubewarden.io | opened | Comparison page | documentation | Write a page where we state how Kubewarden compares against other similar solutions like OPA, Gatekeeper and Kyverno.
The goals are:
* Allow someone approaching Kubewarden to understand how many of the features of kyverno/gatekeeper are missing/already implemented
* Provide an overview of the direction Kubewarden plans to take (what are the major features still missing)
* It's a guideline for Kubewarden developers, these are our lighthouse goals
We can take inspiration from what others are doing (like nomad, providing a comparison against Kubernetes) | 1.0 | Comparison page - Write a page where we state how Kubewarden compares against other similar solutions like OPA, Gatekeeper and Kyverno.
The goals are:
* Allow someone approaching Kubewarden to understand how many of the features of kyverno/gatekeeper are missing/already implemented
* Provide an overview of the direction Kubewarden plans to take (what are the major features still missing)
* It's a guideline for Kubewarden developers, these are our lighthouse goals
We can take inspiration from what others are doing (like nomad, providing a comparison against Kubernetes) | non_process | comparison page write a page where we state how kubewarden compares against other similar solutions like opa gatekeeper and kyverno the goals are allow someone approaching kubewarden to understand how many of the features of kyverno gatekeeper are missing already implemented provide an overview of the direction kubewarden plans to take what are the major features still missing it s a guideline for kubewarden developers these are our lighthouse goals we can take inspiration from what others are doing like nomad providing a comparison against kubernetes | 0 |
89,684 | 10,607,179,306 | IssuesEvent | 2019-10-11 02:40:12 | cyrusimap/cyrus-imapd | https://api.github.com/repos/cyrusimap/cyrus-imapd | closed | Docs: virtdomains defaults: release docs say userid, but actually off | documentation | From Stephan Lauffer on info-cyrus:
> This is a minor "problem"... just found it:
>
> The Chapter "Updates to default configuration" from
> https://www.cyrusimap.org/imap/download/release-notes/3.0/x/3.0.0.html
> say:
>
> "virtdomains is now userid by default (was off)
>
> Indeed virtdomains is still "off" by default.
Fix the 3.0.0 release notes (and possibly upgrade guide) to reflect that in all versions of 3.0 prior to now, virtdomains is still off. (and that there was an error in the docs previously). | 1.0 | Docs: virtdomains defaults: release docs say userid, but actually off - From Stephan Lauffer on info-cyrus:
> This is a minor "problem"... just found it:
>
> The Chapter "Updates to default configuration" from
> https://www.cyrusimap.org/imap/download/release-notes/3.0/x/3.0.0.html
> say:
>
> "virtdomains is now userid by default (was off)
>
> Indeed virtdomains is still "off" by default.
Fix the 3.0.0 release notes (and possibly upgrade guide) to reflect that in all versions of 3.0 prior to know, virtdomains is still off. (and that there was an error in the docs previously). | non_process | docs virtdomains defaults release docs say userid but actually off from stephan lauffer on info cyrus this is a minor problem just found it the chapter updates to default configuration from say virtdomains is now userid by default was off indeed virtdomains is still off by default fix the release notes and possibly upgrade guide to reflect that in all versions of prior to know virtdomains is still off and that there was an error in the docs previously | 0 |
136,112 | 18,722,325,732 | IssuesEvent | 2021-11-03 13:10:48 | KDWSS/dd-trace-java | https://api.github.com/repos/KDWSS/dd-trace-java | opened | CVE-2018-9159 (Medium) detected in spark-core-2.4.jar, spark-core-2.3.jar | security vulnerability | ## CVE-2018-9159 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spark-core-2.4.jar</b>, <b>spark-core-2.3.jar</b></p></summary>
<p>
<details><summary><b>spark-core-2.4.jar</b></p></summary>
<p>A Sinatra inspired java web framework</p>
<p>Library home page: <a href="http://www.sparkjava.com">http://www.sparkjava.com</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/com.sparkjava/spark-core/2.4/72bc518c557ba4e3ae0676eed3e587b3074ca0a3/spark-core-2.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **spark-core-2.4.jar** (Vulnerable Library)
</details>
<details><summary><b>spark-core-2.3.jar</b></p></summary>
<p>A Sinatra inspired java web framework</p>
<p>Library home page: <a href="http://www.sparkjava.com">http://www.sparkjava.com</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/com.sparkjava/spark-core/2.3/b0326d867f1ecbc8d624f64175d2aa5809bb0599/spark-core-2.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **spark-core-2.3.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spark before 2.7.2, a remote attacker can read unintended static files via various representations of absolute or relative pathnames, as demonstrated by file: URLs and directory traversal sequences. NOTE: this product is unrelated to Ignite Realtime Spark.
<p>Publish Date: 2018-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9159>CVE-2018-9159</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-9159">https://nvd.nist.gov/vuln/detail/CVE-2018-9159</a></p>
<p>Release Date: 2018-03-31</p>
<p>Fix Resolution: 2.7.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.sparkjava","packageName":"spark-core","packageVersion":"2.4","packageFilePaths":["/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle"],"isTransitiveDependency":false,"dependencyTree":"com.sparkjava:spark-core:2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.2"},{"packageType":"Java","groupId":"com.sparkjava","packageName":"spark-core","packageVersion":"2.3","packageFilePaths":["/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle"],"isTransitiveDependency":false,"dependencyTree":"com.sparkjava:spark-core:2.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-9159","vulnerabilityDetails":"In Spark before 2.7.2, a remote attacker can read unintended static files via various representations of absolute or relative pathnames, as demonstrated by file: URLs and directory traversal sequences. NOTE: this product is unrelated to Ignite Realtime Spark.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9159","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-9159 (Medium) detected in spark-core-2.4.jar, spark-core-2.3.jar - ## CVE-2018-9159 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spark-core-2.4.jar</b>, <b>spark-core-2.3.jar</b></p></summary>
<p>
<details><summary><b>spark-core-2.4.jar</b></p></summary>
<p>A Sinatra inspired java web framework</p>
<p>Library home page: <a href="http://www.sparkjava.com">http://www.sparkjava.com</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/com.sparkjava/spark-core/2.4/72bc518c557ba4e3ae0676eed3e587b3074ca0a3/spark-core-2.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **spark-core-2.4.jar** (Vulnerable Library)
</details>
<details><summary><b>spark-core-2.3.jar</b></p></summary>
<p>A Sinatra inspired java web framework</p>
<p>Library home page: <a href="http://www.sparkjava.com">http://www.sparkjava.com</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/com.sparkjava/spark-core/2.3/b0326d867f1ecbc8d624f64175d2aa5809bb0599/spark-core-2.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **spark-core-2.3.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spark before 2.7.2, a remote attacker can read unintended static files via various representations of absolute or relative pathnames, as demonstrated by file: URLs and directory traversal sequences. NOTE: this product is unrelated to Ignite Realtime Spark.
<p>Publish Date: 2018-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9159>CVE-2018-9159</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-9159">https://nvd.nist.gov/vuln/detail/CVE-2018-9159</a></p>
<p>Release Date: 2018-03-31</p>
<p>Fix Resolution: 2.7.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.sparkjava","packageName":"spark-core","packageVersion":"2.4","packageFilePaths":["/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle"],"isTransitiveDependency":false,"dependencyTree":"com.sparkjava:spark-core:2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.2"},{"packageType":"Java","groupId":"com.sparkjava","packageName":"spark-core","packageVersion":"2.3","packageFilePaths":["/dd-java-agent/instrumentation/sparkjava-2.3/sparkjava-2.3.gradle"],"isTransitiveDependency":false,"dependencyTree":"com.sparkjava:spark-core:2.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-9159","vulnerabilityDetails":"In Spark before 2.7.2, a remote attacker can read unintended static files via various representations of absolute or relative pathnames, as demonstrated by file: URLs and directory traversal sequences. NOTE: this product is unrelated to Ignite Realtime Spark.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9159","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | cve medium detected in spark core jar spark core jar cve medium severity vulnerability vulnerable libraries spark core jar spark core jar spark core jar a sinatra inspired java web framework library home page a href path to dependency file dd trace java dd java agent instrumentation sparkjava sparkjava gradle path to vulnerable library caches modules files com sparkjava spark core spark core jar dependency hierarchy x spark core jar vulnerable library spark core jar a sinatra inspired java web framework library home page a href path to dependency file dd trace java dd java agent instrumentation sparkjava sparkjava gradle path to vulnerable library caches modules files com sparkjava spark core spark core jar dependency hierarchy x spark core jar vulnerable library found in head commit a href found in base branch master vulnerability details in spark before a remote attacker can read unintended static files via various representations of absolute or relative pathnames as demonstrated by file urls and directory traversal sequences note this product is unrelated to ignite realtime spark publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com sparkjava spark core isminimumfixversionavailable true minimumfixversion packagetype java groupid com sparkjava packagename spark core packageversion packagefilepaths istransitivedependency false dependencytree com sparkjava spark core isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails in spark before a remote attacker can read unintended static files via various representations of absolute or relative pathnames as demonstrated by file urls and 
directory traversal sequences note this product is unrelated to ignite realtime spark vulnerabilityurl | 0 |
57,170 | 15,725,386,792 | IssuesEvent | 2021-03-29 09:56:03 | danmar/testissues | https://api.github.com/repos/danmar/testissues | opened | false positive because of condional macro (Trac #8) | False positive Incomplete Migration Migrated from Trac defect hyd_danmar | Migrated from https://trac.cppcheck.net/ticket/8
```json
{
"status": "closed",
"changetime": "2009-03-04T13:48:32",
"description": "{{{\n#if VERBOSE\n#define LOG(x) do { if (VERBOSE) printf x; } while (0)\n#else\n#define LOG(x)\n#endif\n\nint main(int argc, char *argv[])\n{\n\tint i;\n\n\tLOG((\"message\\n\"));\n\n\tfor (i = 0; i < argc; i++)\n\t{\n\t}\n\n\treturn 0;\n}\n}}}\n\nChecking c:/temp/cppcheck_tests/test32.c: ...\n[c:/temp/cppcheck_tests/test32.c:8]: Unused variable 'i'\nChecking c:/temp/cppcheck_tests/test32.c: VERBOSE...\n\nWhen you remove the \"#if VERBOSE #else #endif\" and just have either macro it works.",
"reporter": "kidkat",
"cc": "sigra",
"resolution": "fixed",
"_ts": "1236174512000000",
"component": "False positive",
"summary": "false positive because of condional macro",
"priority": "major",
"keywords": "",
"time": "2009-01-17T08:52:57",
"milestone": "1.28",
"owner": "hyd_danmar",
"type": "defect"
}
```
| 1.0 | false positive because of condional macro (Trac #8) - Migrated from https://trac.cppcheck.net/ticket/8
```json
{
"status": "closed",
"changetime": "2009-03-04T13:48:32",
"description": "{{{\n#if VERBOSE\n#define LOG(x) do { if (VERBOSE) printf x; } while (0)\n#else\n#define LOG(x)\n#endif\n\nint main(int argc, char *argv[])\n{\n\tint i;\n\n\tLOG((\"message\\n\"));\n\n\tfor (i = 0; i < argc; i++)\n\t{\n\t}\n\n\treturn 0;\n}\n}}}\n\nChecking c:/temp/cppcheck_tests/test32.c: ...\n[c:/temp/cppcheck_tests/test32.c:8]: Unused variable 'i'\nChecking c:/temp/cppcheck_tests/test32.c: VERBOSE...\n\nWhen you remove the \"#if VERBOSE #else #endif\" and just have either macro it works.",
"reporter": "kidkat",
"cc": "sigra",
"resolution": "fixed",
"_ts": "1236174512000000",
"component": "False positive",
"summary": "false positive because of condional macro",
"priority": "major",
"keywords": "",
"time": "2009-01-17T08:52:57",
"milestone": "1.28",
"owner": "hyd_danmar",
"type": "defect"
}
```
| non_process | false positive because of condional macro trac migrated from json status closed changetime description n if verbose n define log x do if verbose printf x while n else n define log x n endif n nint main int argc char argv n n tint i n n tlog message n n n tfor i i argc i n t n t n n treturn n n n nchecking c temp cppcheck tests c n unused variable i nchecking c temp cppcheck tests c verbose n nwhen you remove the if verbose else endif and just have either macro it works reporter kidkat cc sigra resolution fixed ts component false positive summary false positive because of condional macro priority major keywords time milestone owner hyd danmar type defect | 0 |
21,342 | 29,087,452,465 | IssuesEvent | 2023-05-16 02:00:10 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Tue, 16 May 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
### mAedesID: Android Application for Aedes Mosquito Species Identification using Convolutional Neural Network
- **Authors:** G. Jeyakodi, Trisha Agarwal, P. Shanthi Bala
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2305.07664
- **Pdf link:** https://arxiv.org/pdf/2305.07664
- **Abstract**
Vector-Borne Disease (VBD) is an infectious disease transmitted through the pathogenic female Aedes mosquito to humans and animals. It is important to control dengue disease by reducing the spread of Aedes mosquito vectors. Community awareness plays a crucial role to ensure Aedes control programmes and encourages the communities to involve active participation. Identifying the species of mosquito will help to recognize the mosquito density in the locality and intensifying mosquito control efforts in particular areas. This will help in avoiding Aedes breeding sites around residential areas and reduce adult mosquitoes. To serve this purpose, an Android application is developed to identify Aedes species that help the community to contribute in mosquito control events. Several Android applications have been developed to identify species like birds, plant species, and Anopheles mosquito species. In this work, a user-friendly mobile application mAedesID is developed for identifying the Aedes mosquito species using a deep learning Convolutional Neural Network (CNN) algorithm which is best suited for species image classification and achieves better accuracy for voluminous images. The mobile application can be downloaded from the URL https://tinyurl.com/mAedesID.
### EV-MGRFlowNet: Motion-Guided Recurrent Network for Unsupervised Event-based Optical Flow with Hybrid Motion-Compensation Loss
- **Authors:** Hao Zhuang, Xinjie Huang, Kuanxu Hou, Delei Kong, Chenming Hu, Zheng Fang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.07853
- **Pdf link:** https://arxiv.org/pdf/2305.07853
- **Abstract**
Event cameras offer promising properties, such as high temporal resolution and high dynamic range. These benefits have been utilized into many machine vision tasks, especially optical flow estimation. Currently, most existing event-based works use deep learning to estimate optical flow. However, their networks have not fully exploited prior hidden states and motion flows. Additionally, their supervision strategy has not fully leveraged the geometric constraints of event data to unlock the potential of networks. In this paper, we propose EV-MGRFlowNet, an unsupervised event-based optical flow estimation pipeline with motion-guided recurrent networks using a hybrid motion-compensation loss. First, we propose a feature-enhanced recurrent encoder network (FERE-Net) which fully utilizes prior hidden states to obtain multi-level motion features. Then, we propose a flow-guided decoder network (FGD-Net) to integrate prior motion flows. Finally, we design a hybrid motion-compensation loss (HMC-Loss) to strengthen geometric constraints for the more accurate alignment of events. Experimental results show that our method outperforms the current state-of-the-art (SOTA) method on the MVSEC dataset, with an average reduction of approximately 22.71% in average endpoint error (AEE). To our knowledge, our method ranks first among unsupervised learning-based methods.
### Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering
- **Authors:** Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2305.08059
- **Pdf link:** https://arxiv.org/pdf/2305.08059
- **Abstract**
Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have focused on using the explicit semantic connections between the question and visual information especially at the event level. There is need for using such semantic connections to facilitate complex reasoning across video frames. Therefore, we propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) of the question is being focused on. We conduct experiments on a benchmark EVQA dataset - TrafficQA. Results show that our proposed approach achieves superior performance compared to previous state-of-the-art models. Our code will be made publicly available for research use.
### MV-Map: Offboard HD-Map Generation with Multi-view Consistency
- **Authors:** Ziyang Xie, Ziqi Pang, Yuxiong Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.08851
- **Pdf link:** https://arxiv.org/pdf/2305.08851
- **Abstract**
While bird's-eye-view (BEV) perception models can be useful for building high-definition maps (HD-Maps) with less human labor, their results are often unreliable and demonstrate noticeable inconsistencies in the predicted HD-Maps from different viewpoints. This is because BEV perception is typically set up in an 'onboard' manner, which restricts the computation and consequently prevents algorithms from reasoning multiple views simultaneously. This paper overcomes these limitations and advocates a more practical 'offboard' HD-Map generation setup that removes the computation constraints, based on the fact that HD-Maps are commonly reusable infrastructures built offline in data centers. To this end, we propose a novel offboard pipeline called MV-Map that capitalizes multi-view consistency and can handle an arbitrary number of frames with the key design of a 'region-centric' framework. In MV-Map, the target HD-Maps are created by aggregating all the frames of onboard predictions, weighted by the confidence scores assigned by an 'uncertainty network'. To further enhance multi-view consistency, we augment the uncertainty network with the global 3D structure optimized by a voxelized neural radiance field (Voxel-NeRF). Extensive experiments on nuScenes show that our MV-Map significantly improves the quality of HD-Maps, further highlighting the importance of offboard methods for HD-Map generation.
## Keyword: event camera
### EV-MGRFlowNet: Motion-Guided Recurrent Network for Unsupervised Event-based Optical Flow with Hybrid Motion-Compensation Loss
- **Authors:** Hao Zhuang, Xinjie Huang, Kuanxu Hou, Delei Kong, Chenming Hu, Zheng Fang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.07853
- **Pdf link:** https://arxiv.org/pdf/2305.07853
- **Abstract**
Event cameras offer promising properties, such as high temporal resolution and high dynamic range. These benefits have been utilized into many machine vision tasks, especially optical flow estimation. Currently, most existing event-based works use deep learning to estimate optical flow. However, their networks have not fully exploited prior hidden states and motion flows. Additionally, their supervision strategy has not fully leveraged the geometric constraints of event data to unlock the potential of networks. In this paper, we propose EV-MGRFlowNet, an unsupervised event-based optical flow estimation pipeline with motion-guided recurrent networks using a hybrid motion-compensation loss. First, we propose a feature-enhanced recurrent encoder network (FERE-Net) which fully utilizes prior hidden states to obtain multi-level motion features. Then, we propose a flow-guided decoder network (FGD-Net) to integrate prior motion flows. Finally, we design a hybrid motion-compensation loss (HMC-Loss) to strengthen geometric constraints for the more accurate alignment of events. Experimental results show that our method outperforms the current state-of-the-art (SOTA) method on the MVSEC dataset, with an average reduction of approximately 22.71% in average endpoint error (AEE). To our knowledge, our method ranks first among unsupervised learning-based methods.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### PanFlowNet: A Flow-Based Deep Network for Pan-sharpening
- **Authors:** Gang Yang, Xiangyong Cao, Wenzhe Xiao, Man Zhou, Aiping Liu, Xun chen, Deyu Meng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2305.07774
- **Pdf link:** https://arxiv.org/pdf/2305.07774
- **Abstract**
Pan-sharpening aims to generate a high-resolution multispectral (HRMS) image by integrating the spectral information of a low-resolution multispectral (LRMS) image with the texture details of a high-resolution panchromatic (PAN) image. It essentially inherits the ill-posed nature of the super-resolution (SR) task that diverse HRMS images can degrade into an LRMS image. However, existing deep learning-based methods recover only one HRMS image from the LRMS image and PAN image using a deterministic mapping, thus ignoring the diversity of the HRMS image. In this paper, to alleviate this ill-posed issue, we propose a flow-based pan-sharpening network (PanFlowNet) to directly learn the conditional distribution of HRMS image given LRMS image and PAN image instead of learning a deterministic mapping. Specifically, we first transform this unknown conditional distribution into a given Gaussian distribution by an invertible network, and the conditional distribution can thus be explicitly defined. Then, we design an invertible Conditional Affine Coupling Block (CACB) and further build the architecture of PanFlowNet by stacking a series of CACBs. Finally, the PanFlowNet is trained by maximizing the log-likelihood of the conditional distribution given a training set and can then be used to predict diverse HRMS images. The experimental results verify that the proposed PanFlowNet can generate various HRMS images given an LRMS image and a PAN image. Additionally, the experimental results on different kinds of satellite datasets also demonstrate the superiority of our PanFlowNet compared with other state-of-the-art methods both visually and quantitatively.
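For reference, in the usual conditional normalizing-flow formulation (generic notation, not taken from the paper), the objective described here is the change-of-variables log-likelihood
$$\log p_\theta(y \mid x) = \log p_Z\big(f_\theta(y; x)\big) + \log\left|\det \frac{\partial f_\theta(y; x)}{\partial y}\right|,$$
where $y$ is the HRMS image, $x$ the (LRMS, PAN) conditioning pair, $f_\theta$ the invertible network, and $p_Z$ the Gaussian base density.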
### On the Hidden Mystery of OCR in Large Multimodal Models
- **Authors:** Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, Xiang Bai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2305.07895
- **Pdf link:** https://arxiv.org/pdf/2305.07895
- **Abstract**
Large models have recently played a dominant role in natural language processing and multimodal vision-language learning. It remains less explored about their efficacy in text-related visual tasks. We conducted a comprehensive study of existing publicly available multimodal models, evaluating their performance in text recognition, text-based visual question answering, and key information extraction. Our findings reveal strengths and weaknesses in these models, which primarily rely on semantic understanding for word recognition and exhibit inferior perception of individual character shapes. They also display indifference towards text length and have limited capabilities in detecting fine-grained features in images. Consequently, these results demonstrate that even the current most powerful large multimodal models cannot match domain-specific methods in traditional text tasks and face greater challenges in more complex tasks. Most importantly, the baseline results showcased in this study could provide a foundational framework for the conception and assessment of innovative strategies targeted at enhancing zero-shot multimodal techniques. Evaluation pipeline will be available at https://github.com/Yuliang-Liu/MultimodalOCR.
### Instance-Aware Repeat Factor Sampling for Long-Tailed Object Detection
- **Authors:** Burhaneddin Yaman, Tanvir Mahmud, Chun-Hao Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2305.08069
- **Pdf link:** https://arxiv.org/pdf/2305.08069
- **Abstract**
We propose an embarrassingly simple method -- instance-aware repeat factor sampling (IRFS) to address the problem of imbalanced data in long-tailed object detection. Imbalanced datasets in real-world object detection often suffer from a large disparity in the number of instances for each class. To improve the generalization performance of object detection models on rare classes, various data sampling techniques have been proposed. Repeat factor sampling (RFS) has shown promise due to its simplicity and effectiveness. Despite its efficiency, RFS completely neglects the instance counts and solely relies on the image count during re-sampling process. However, instance count may immensely vary for different classes with similar image counts. Such variation highlights the importance of both image and instance for addressing the long-tail distributions. Thus, we propose IRFS which unifies instance and image counts for the re-sampling process to be aware of different perspectives of the imbalance in long-tailed datasets. Our method shows promising results on the challenging LVIS v1.0 benchmark dataset over various architectures and backbones, demonstrating their effectiveness in improving the performance of object detection models on rare classes with a relative $+50\%$ average precision (AP) improvement over counterpart RFS. IRFS can serve as a strong baseline and be easily incorporated into existing long-tailed frameworks.
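The core computation behind repeat-factor-style sampling is small enough to sketch. The snippet below follows the usual RFS recipe (per-category repeat factor max(1, sqrt(t/f)), per-image factor taken from the rarest category it contains) and folds instance counts into the category frequency as a simple average of image-level and instance-level frequencies; that averaging is an assumption for illustration, not necessarily the exact combination used by IRFS.

```python
import math
from collections import defaultdict

def repeat_factors(images, t=1e-3):
    """Per-image repeat factors in the spirit of RFS/IRFS.

    `images` is a list of per-image category-id lists. Plain RFS uses only the
    fraction of images containing a category; the instance-aware variant below
    also folds in instance counts (here a simple average of the two
    frequencies, an assumed form)."""
    n_images = len(images)
    img_count = defaultdict(int)   # number of images containing category c
    ins_count = defaultdict(int)   # number of instances of category c
    total_ins = 0
    for anns in images:
        for c in set(anns):
            img_count[c] += 1
        for c in anns:
            ins_count[c] += 1
            total_ins += 1

    def cat_factor(c):
        f_img = img_count[c] / n_images
        f_ins = ins_count[c] / total_ins
        f = 0.5 * (f_img + f_ins)          # instance-aware frequency (assumption)
        return max(1.0, math.sqrt(t / f))

    # An image is repeated according to its rarest category.
    return [max(cat_factor(c) for c in set(anns)) if anns else 1.0
            for anns in images]

# Toy dataset: category 0 is common, category 7 is rare, so its image is repeated more.
data = [[0], [0], [0, 7], [0], [0]]
print(repeat_factors(data, t=0.2))
```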
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### ROI-based Deep Image Compression with Swin Transformers
- **Authors:** Binglin Li, Jie Liang, Haisheng Fu, Jingning Han
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2305.07783
- **Pdf link:** https://arxiv.org/pdf/2305.07783
- **Abstract**
Encoding the Region Of Interest (ROI) with better quality than the background has many applications including video conferencing systems, video surveillance and object-oriented vision tasks. In this paper, we propose a ROI-based image compression framework with Swin transformers as main building blocks for the autoencoder network. The binary ROI mask is integrated into different layers of the network to provide spatial information guidance. Based on the ROI mask, we can control the relative importance of the ROI and non-ROI by modifying the corresponding Lagrange multiplier $ \lambda $ for different regions. Experimental results show our model achieves higher ROI PSNR than other methods and modest average PSNR for human evaluation. When tested on models pre-trained with original images, it has superior object detection and instance segmentation performance on the COCO validation dataset.
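The region-dependent trade-off described above amounts to a rate-distortion objective whose Lagrange multiplier varies with the ROI mask. A minimal NumPy sketch, with a placeholder scalar for the rate term and illustrative lambda values rather than the paper's settings:

```python
import numpy as np

def roi_rd_loss(x, x_hat, rate_bits, roi_mask, lam_roi=0.05, lam_bg=0.005):
    """Rate-distortion loss with a per-pixel Lagrange multiplier.

    Pixels inside the ROI get a larger lambda, so their distortion is penalised
    more and the codec spends more bits on them. The lambda values here are
    placeholders for illustration."""
    lam_map = np.where(roi_mask > 0.5, lam_roi, lam_bg)
    distortion = lam_map * (x - x_hat) ** 2
    return rate_bits + distortion.mean()   # R + spatially weighted D

# Toy example: 8x8 image, ROI in the top-left quadrant.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
x_hat = x + rng.normal(scale=0.05, size=(8, 8))
mask = np.zeros((8, 8)); mask[:4, :4] = 1.0
print(roi_rd_loss(x, x_hat, rate_bits=0.8, roi_mask=mask))
```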
### GSB: Group Superposition Binarization for Vision Transformer with Limited Training Samples
- **Authors:** Tian Gao, Cheng-Zhong Xu, Le Zhang, Hui Kong
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.07931
- **Pdf link:** https://arxiv.org/pdf/2305.07931
- **Abstract**
Affected by the massive number of parameters, ViT usually suffers from serious overfitting problems with a relatively limited number of training samples. In addition, ViT generally demands heavy computing resources, which limits its deployment on resource-constrained devices. As a type of model-compression method, model binarization is potentially a good choice to solve the above problems. Compared with the full-precision one, the model with the binarization method replaces complex tensor multiplication with simple bit-wise binary operations and represents full-precision model parameters and activations with only 1-bit ones, which potentially solves the problems of model size and computational complexity, respectively. In this paper, we find that the decline in accuracy of the binary ViT model is mainly due to the information loss of the Attention module and the Value vector. Therefore, we propose a novel model binarization technique, called Group Superposition Binarization (GSB), to deal with these issues. Furthermore, in order to further improve the performance of the binarization model, we have investigated the gradient calculation procedure in the binarization process and derived more proper gradient calculation equations for GSB to reduce the influence of gradient mismatch. Then, the knowledge distillation technique is introduced to alleviate the performance degradation caused by model binarization. Experiments on three datasets with limited numbers of training samples demonstrate that the proposed GSB model achieves state-of-the-art performance among the binary quantization schemes and exceeds its full-precision counterpart on some indicators.
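To make the "superposition of binary bases" idea concrete, the snippet below greedily fits a real-valued tensor as a sum of scaled sign tensors; the approximation error shrinks as more bases are added. This is a generic illustration only, not the paper's GSB formulation for ViT attention and value tensors.

```python
import numpy as np

def group_superposition_binarize(w, n_groups=3):
    """Approximate a real-valued tensor as a sum of scaled binary (sign) bases
    by greedy residual fitting. Each scale is the least-squares optimal factor
    for a sign basis, i.e. the mean absolute value of the current residual."""
    residual = w.copy()
    approx = np.zeros_like(w)
    bases = []
    for _ in range(n_groups):
        b = np.sign(residual)
        b[b == 0] = 1.0
        alpha = np.abs(residual).mean()      # optimal scale for a sign basis
        approx += alpha * b
        bases.append((alpha, b))
        residual = w - approx
    return approx, bases

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_hat, _ = group_superposition_binarize(w, n_groups=3)
print(np.abs(w - w_hat).mean())   # error shrinks as binary bases are added
```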
### DNN-Compressed Domain Visual Recognition with Feature Adaptation
- **Authors:** Yingpeng Deng, Lina J. Karam
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2305.08000
- **Pdf link:** https://arxiv.org/pdf/2305.08000
- **Abstract**
Learning-based image compression was shown to achieve a competitive performance with state-of-the-art transform-based codecs. This motivated the development of new learning-based visual compression standards such as JPEG-AI. Of particular interest to these emerging standards is the development of learning-based image compression systems targeting both humans and machines. This paper is concerned with learning-based compression schemes whose compressed-domain representations can be utilized to perform visual processing and computer vision tasks directly in the compressed domain. In our work, we adopt a learning-based compressed-domain classification framework for performing visual recognition using the compressed-domain latent representation at varying bit-rates. We propose a novel feature adaptation module integrating a lightweight attention model to adaptively emphasize and enhance the key features within the extracted channel-wise information. Also, we design an adaptation training strategy to utilize the pretrained pixel-domain weights. For comparison, in addition to the performance results that are obtained using our proposed latent-based compressed-domain method, we also present performance results using compressed but fully decoded images in the pixel domain as well as original uncompressed images. The obtained performance results show that our proposed compressed-domain classification model can distinctly outperform the existing compressed-domain classification models, and that it can also yield similar accuracy results with a much higher computational efficiency as compared to the pixel-domain models that are trained using fully decoded images.
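A lightweight channel-attention adapter of the kind described above can be sketched as a squeeze-and-excitation-style gate over the compressed-domain latent, followed by a classification head. The PyTorch module below is an illustrative stand-in with assumed channel counts, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttentionAdapter(nn.Module):
    """Lightweight channel attention over a compressed-domain latent.

    A squeeze-and-excitation-style gate re-weights latent channels before a
    classifier head; layer sizes are illustrative assumptions."""
    def __init__(self, channels: int, reduction: int = 4, num_classes: int = 10):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, num_classes))

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.head(latent * self.gate(latent))       # excite, then classify

latent = torch.randn(2, 192, 16, 16)   # toy shape for a learned codec's latent
print(ChannelAttentionAdapter(192)(latent).shape)          # -> torch.Size([2, 10])
```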
### Analyzing Compression Techniques for Computer Vision
- **Authors:** Maniratnam Mandal, Imran Khan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.08075
- **Pdf link:** https://arxiv.org/pdf/2305.08075
- **Abstract**
Compressing deep networks is highly desirable for practical use-cases in computer vision applications. Several techniques have been explored in the literature, and research has been done on finding efficient strategies for combining them. For this project, we aimed to explore three basic compression techniques - knowledge distillation, pruning, and quantization - for small-scale recognition tasks. Along with the basic methods, we also test the efficacy of combining them in a sequential manner. We analyze them using the MNIST and CIFAR-10 datasets and present the results along with a few observations inferred from them.
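Of the three techniques the project combines, knowledge distillation is the easiest to show in a few lines: the student is trained on a blend of temperature-softened teacher outputs and the usual hard-label cross-entropy. A standard Hinton-style sketch in PyTorch; the temperature and mixing weight are illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard knowledge-distillation objective: KL between temperature-softened
    student and teacher distributions, blended with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                       # rescale so gradients match the hard term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

s = torch.randn(8, 10)                # toy student logits
t = torch.randn(8, 10)                # toy teacher logits
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```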
### Generative Adversarial Networks for Spatio-Spectral Compression of Hyperspectral Images
- **Authors:** Akshara Preethy Byju, Martin Hermann Paul Fuchs, Alisa Walda, Begüm Demir
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2305.08514
- **Pdf link:** https://arxiv.org/pdf/2305.08514
- **Abstract**
Deep learning-based image compression methods have led to high rate-distortion performances compared to traditional codecs. Recently, Generative Adversarial Networks (GANs)-based compression models, e.g., High Fidelity Compression (HiFiC), have attracted great attention in the computer vision community. However, most of these works aim for spatial compression only and do not consider the spatio-spectral redundancies observed in hyperspectral images (HSIs). To address this problem, in this paper, we adapt the HiFiC spatial compression model to perform spatio-spectral compression of HSIs. To this end, we introduce two new models: i) HiFiC using Squeeze and Excitation (SE) blocks (denoted as HiFiC$_{SE}$); and ii) HiFiC with 3D convolutions (denoted as HiFiC$_{3D}$). We analyze the effectiveness of HiFiC$_{SE}$ and HiFiC$_{3D}$ in exploiting the spatio-spectral redundancies with channel attention and inter-dependency analysis. Experimental results show the efficacy of the proposed models in performing spatio-spectral compression and reconstruction at reduced bitrates and higher reconstruction quality when compared to JPEG 2000 and the standard HiFiC spatial compression model. The code of the proposed models is publicly available at https://git.tu-berlin.de/rsim/HSI-SSC .
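The HiFiC$_{3D}$ variant replaces 2D convolutions with 3D ones so that the encoder mixes the spectral and spatial dimensions of the hyperspectral cube. A toy PyTorch stage showing the tensor layout; the layer widths, kernel sizes and strides are arbitrary choices for illustration, not the paper's.

```python
import torch
import torch.nn as nn

# A toy 3D-convolutional encoder stage that jointly reduces the spectral and
# spatial dimensions of a hyperspectral cube, in the spirit of the HiFiC_3D
# variant described above (layer sizes are illustrative assumptions).
encoder3d = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=(7, 3, 3), stride=(2, 2, 2), padding=(3, 1, 1)),
    nn.ReLU(inplace=True),
    nn.Conv3d(8, 16, kernel_size=(5, 3, 3), stride=(2, 2, 2), padding=(2, 1, 1)),
)

hsi = torch.randn(1, 1, 64, 32, 32)   # (batch, 1, bands, H, W) toy hyperspectral cube
latent = encoder3d(hsi)
print(latent.shape)                    # both spectral and spatial dims are reduced
```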
## Keyword: RAW
### Is end-to-end learning enough for fitness activity recognition?
- **Authors:** Antoine Mercier, Guillaume Berger, Sunny Panchal, Florian Letsch, Cornelius Boehm, Nahua Kang, Ingo Bax, Roland Memisevic
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2305.08191
- **Pdf link:** https://arxiv.org/pdf/2305.08191
- **Abstract**
End-to-end learning has taken hold of many computer vision tasks, in particular those related to still images, with task-specific optimization yielding very strong performance. Nevertheless, human-centric action recognition is still largely dominated by hand-crafted pipelines, and only individual components are replaced by neural networks that typically operate on individual frames. As a testbed to study the relevance of such pipelines, we present a new fully annotated video dataset of fitness activities. Any recognition capabilities in this domain are almost exclusively a function of human poses and their temporal dynamics, so pose-based solutions should perform well. We show that, with this labelled data, end-to-end learning on raw pixels can compete with state-of-the-art action recognition pipelines based on pose estimation. We also show that end-to-end learning can support temporally fine-grained tasks such as real-time repetition counting.
### A Comprehensive Survey on Segment Anything Model for Vision and Beyond
- **Authors:** Chunhui Zhang, Li Liu, Yawen Cui, Guanjie Huang, Weilin Lin, Yiqian Yang, Yuehong Hu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2305.08196
- **Pdf link:** https://arxiv.org/pdf/2305.08196
- **Abstract**
Artificial intelligence (AI) is evolving towards artificial general intelligence, which refers to the ability of an AI system to perform a wide range of tasks and exhibit a level of intelligence similar to that of a human being. This is in contrast to narrow or specialized AI, which is designed to perform specific tasks with a high degree of efficiency. Therefore, it is urgent to design a general class of models, which we term foundation models, trained on broad data that can be adapted to various downstream tasks. The recently proposed segment anything model (SAM) has made significant progress in breaking the boundaries of segmentation, greatly promoting the development of foundation models for computer vision. To fully comprehend SAM, we conduct a survey study. As the first to comprehensively review the progress of segmenting anything task for vision and beyond based on the foundation model of SAM, this work focuses on its applications to various tasks and data types by discussing its historical development, recent progress, and profound impact on broad applications. We first introduce the background and terminology for foundation models including SAM, as well as state-of-the-art methods contemporaneous with SAM that are significant for segmenting anything task. Then, we analyze and summarize the advantages and limitations of SAM across various image processing applications, including software scenes, real-world scenes, and complex scenes. Importantly, some insights are drawn to guide future research to develop more versatile foundation models and improve the architecture of SAM. We also summarize massive other amazing applications of SAM in vision and beyond.
### PLIP: Language-Image Pre-training for Person Representation Learning
- **Authors:** Jialong Zuo, Changqian Yu, Nong Sang, Changxin Gao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.08386
- **Pdf link:** https://arxiv.org/pdf/2305.08386
- **Abstract**
Pre-training has emerged as an effective technique for learning powerful person representations. Most existing methods have shown that pre-training on pure-vision large-scale datasets like ImageNet and LUPerson has achieved remarkable performance. However, solely relying on visual information, the absence of robust explicit indicators poses a challenge for these methods to learn discriminative person representations. Drawing inspiration from the intrinsic fine-grained attribute indicators of person descriptions, we explore introducing the language modality into person representation learning. To this end, we propose a novel language-image pre-training framework for person representation learning, termed PLIP. To explicitly build fine-grained cross-modal associations, we specifically design three pretext tasks, i.e., semantic-fused image colorization, visual-fused attributes prediction, and vision-language matching. In addition, due to the lack of an appropriate dataset, we present a large-scale person dataset named SYNTH-PEDES, where the Stylish Pedestrian Attributes-union Captioning method is proposed to synthesize diverse textual descriptions. We pre-train PLIP on SYNTH-PEDES and evaluate our model by spanning downstream tasks such as text-based Re-ID, image-based Re-ID, and person attribute recognition. Extensive experiments demonstrate that our model not only significantly improves existing methods on all these tasks, but also shows great ability in the few-shot and domain generalization settings. The code, dataset and weights will be released at https://github.com/Zplusdragon/PLIP.
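Of the three pretext tasks, vision-language matching is commonly realised with a symmetric contrastive objective over matched image/description pairs. The sketch below uses a CLIP-style InfoNCE loss as a plausible stand-in; the paper's exact matching objective may differ.

```python
import torch
import torch.nn.functional as F

def image_text_matching_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over matched image/description pairs: the i-th image
    should score highest against the i-th caption and vice versa."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(img.size(0))           # diagonal pairs are the positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

person_img = torch.randn(4, 256)    # toy person-image embeddings
person_txt = torch.randn(4, 256)    # toy pedestrian-description embeddings
print(image_text_matching_loss(person_img, person_txt))
```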
### Artificial intelligence to advance Earth observation: a perspective
- **Authors:** Devis Tuia, Konrad Schindler, Begüm Demir, Gustau Camps-Valls, Xiao Xiang Zhu, Mrinalini Kochupillai, Sašo Džeroski, Jan N. van Rijn, Holger H. Hoos, Fabio Del Frate, Mihai Datcu, Jorge-Arnulfo Quiané-Ruiz, Volker Markl, Bertrand Le Saux, Rochelle Schneider
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Applications (stat.AP)
- **Arxiv link:** https://arxiv.org/abs/2305.08413
- **Pdf link:** https://arxiv.org/pdf/2305.08413
- **Abstract**
Earth observation (EO) is a prime instrument for monitoring land and ocean processes, studying the dynamics at work, and taking the pulse of our planet. This article gives a bird's eye view of the essential scientific tools and approaches informing and supporting the transition from raw EO data to usable EO-based information. The promises, as well as the current challenges of these developments, are highlighted under dedicated sections. Specifically, we cover the impact of (i) Computer vision; (ii) Machine learning; (iii) Advanced processing and computing; (iv) Knowledge-based AI; (v) Explainable AI and causal inference; (vi) Physics-aware models; (vii) User-centric approaches; and (viii) the much-needed discussion of ethical and societal issues related to the massive use of ML technologies in EO.
## Keyword: raw image
There is no result
# New submissions for Tue, 16 May 23
## Keyword: events
### mAedesID: Android Application for Aedes Mosquito Species Identification using Convolutional Neural Network
- **Authors:** G. Jeyakodi, Trisha Agarwal, P. Shanthi Bala
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2305.07664
- **Pdf link:** https://arxiv.org/pdf/2305.07664
- **Abstract**
Vector-Borne Disease (VBD) is an infectious disease transmitted through the pathogenic female Aedes mosquito to humans and animals. It is important to control dengue disease by reducing the spread of Aedes mosquito vectors. Community awareness plays a crucial role in ensuring Aedes control programmes and encouraging communities to participate actively. Identifying the species of mosquito will help to recognize the mosquito density in the locality and to intensify mosquito control efforts in particular areas. This will help in avoiding Aedes breeding sites around residential areas and reducing adult mosquitoes. To serve this purpose, an Android application is developed to identify Aedes species, helping the community contribute to mosquito control events. Several Android applications have been developed to identify species like birds, plant species, and Anopheles mosquito species. In this work, a user-friendly mobile application, mAedesID, is developed for identifying the Aedes mosquito species using a deep learning Convolutional Neural Network (CNN) algorithm, which is best suited for species image classification and achieves better accuracy for voluminous images. The mobile application can be downloaded from the URL https://tinyurl.com/mAedesID.
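The abstract does not give the network details, but the classification core of such an app can be approximated by a small CNN. A minimal PyTorch sketch with assumed layer sizes and an assumed two-class head (Aedes vs. non-Aedes); the real app may use a different architecture and more classes.

```python
import torch
import torch.nn as nn

# A minimal CNN image classifier of the kind such an app could wrap; the layer
# choices and the two-class head are illustrative assumptions only.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 2),
)
print(model(torch.randn(1, 3, 128, 128)).shape)   # -> torch.Size([1, 2])
```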
### EV-MGRFlowNet: Motion-Guided Recurrent Network for Unsupervised Event-based Optical Flow with Hybrid Motion-Compensation Loss
- **Authors:** Hao Zhuang, Xinjie Huang, Kuanxu Hou, Delei Kong, Chenming Hu, Zheng Fang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.07853
- **Pdf link:** https://arxiv.org/pdf/2305.07853
- **Abstract**
Event cameras offer promising properties, such as high temporal resolution and high dynamic range. These benefits have been utilized into many machine vision tasks, especially optical flow estimation. Currently, most existing event-based works use deep learning to estimate optical flow. However, their networks have not fully exploited prior hidden states and motion flows. Additionally, their supervision strategy has not fully leveraged the geometric constraints of event data to unlock the potential of networks. In this paper, we propose EV-MGRFlowNet, an unsupervised event-based optical flow estimation pipeline with motion-guided recurrent networks using a hybrid motion-compensation loss. First, we propose a feature-enhanced recurrent encoder network (FERE-Net) which fully utilizes prior hidden states to obtain multi-level motion features. Then, we propose a flow-guided decoder network (FGD-Net) to integrate prior motion flows. Finally, we design a hybrid motion-compensation loss (HMC-Loss) to strengthen geometric constraints for the more accurate alignment of events. Experimental results show that our method outperforms the current state-of-the-art (SOTA) method on the MVSEC dataset, with an average reduction of approximately 22.71% in average endpoint error (AEE). To our knowledge, our method ranks first among unsupervised learning-based methods.
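The geometric cue behind motion-compensation losses can be shown with a toy example: warp each event back to a reference time with a candidate flow and score the contrast of the resulting event image, which is highest when the flow aligns the events. The NumPy sketch below uses a single global flow vector and plain count images; it illustrates the principle, not the paper's hybrid HMC-Loss.

```python
import numpy as np

def motion_compensation_sharpness(events, flow, resolution=(8, 8)):
    """Warp events (x, y, t) back to t = 0 with a global flow and score the
    variance of the accumulated event-count image; correct flow aligns events
    and yields a sharper (higher-variance) image."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = np.round(x - flow[0] * t).astype(int)
    yw = np.round(y - flow[1] * t).astype(int)
    img = np.zeros(resolution)
    valid = (0 <= xw) & (xw < resolution[1]) & (0 <= yw) & (yw < resolution[0])
    np.add.at(img, (yw[valid], xw[valid]), 1.0)    # accumulate event counts
    return img.var()

# Toy stream: an edge at x = 2 moving right with velocity 2 px per unit time.
rng = np.random.default_rng(0)
t = rng.random(200)
events = np.stack([2 + 2 * t, rng.integers(0, 8, 200), t], axis=1)
print(motion_compensation_sharpness(events, flow=(2.0, 0.0)))  # correct flow: sharp
print(motion_compensation_sharpness(events, flow=(0.0, 0.0)))  # no compensation: blurred
```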
### Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering
- **Authors:** Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2305.08059
- **Pdf link:** https://arxiv.org/pdf/2305.08059
- **Abstract**
Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have focused on using the explicit semantic connections between the question and visual information especially at the event level. There is need for using such semantic connections to facilitate complex reasoning across video frames. Therefore, we propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) of the question is being focused on. We conduct experiments on a benchmark EVQA dataset - TrafficQA. Results show that our proposed approach achieves superior performance compared to previous state-of-the-art models. Our code will be made publicly available for research use.
### MV-Map: Offboard HD-Map Generation with Multi-view Consistency
- **Authors:** Ziyang Xie, Ziqi Pang, Yuxiong Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.08851
- **Pdf link:** https://arxiv.org/pdf/2305.08851
- **Abstract**
While bird's-eye-view (BEV) perception models can be useful for building high-definition maps (HD-Maps) with less human labor, their results are often unreliable and demonstrate noticeable inconsistencies in the predicted HD-Maps from different viewpoints. This is because BEV perception is typically set up in an 'onboard' manner, which restricts the computation and consequently prevents algorithms from reasoning multiple views simultaneously. This paper overcomes these limitations and advocates a more practical 'offboard' HD-Map generation setup that removes the computation constraints, based on the fact that HD-Maps are commonly reusable infrastructures built offline in data centers. To this end, we propose a novel offboard pipeline called MV-Map that capitalizes multi-view consistency and can handle an arbitrary number of frames with the key design of a 'region-centric' framework. In MV-Map, the target HD-Maps are created by aggregating all the frames of onboard predictions, weighted by the confidence scores assigned by an 'uncertainty network'. To further enhance multi-view consistency, we augment the uncertainty network with the global 3D structure optimized by a voxelized neural radiance field (Voxel-NeRF). Extensive experiments on nuScenes show that our MV-Map significantly improves the quality of HD-Maps, further highlighting the importance of offboard methods for HD-Map generation.
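The aggregation step described above, fusing many onboard per-frame predictions with confidence weights from an uncertainty network, reduces to a weighted average per BEV cell. A minimal NumPy sketch with random stand-in maps and confidences:

```python
import numpy as np

def fuse_onboard_predictions(per_frame_maps, confidences):
    """Aggregate per-frame BEV map predictions into one HD-Map estimate,
    weighting each frame by a per-cell confidence score (a stand-in for the
    uncertainty-network output described above)."""
    maps = np.stack(per_frame_maps)          # (frames, H, W) occupancy scores
    conf = np.stack(confidences)             # (frames, H, W) confidence weights
    return (maps * conf).sum(0) / (conf.sum(0) + 1e-6)

rng = np.random.default_rng(0)
frames = [rng.random((4, 4)) for _ in range(3)]
confs = [rng.random((4, 4)) for _ in range(3)]
print(fuse_onboard_predictions(frames, confs).shape)   # -> (4, 4)
```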
## Keyword: event camera
### EV-MGRFlowNet: Motion-Guided Recurrent Network for Unsupervised Event-based Optical Flow with Hybrid Motion-Compensation Loss
- **Authors:** Hao Zhuang, Xinjie Huang, Kuanxu Hou, Delei Kong, Chenming Hu, Zheng Fang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2305.07853
- **Pdf link:** https://arxiv.org/pdf/2305.07853
- **Abstract**
Event cameras offer promising properties, such as high temporal resolution and high dynamic range. These benefits have been utilized into many machine vision tasks, especially optical flow estimation. Currently, most existing event-based works use deep learning to estimate optical flow. However, their networks have not fully exploited prior hidden states and motion flows. Additionally, their supervision strategy has not fully leveraged the geometric constraints of event data to unlock the potential of networks. In this paper, we propose EV-MGRFlowNet, an unsupervised event-based optical flow estimation pipeline with motion-guided recurrent networks using a hybrid motion-compensation loss. First, we propose a feature-enhanced recurrent encoder network (FERE-Net) which fully utilizes prior hidden states to obtain multi-level motion features. Then, we propose a flow-guided decoder network (FGD-Net) to integrate prior motion flows. Finally, we design a hybrid motion-compensation loss (HMC-Loss) to strengthen geometric constraints for the more accurate alignment of events. Experimental results show that our method outperforms the current state-of-the-art (SOTA) method on the MVSEC dataset, with an average reduction of approximately 22.71% in average endpoint error (AEE). To our knowledge, our method ranks first among unsupervised learning-based methods.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
existing compressed domain classification models and that it can also yield similar accuracy results with a much higher computational efficiency as compared to the pixel domain models that are trained using fully decoded images analyzing compression techniques for computer vision authors maniratnam mandal imran khan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract compressing deep networks is highly desirable for practical use cases in computer vision applications several techniques have been explored in the literature and research has been done in finding efficient strategies for combining them for this project we aimed to explore three different basic compression techniques knowledge distillation pruning and quantization for small scale recognition tasks along with the basic methods we also test the efficacy of combining them in a sequential manner we analyze them using mnist and cifar datasets and present the results along with few observations inferred from them generative adversarial networks for spatio spectral compression of hyperspectral images authors akshara preethy byju martin hermann paul fuchs alisa walda begรผm demir subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract deep learning based image compression methods have led to high rate distortion performances compared to traditional codecs recently generative adversarial networks gans based compression models e g high fidelity compression hific have attracted great attention in the computer vision community however most of these works aim for spatial compression only and do not consider the spatio spectral redundancies observed in hyperspectral images hsis to address this problem in this paper we adapt the hific spatial compression model to perform spatio spectral compression of hsis to this end we introduce two new models i hific using squeeze and excitation se blocks denoted as hific se and ii hific with convolutions denoted as hific we analyze the effectiveness of hific se and hific in exploiting the spatio spectral redundancies with channel attention and inter dependency analysis experimental results show the efficacy of the proposed models in performing spatio spectral compression and reconstruction at reduced bitrates and higher reconstruction quality when compared to jpeg and the standard hific spatial compression model the code of the proposed models is publicly available at keyword raw is end to end learning enough for fitness activity recognition authors antoine mercier guillaume berger sunny panchal florian letsch cornelius boehm nahua kang ingo bax roland memisevic subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract end to end learning has taken hold of many computer vision tasks in particular related to still images with task specific optimization yielding very strong performance nevertheless human centric action recognition is still largely dominated by hand crafted pipelines and only individual components are replaced by neural networks that typically operate on individual frames as a testbed to study the relevance of such pipelines we present a new fully annotated video dataset of fitness activities any recognition capabilities in this domain are almost exclusively a function of human poses and their temporal dynamics so pose based solutions should perform well we show that with this labelled data end to end learning on raw pixels can compete with state of 
the art action recognition pipelines based on pose estimation we also show that end to end learning can support temporally fine grained tasks such as real time repetition counting a comprehensive survey on segment anything model for vision and beyond authors chunhui zhang li liu yawen cui guanjie huang weilin lin yiqian yang yuehong hu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract artificial intelligence ai is evolving towards artificial general intelligence which refers to the ability of an ai system to perform a wide range of tasks and exhibit a level of intelligence similar to that of a human being this is in contrast to narrow or specialized ai which is designed to perform specific tasks with a high degree of efficiency therefore it is urgent to design a general class of models which we term foundation models trained on broad data that can be adapted to various downstream tasks the recently proposed segment anything model sam has made significant progress in breaking the boundaries of segmentation greatly promoting the development of foundation models for computer vision to fully comprehend sam we conduct a survey study as the first to comprehensively review the progress of segmenting anything task for vision and beyond based on the foundation model of sam this work focuses on its applications to various tasks and data types by discussing its historical development recent progress and profound impact on broad applications we first introduce the background and terminology for foundation models including sam as well as state of the art methods contemporaneous with sam that are significant for segmenting anything task then we analyze and summarize the advantages and limitations of sam across various image processing applications including software scenes real world scenes and complex scenes importantly some insights are drawn to guide future research to develop more versatile foundation models and improve the architecture of sam we also summarize massive other amazing applications of sam in vision and beyond plip language image pre training for person representation learning authors jialong zuo changqian yu nong sang changxin gao subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract pre training has emerged as an effective technique for learning powerful person representations most existing methods have shown that pre training on pure vision large scale datasets like imagenet and luperson has achieved remarkable performance however solely relying on visual information the absence of robust explicit indicators poses a challenge for these methods to learn discriminative person representations drawing inspiration from the intrinsic fine grained attribute indicators of person descriptions we explore introducing the language modality into person representation learning to this end we propose a novel language image pre training framework for person representation learning termed plip to explicitly build fine grained cross modal associations we specifically design three pretext tasks ie semantic fused image colorization visual fused attributes prediction and vision language matching in addition due to the lack of an appropriate dataset we present a large scale person dataset named synth pedes where the stylish pedestrian attributes union captioning method is proposed to synthesize diverse textual descriptions we pre train plip on synth pedes and evaluate our model by spanning downstream tasks such 
as text based re id image based re id and person attribute recognition extensive experiments demonstrate that our model not only significantly improves existing methods on all these tasks but also shows great ability in the few shot and domain generalization settings the code dataset and weights will be released at url artificial intelligence to advance earth observation a perspective authors devis tuia konrad schindler begรผm demir gustau camps valls xiao xiang zhu mrinalini kochupillai saลกo dลพeroski jan n van rijn holger h hoos fabio del frate mihai datcu jorge arnulfo quianรฉ ruiz volker markl bertrand le saux rochelle schneider subjects computer vision and pattern recognition cs cv image and video processing eess iv applications stat ap arxiv link pdf link abstract earth observation eo is a prime instrument for monitoring land and ocean processes studying the dynamics at work and taking the pulse of our planet this article gives a bird s eye view of the essential scientific tools and approaches informing and supporting the transition from raw eo data to usable eo based information the promises as well as the current challenges of these developments are highlighted under dedicated sections specifically we cover the impact of i computer vision ii machine learning iii advanced processing and computing iv knowledge based ai v explainable ai and causal inference vi physics aware models vii user centric approaches and viii the much needed discussion of ethical and societal issues related to the massive use of ml technologies in eo keyword raw image there is no result | 1 |
323,483 | 9,855,672,790 | IssuesEvent | 2019-06-19 20:01:26 | cBioPortal/cbioportal | https://api.github.com/repos/cBioPortal/cbioportal | reopened | Create group from Upset diagram | frontend group-comparison priority | The user should be able to select a bar and create a group, similar to Venn diagram | 1.0 | Create group from Upset diagram - The user should be able to select a bar and create a group, similar to Venn diagram | non_process | create group from upset diagram the user should be able to select a bar and create a group similar to venn diagram | 0 |
34,818 | 12,301,060,238 | IssuesEvent | 2020-05-11 14:52:37 | TIBCOSoftware/TCSTK-Angular | https://api.github.com/repos/TIBCOSoftware/TCSTK-Angular | closed | WS-2019-0381 (Medium) detected in kind-of-6.0.2.tgz | security vulnerability | ## WS-2019-0381 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary>
<p>Get the native type of a value.</p>
<p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/TCSTK-Angular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/TCSTK-Angular/node_modules/kind-of/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.803.25.tgz (Root Library)
- sass-loader-7.2.0.tgz
- clone-deep-4.0.1.tgz
- :x: **kind-of-6.0.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of kind-of 6.x prior to 6.0.3 are vulnerable to a Validation Bypass. A maliciously crafted object can alter the result of the type check, allowing attackers to bypass the type checking validation.
<p>Publish Date: 2020-03-18
<p>URL: <a href=https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8>WS-2019-0381</a></p>
</p>
</details>
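As a rough illustration of this class of bypass (a hypothetical sketch, not kind-of's actual source; the function and object names below are invented), a type check that trusts an attacker-controllable `constructor.name` property can be steered into reporting the wrong type:

```js
// Hypothetical type check that trusts constructor.name -- NOT kind-of's real code,
// just the general failure mode behind this advisory.
function naiveKindOf(val) {
  if (val === null) return 'null';
  if (typeof val !== 'object') return typeof val;
  // Vulnerable pattern: an own "constructor" property shadows the real prototype info.
  return val.constructor ? String(val.constructor.name).toLowerCase() : 'object';
}

const crafted = JSON.parse('{"constructor": {"name": "Array"}}');
console.log(naiveKindOf(crafted));   // "array" -- validation expecting an array is fooled
console.log(Array.isArray(crafted)); // false
```

The suggested fix below is simply to move to a patched version.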
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8">https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8</a></p>
<p>Release Date: 2020-03-18</p>
<p>Fix Resolution: kind-of - 6.0.3</p>
</p>
</details>
<p></p>
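Note that kind-of is pulled in transitively here (via build-angular → sass-loader → clone-deep), so the upgrade may need to be forced rather than installed directly. A hypothetical `package.json` fragment using Yarn's `resolutions` field (version taken from the fix resolution above) could look like:

```json
{
  "resolutions": {
    "kind-of": "6.0.3"
  }
}
```

Recent npm versions offer the analogous `overrides` field; either way, regenerate the lockfile and re-run the scan to confirm the alert clears.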
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"kind-of","packageVersion":"6.0.2","isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.803.25;sass-loader:7.2.0;clone-deep:4.0.1;kind-of:6.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"kind-of - 6.0.3"}],"vulnerabilityIdentifier":"WS-2019-0381","vulnerabilityDetails":"Versions of kind-of 6.x prior to 6.0.3 are vulnerable to a Validation Bypass. A maliciously crafted object can alter the result of the type check, allowing attackers to bypass the type checking validation.","vulnerabilityUrl":"https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | WS-2019-0381 (Medium) detected in kind-of-6.0.2.tgz - ## WS-2019-0381 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary>
<p>Get the native type of a value.</p>
<p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/TCSTK-Angular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/TCSTK-Angular/node_modules/kind-of/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.803.25.tgz (Root Library)
- sass-loader-7.2.0.tgz
- clone-deep-4.0.1.tgz
- :x: **kind-of-6.0.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of kind-of 6.x prior to 6.0.3 are vulnerable to a Validation Bypass. A maliciously crafted object can alter the result of the type check, allowing attackers to bypass the type checking validation.
<p>Publish Date: 2020-03-18
<p>URL: <a href=https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8>WS-2019-0381</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8">https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8</a></p>
<p>Release Date: 2020-03-18</p>
<p>Fix Resolution: kind-of - 6.0.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"kind-of","packageVersion":"6.0.2","isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.803.25;sass-loader:7.2.0;clone-deep:4.0.1;kind-of:6.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"kind-of - 6.0.3"}],"vulnerabilityIdentifier":"WS-2019-0381","vulnerabilityDetails":"Versions of kind-of 6.x prior to 6.0.3 are vulnerable to a Validation Bypass. A maliciously crafted object can alter the result of the type check, allowing attackers to bypass the type checking validation.","vulnerabilityUrl":"https://github.com/jonschlinkert/kind-of/commit/975c13a7cfaf25d811475823824af3a9c04b0ba8","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_process | ws medium detected in kind of tgz ws medium severity vulnerability vulnerable library kind of tgz get the native type of a value library home page a href path to dependency file tmp ws scm tcstk angular package json path to vulnerable library tmp ws scm tcstk angular node modules kind of package json dependency hierarchy build angular tgz root library sass loader tgz clone deep tgz x kind of tgz vulnerable library vulnerability details versions of kind of x prior to are vulnerable to a validation bypass a maliciously crafted object can alter the result of the type check allowing attackers to bypass the type checking validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution kind of isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails versions of kind of x prior to are vulnerable to a validation bypass a maliciously crafted object can alter the result of the type check allowing attackers to bypass the type checking validation vulnerabilityurl | 0 |
825,708 | 31,467,660,168 | IssuesEvent | 2023-08-30 04:18:22 | prometheus/prometheus | https://api.github.com/repos/prometheus/prometheus | closed | Increase produces different results for different range | priority/Pmaybe component/promql kind/more-info-needed | <details>
<summary>Raw output from <b>telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}</b>
</summary>
{
"state": "Done",
"series": [
{
"meta": {
"preferredVisualisationType": "table"
},
"refId": "B",
"length": 1,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633927683000
],
"state": null
},
{
"name": "__name__",
"config": {
"filterable": true
},
"type": "string",
"values": [
"telliot_trader_eth_converted"
],
"state": {
"displayName": "__name__"
}
},
{
"name": "instance",
"config": {
"filterable": true
},
"type": "string",
"values": [
"10.244.0.57:9090"
],
"state": {
"displayName": "instance"
}
},
{
"name": "job",
"config": {
"filterable": true
},
"type": "string",
"values": [
"report-master"
],
"state": {
"displayName": "job"
}
},
{
"name": "reporter",
"config": {
"filterable": true
},
"type": "string",
"values": [
"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
],
"state": {
"displayName": "reporter"
}
},
{
"name": "Value #B",
"type": "number",
"config": {},
"values": [
4.1986
],
"state": {
"displayName": "Value #B"
}
}
]
},
{
"meta": {
"preferredVisualisationType": "graph"
},
"refId": "B",
"length": 56,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633926855000,
1633926870000,
1633926885000,
1633926900000,
1633926915000,
1633926930000,
1633926945000,
1633926960000,
1633926975000,
1633926990000,
1633927005000,
1633927020000,
1633927035000,
1633927050000,
1633927065000,
1633927080000,
1633927095000,
1633927110000,
1633927125000,
1633927140000,
1633927155000,
1633927170000,
1633927185000,
1633927200000,
1633927215000,
1633927230000,
1633927245000,
1633927260000,
1633927275000,
1633927290000,
1633927305000,
1633927320000,
1633927335000,
1633927350000,
1633927365000,
1633927380000,
1633927395000,
1633927410000,
1633927425000,
1633927440000,
1633927455000,
1633927470000,
1633927485000,
1633927500000,
1633927515000,
1633927530000,
1633927545000,
1633927560000,
1633927575000,
1633927590000,
1633927605000,
1633927620000,
1633927635000,
1633927650000,
1633927665000,
1633927680000
],
"state": null
},
{
"name": "Value",
"type": "number",
"config": {
"displayNameFromDS": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
},
"labels": {
"instance": "10.244.0.57:9090",
"job": "report-master",
"reporter": "0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
},
"values": [
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986
],
"state": {
"calcs": {
"sum": 231.10809999999998,
"max": 4.1986,
"min": 4.0241,
"logmin": 4.0241,
"mean": 4.126930357142856,
"last": 4.1986,
"first": 4.0241,
"lastNotNull": 4.1986,
"firstNotNull": 4.0241,
"count": 56,
"nonNullCount": 56,
"allIsNull": false,
"allIsZero": false,
"range": 0.1745000000000001,
"diff": 0.1745000000000001,
"delta": 0.1745000000000001,
"step": 0,
"diffperc": 0.04336373350562862,
"previousDeltaUp": true
},
"displayName": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
}
],
"name": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
],
"annotations": [],
"request": {
"app": "explore",
"dashboardId": 0,
"timezone": "utc",
"startTime": 1633952908797,
"interval": "15s",
"intervalMs": 15000,
"panelId": "Q-cd81dace-291d-407b-9dcc-1f1b2783fe38-0Q-b0bb7a99-e20a-4548-bd68-beac321c0a0e-1Q-2f5d8626-b814-4328-baad-7d27c8c0f46f-2Q-7f89b16e-1772-4f07-8e33-533f08659cae-3Q-c7017f11-7c17-42a4-bfb5-e80ed4ad13c1-4Q-bbff9114-1bb2-4bf1-9ad0-18867a4e0e28-5",
"targets": [
{
"refId": "A",
"key": "Q-cd81dace-291d-407b-9dcc-1f1b2783fe38-0",
"exemplar": true,
"expr": "increase ( last_over_time(telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m:]) - on(reporter) last_over_time(telliot_trackerTellor_submit_cost[5m:])[5m:]) ",
"hide": true
},
{
"refId": "B",
"key": "Q-b0bb7a99-e20a-4548-bd68-beac321c0a0e-1",
"exemplar": true,
"expr": "telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}",
"hide": false
},
{
"refId": "C",
"key": "Q-2f5d8626-b814-4328-baad-7d27c8c0f46f-2",
"exemplar": true,
"expr": "telliot_trackerTellor_submit_cost{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}",
"hide": true
},
{
"refId": "D",
"key": "Q-7f89b16e-1772-4f07-8e33-533f08659cae-3",
"exemplar": true,
"expr": "(increase(telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])>0) - on(reporter) (increase(telliot_trackerTellor_submit_cost{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])>0)",
"hide": true
},
{
"refId": "E",
"key": "Q-c7017f11-7c17-42a4-bfb5-e80ed4ad13c1-4",
"exemplar": true,
"expr": "increase(telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])",
"hide": true
},
{
"refId": "F",
"key": "Q-bbff9114-1bb2-4bf1-9ad0-18867a4e0e28-5",
"exemplar": true,
"expr": "increase(telliot_trackerTellor_submit_cost{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])",
"hide": true
}
],
"range": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z",
"raw": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z"
}
},
"requestId": "explore_left",
"rangeRaw": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z"
},
"scopedVars": {
"__interval": {
"text": "15s",
"value": "15s"
},
"__interval_ms": {
"text": 15000,
"value": 15000
}
},
"maxDataPoints": 1860,
"liveStreaming": false,
"endTime": 1633952909208
},
"timeRange": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z",
"raw": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z"
}
},
"timings": {
"dataProcessingTime": 0.04500001668930054
},
"graphFrames": [
{
"meta": {
"preferredVisualisationType": "graph"
},
"refId": "B",
"length": 56,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633926855000,
1633926870000,
1633926885000,
1633926900000,
1633926915000,
1633926930000,
1633926945000,
1633926960000,
1633926975000,
1633926990000,
1633927005000,
1633927020000,
1633927035000,
1633927050000,
1633927065000,
1633927080000,
1633927095000,
1633927110000,
1633927125000,
1633927140000,
1633927155000,
1633927170000,
1633927185000,
1633927200000,
1633927215000,
1633927230000,
1633927245000,
1633927260000,
1633927275000,
1633927290000,
1633927305000,
1633927320000,
1633927335000,
1633927350000,
1633927365000,
1633927380000,
1633927395000,
1633927410000,
1633927425000,
1633927440000,
1633927455000,
1633927470000,
1633927485000,
1633927500000,
1633927515000,
1633927530000,
1633927545000,
1633927560000,
1633927575000,
1633927590000,
1633927605000,
1633927620000,
1633927635000,
1633927650000,
1633927665000,
1633927680000
],
"state": null
},
{
"name": "Value",
"type": "number",
"config": {
"displayNameFromDS": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
},
"labels": {
"instance": "10.244.0.57:9090",
"job": "report-master",
"reporter": "0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
},
"values": [
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986
],
"state": {
"calcs": {
"sum": 231.10809999999998,
"max": 4.1986,
"min": 4.0241,
"logmin": 4.0241,
"mean": 4.126930357142856,
"last": 4.1986,
"first": 4.0241,
"lastNotNull": 4.1986,
"firstNotNull": 4.0241,
"count": 56,
"nonNullCount": 56,
"allIsNull": false,
"allIsZero": false,
"range": 0.1745000000000001,
"diff": 0.1745000000000001,
"delta": 0.1745000000000001,
"step": 0,
"diffperc": 0.04336373350562862,
"previousDeltaUp": true
},
"displayName": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
}
],
"name": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
],
"tableFrames": [
{
"meta": {
"preferredVisualisationType": "table"
},
"refId": "B",
"length": 1,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633927683000
],
"state": null
},
{
"name": "__name__",
"config": {
"filterable": true
},
"type": "string",
"values": [
"telliot_trader_eth_converted"
],
"state": {
"displayName": "__name__"
}
},
{
"name": "instance",
"config": {
"filterable": true
},
"type": "string",
"values": [
"10.244.0.57:9090"
],
"state": {
"displayName": "instance"
}
},
{
"name": "job",
"config": {
"filterable": true
},
"type": "string",
"values": [
"report-master"
],
"state": {
"displayName": "job"
}
},
{
"name": "reporter",
"config": {
"filterable": true
},
"type": "string",
"values": [
"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
],
"state": {
"displayName": "reporter"
}
},
{
"name": "Value #B",
"type": "number",
"config": {},
"values": [
4.1986
],
"state": {
"displayName": "Value #B"
}
}
]
}
],
"logsFrames": [],
"traceFrames": [],
"nodeGraphFrames": [],
"graphResult": [
{
"meta": {
"preferredVisualisationType": "graph"
},
"refId": "B",
"length": 56,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633926855000,
1633926870000,
1633926885000,
1633926900000,
1633926915000,
1633926930000,
1633926945000,
1633926960000,
1633926975000,
1633926990000,
1633927005000,
1633927020000,
1633927035000,
1633927050000,
1633927065000,
1633927080000,
1633927095000,
1633927110000,
1633927125000,
1633927140000,
1633927155000,
1633927170000,
1633927185000,
1633927200000,
1633927215000,
1633927230000,
1633927245000,
1633927260000,
1633927275000,
1633927290000,
1633927305000,
1633927320000,
1633927335000,
1633927350000,
1633927365000,
1633927380000,
1633927395000,
1633927410000,
1633927425000,
1633927440000,
1633927455000,
1633927470000,
1633927485000,
1633927500000,
1633927515000,
1633927530000,
1633927545000,
1633927560000,
1633927575000,
1633927590000,
1633927605000,
1633927620000,
1633927635000,
1633927650000,
1633927665000,
1633927680000
],
"state": null
},
{
"name": "Value",
"type": "number",
"config": {
"displayNameFromDS": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
},
"labels": {
"instance": "10.244.0.57:9090",
"job": "report-master",
"reporter": "0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
},
"values": [
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986
],
"state": {
"calcs": {
"sum": 231.10809999999998,
"max": 4.1986,
"min": 4.0241,
"logmin": 4.0241,
"mean": 4.126930357142856,
"last": 4.1986,
"first": 4.0241,
"lastNotNull": 4.1986,
"firstNotNull": 4.0241,
"count": 56,
"nonNullCount": 56,
"allIsNull": false,
"allIsZero": false,
"range": 0.1745000000000001,
"diff": 0.1745000000000001,
"delta": 0.1745000000000001,
"step": 0,
"diffperc": 0.04336373350562862,
"previousDeltaUp": true
},
"displayName": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
}
],
"name": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
],
"tableResult": {
"meta": {
"preferredVisualisationType": "table"
},
"refId": "B",
"length": 1,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633927683000
],
"state": null
},
{
"name": "__name__",
"config": {
"filterable": true
},
"type": "string",
"values": [
"telliot_trader_eth_converted"
],
"state": {
"displayName": "__name__"
}
},
{
"name": "instance",
"config": {
"filterable": true
},
"type": "string",
"values": [
"10.244.0.57:9090"
],
"state": {
"displayName": "instance"
}
},
{
"name": "job",
"config": {
"filterable": true
},
"type": "string",
"values": [
"report-master"
],
"state": {
"displayName": "job"
}
},
{
"name": "reporter",
"config": {
"filterable": true
},
"type": "string",
"values": [
"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
],
"state": {
"displayName": "reporter"
}
},
{
"name": "Value #B",
"type": "number",
"config": {},
"values": [
4.1986
],
"state": {
"displayName": "Value #B"
}
}
]
},
"logsResult": null
}
</details>
```
increase(telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}[20m])
```
produces the correct result of 0.175
```
increase(telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}[10m])
```
produces the incorrect result of 0.176
```
increase(telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}[5m])
```
produces the incorrect result of 0.177
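One observation (offered as a likely explanation, not a confirmed diagnosis): `increase()` extrapolates the growth seen between the first and last samples in the window out to the full window length. With a 5s scrape interval, a single 0.1745 step is scaled by roughly 300/295 over 5m (≈0.177), 600/595 over 10m (≈0.176) and 1200/1195 over 20m (≈0.175), which matches the values above. A sketch of a workaround that differences the raw samples instead of extrapolating (the 10m offset is only illustrative):

```promql
telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}
  - telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"} offset 10m
```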
* Prometheus version:
v2.28.1
* Alertmanager version:
insert output of `alertmanager --version` here (if relevant to the issue)
* Prometheus configuration file:
```
scrape_interval: 5s
```
Happy to give access to the prometheus server if needed. Ping me on slack | 1.0 | Increase produces different results for different range - <details>
<summary>Raw output from <b>telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}</b>
</summary>
{
"state": "Done",
"series": [
{
"meta": {
"preferredVisualisationType": "table"
},
"refId": "B",
"length": 1,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633927683000
],
"state": null
},
{
"name": "__name__",
"config": {
"filterable": true
},
"type": "string",
"values": [
"telliot_trader_eth_converted"
],
"state": {
"displayName": "__name__"
}
},
{
"name": "instance",
"config": {
"filterable": true
},
"type": "string",
"values": [
"10.244.0.57:9090"
],
"state": {
"displayName": "instance"
}
},
{
"name": "job",
"config": {
"filterable": true
},
"type": "string",
"values": [
"report-master"
],
"state": {
"displayName": "job"
}
},
{
"name": "reporter",
"config": {
"filterable": true
},
"type": "string",
"values": [
"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
],
"state": {
"displayName": "reporter"
}
},
{
"name": "Value #B",
"type": "number",
"config": {},
"values": [
4.1986
],
"state": {
"displayName": "Value #B"
}
}
]
},
{
"meta": {
"preferredVisualisationType": "graph"
},
"refId": "B",
"length": 56,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633926855000,
1633926870000,
1633926885000,
1633926900000,
1633926915000,
1633926930000,
1633926945000,
1633926960000,
1633926975000,
1633926990000,
1633927005000,
1633927020000,
1633927035000,
1633927050000,
1633927065000,
1633927080000,
1633927095000,
1633927110000,
1633927125000,
1633927140000,
1633927155000,
1633927170000,
1633927185000,
1633927200000,
1633927215000,
1633927230000,
1633927245000,
1633927260000,
1633927275000,
1633927290000,
1633927305000,
1633927320000,
1633927335000,
1633927350000,
1633927365000,
1633927380000,
1633927395000,
1633927410000,
1633927425000,
1633927440000,
1633927455000,
1633927470000,
1633927485000,
1633927500000,
1633927515000,
1633927530000,
1633927545000,
1633927560000,
1633927575000,
1633927590000,
1633927605000,
1633927620000,
1633927635000,
1633927650000,
1633927665000,
1633927680000
],
"state": null
},
{
"name": "Value",
"type": "number",
"config": {
"displayNameFromDS": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
},
"labels": {
"instance": "10.244.0.57:9090",
"job": "report-master",
"reporter": "0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
},
"values": [
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986
],
"state": {
"calcs": {
"sum": 231.10809999999998,
"max": 4.1986,
"min": 4.0241,
"logmin": 4.0241,
"mean": 4.126930357142856,
"last": 4.1986,
"first": 4.0241,
"lastNotNull": 4.1986,
"firstNotNull": 4.0241,
"count": 56,
"nonNullCount": 56,
"allIsNull": false,
"allIsZero": false,
"range": 0.1745000000000001,
"diff": 0.1745000000000001,
"delta": 0.1745000000000001,
"step": 0,
"diffperc": 0.04336373350562862,
"previousDeltaUp": true
},
"displayName": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
}
],
"name": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
],
"annotations": [],
"request": {
"app": "explore",
"dashboardId": 0,
"timezone": "utc",
"startTime": 1633952908797,
"interval": "15s",
"intervalMs": 15000,
"panelId": "Q-cd81dace-291d-407b-9dcc-1f1b2783fe38-0Q-b0bb7a99-e20a-4548-bd68-beac321c0a0e-1Q-2f5d8626-b814-4328-baad-7d27c8c0f46f-2Q-7f89b16e-1772-4f07-8e33-533f08659cae-3Q-c7017f11-7c17-42a4-bfb5-e80ed4ad13c1-4Q-bbff9114-1bb2-4bf1-9ad0-18867a4e0e28-5",
"targets": [
{
"refId": "A",
"key": "Q-cd81dace-291d-407b-9dcc-1f1b2783fe38-0",
"exemplar": true,
"expr": "increase ( last_over_time(telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m:]) - on(reporter) last_over_time(telliot_trackerTellor_submit_cost[5m:])[5m:]) ",
"hide": true
},
{
"refId": "B",
"key": "Q-b0bb7a99-e20a-4548-bd68-beac321c0a0e-1",
"exemplar": true,
"expr": "telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}",
"hide": false
},
{
"refId": "C",
"key": "Q-2f5d8626-b814-4328-baad-7d27c8c0f46f-2",
"exemplar": true,
"expr": "telliot_trackerTellor_submit_cost{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}",
"hide": true
},
{
"refId": "D",
"key": "Q-7f89b16e-1772-4f07-8e33-533f08659cae-3",
"exemplar": true,
"expr": "(increase(telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])>0) - on(reporter) (increase(telliot_trackerTellor_submit_cost{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])>0)",
"hide": true
},
{
"refId": "E",
"key": "Q-c7017f11-7c17-42a4-bfb5-e80ed4ad13c1-4",
"exemplar": true,
"expr": "increase(telliot_trader_eth_converted{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])",
"hide": true
},
{
"refId": "F",
"key": "Q-bbff9114-1bb2-4bf1-9ad0-18867a4e0e28-5",
"exemplar": true,
"expr": "increase(telliot_trackerTellor_submit_cost{reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}[5m])",
"hide": true
}
],
"range": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z",
"raw": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z"
}
},
"requestId": "explore_left",
"rangeRaw": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z"
},
"scopedVars": {
"__interval": {
"text": "15s",
"value": "15s"
},
"__interval_ms": {
"text": 15000,
"value": 15000
}
},
"maxDataPoints": 1860,
"liveStreaming": false,
"endTime": 1633952909208
},
"timeRange": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z",
"raw": {
"from": "2021-10-11T04:34:26.070Z",
"to": "2021-10-11T04:48:02.098Z"
}
},
"timings": {
"dataProcessingTime": 0.04500001668930054
},
"graphFrames": [
{
"meta": {
"preferredVisualisationType": "graph"
},
"refId": "B",
"length": 56,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633926855000,
1633926870000,
1633926885000,
1633926900000,
1633926915000,
1633926930000,
1633926945000,
1633926960000,
1633926975000,
1633926990000,
1633927005000,
1633927020000,
1633927035000,
1633927050000,
1633927065000,
1633927080000,
1633927095000,
1633927110000,
1633927125000,
1633927140000,
1633927155000,
1633927170000,
1633927185000,
1633927200000,
1633927215000,
1633927230000,
1633927245000,
1633927260000,
1633927275000,
1633927290000,
1633927305000,
1633927320000,
1633927335000,
1633927350000,
1633927365000,
1633927380000,
1633927395000,
1633927410000,
1633927425000,
1633927440000,
1633927455000,
1633927470000,
1633927485000,
1633927500000,
1633927515000,
1633927530000,
1633927545000,
1633927560000,
1633927575000,
1633927590000,
1633927605000,
1633927620000,
1633927635000,
1633927650000,
1633927665000,
1633927680000
],
"state": null
},
{
"name": "Value",
"type": "number",
"config": {
"displayNameFromDS": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
},
"labels": {
"instance": "10.244.0.57:9090",
"job": "report-master",
"reporter": "0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
},
"values": [
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986
],
"state": {
"calcs": {
"sum": 231.10809999999998,
"max": 4.1986,
"min": 4.0241,
"logmin": 4.0241,
"mean": 4.126930357142856,
"last": 4.1986,
"first": 4.0241,
"lastNotNull": 4.1986,
"firstNotNull": 4.0241,
"count": 56,
"nonNullCount": 56,
"allIsNull": false,
"allIsZero": false,
"range": 0.1745000000000001,
"diff": 0.1745000000000001,
"delta": 0.1745000000000001,
"step": 0,
"diffperc": 0.04336373350562862,
"previousDeltaUp": true
},
"displayName": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
}
],
"name": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
],
"tableFrames": [
{
"meta": {
"preferredVisualisationType": "table"
},
"refId": "B",
"length": 1,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633927683000
],
"state": null
},
{
"name": "__name__",
"config": {
"filterable": true
},
"type": "string",
"values": [
"telliot_trader_eth_converted"
],
"state": {
"displayName": "__name__"
}
},
{
"name": "instance",
"config": {
"filterable": true
},
"type": "string",
"values": [
"10.244.0.57:9090"
],
"state": {
"displayName": "instance"
}
},
{
"name": "job",
"config": {
"filterable": true
},
"type": "string",
"values": [
"report-master"
],
"state": {
"displayName": "job"
}
},
{
"name": "reporter",
"config": {
"filterable": true
},
"type": "string",
"values": [
"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
],
"state": {
"displayName": "reporter"
}
},
{
"name": "Value #B",
"type": "number",
"config": {},
"values": [
4.1986
],
"state": {
"displayName": "Value #B"
}
}
]
}
],
"logsFrames": [],
"traceFrames": [],
"nodeGraphFrames": [],
"graphResult": [
{
"meta": {
"preferredVisualisationType": "graph"
},
"refId": "B",
"length": 56,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633926855000,
1633926870000,
1633926885000,
1633926900000,
1633926915000,
1633926930000,
1633926945000,
1633926960000,
1633926975000,
1633926990000,
1633927005000,
1633927020000,
1633927035000,
1633927050000,
1633927065000,
1633927080000,
1633927095000,
1633927110000,
1633927125000,
1633927140000,
1633927155000,
1633927170000,
1633927185000,
1633927200000,
1633927215000,
1633927230000,
1633927245000,
1633927260000,
1633927275000,
1633927290000,
1633927305000,
1633927320000,
1633927335000,
1633927350000,
1633927365000,
1633927380000,
1633927395000,
1633927410000,
1633927425000,
1633927440000,
1633927455000,
1633927470000,
1633927485000,
1633927500000,
1633927515000,
1633927530000,
1633927545000,
1633927560000,
1633927575000,
1633927590000,
1633927605000,
1633927620000,
1633927635000,
1633927650000,
1633927665000,
1633927680000
],
"state": null
},
{
"name": "Value",
"type": "number",
"config": {
"displayNameFromDS": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
},
"labels": {
"instance": "10.244.0.57:9090",
"job": "report-master",
"reporter": "0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
},
"values": [
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.0241,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986,
4.1986
],
"state": {
"calcs": {
"sum": 231.10809999999998,
"max": 4.1986,
"min": 4.0241,
"logmin": 4.0241,
"mean": 4.126930357142856,
"last": 4.1986,
"first": 4.0241,
"lastNotNull": 4.1986,
"firstNotNull": 4.0241,
"count": 56,
"nonNullCount": 56,
"allIsNull": false,
"allIsZero": false,
"range": 0.1745000000000001,
"diff": 0.1745000000000001,
"delta": 0.1745000000000001,
"step": 0,
"diffperc": 0.04336373350562862,
"previousDeltaUp": true
},
"displayName": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
}
],
"name": "telliot_trader_eth_converted{instance=\"10.244.0.57:9090\", job=\"report-master\", reporter=\"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF\"}"
}
],
"tableResult": {
"meta": {
"preferredVisualisationType": "table"
},
"refId": "B",
"length": 1,
"fields": [
{
"name": "Time",
"type": "time",
"config": {},
"values": [
1633927683000
],
"state": null
},
{
"name": "__name__",
"config": {
"filterable": true
},
"type": "string",
"values": [
"telliot_trader_eth_converted"
],
"state": {
"displayName": "__name__"
}
},
{
"name": "instance",
"config": {
"filterable": true
},
"type": "string",
"values": [
"10.244.0.57:9090"
],
"state": {
"displayName": "instance"
}
},
{
"name": "job",
"config": {
"filterable": true
},
"type": "string",
"values": [
"report-master"
],
"state": {
"displayName": "job"
}
},
{
"name": "reporter",
"config": {
"filterable": true
},
"type": "string",
"values": [
"0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"
],
"state": {
"displayName": "reporter"
}
},
{
"name": "Value #B",
"type": "number",
"config": {},
"values": [
4.1986
],
"state": {
"displayName": "Value #B"
}
}
]
},
"logsResult": null
}
</details>
```
increase(telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}[20m])
```
produces the correct result of 0.175
```
increase(telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}[10m])
```
produces the incorrect result of 0.176
```
increase(telliot_trader_eth_converted{reporter="0xDD6D1C35518fc955BBBeb52C5c1f5Fb4E16D7EAF"}[5m])
```
produces the incorrect result of 0.177
* Prometheus version:
v2.28.1
* Alertmanager version:
insert output of `alertmanager --version` here (if relevant to the issue)
* Prometheus configuration file:
```
scrape_interval: 5s
```
Happy to give access to the prometheus server if needed. Ping me on slack | non_process | increase produces different results for different range raw output from telliot trader eth converted reporter state done series meta preferredvisualisationtype table refid b length fields name time type time config values state null name name config filterable true type string values telliot trader eth converted state displayname name name instance config filterable true type string values state displayname instance name job config filterable true type string values report master state displayname job name reporter config filterable true type string values state displayname reporter name value b type number config values state displayname value b meta preferredvisualisationtype graph refid b length fields name time type time config values state null name value type number config displaynamefromds telliot trader eth converted instance job report master reporter labels instance job report master reporter values state calcs sum max min logmin mean last first lastnotnull firstnotnull count nonnullcount allisnull false alliszero false range diff delta step diffperc previousdeltaup true displayname telliot trader eth converted instance job report master reporter name telliot trader eth converted instance job report master reporter annotations request app explore dashboardid timezone utc starttime interval intervalms panelid q baad targets refid a key q exemplar true expr increase last over time telliot trader eth converted reporter on reporter last over time telliot trackertellor submit cost hide true refid b key q exemplar true expr telliot trader eth converted reporter hide false refid c key q baad exemplar true expr telliot trackertellor submit cost reporter hide true refid d key q exemplar true expr increase telliot trader eth converted reporter on reporter increase telliot trackertellor submit cost reporter hide true refid e key q exemplar true expr increase telliot trader eth converted reporter hide true refid f key q exemplar true expr increase telliot trackertellor submit cost reporter hide true range from to raw from to requestid explore left rangeraw from to scopedvars interval text value interval ms text value maxdatapoints livestreaming false endtime timerange from to raw from to timings dataprocessingtime graphframes meta preferredvisualisationtype graph refid b length fields name time type time config values state null name value type number config displaynamefromds telliot trader eth converted instance job report master reporter labels instance job report master reporter values state calcs sum max min logmin mean last first lastnotnull firstnotnull count nonnullcount allisnull false alliszero false range diff delta step diffperc previousdeltaup true displayname telliot trader eth converted instance job report master reporter name telliot trader eth converted instance job report master reporter tableframes meta preferredvisualisationtype table refid b length fields name time type time config values state null name name config filterable true type string values telliot trader eth converted state displayname name name instance config filterable true type string values state displayname instance name job config filterable true type string values report master state displayname job name reporter config filterable true type string values state displayname reporter name value b type number config values state displayname value b logsframes traceframes nodegraphframes graphresult meta 
preferredvisualisationtype graph refid b length fields name time type time config values state null name value type number config displaynamefromds telliot trader eth converted instance job report master reporter labels instance job report master reporter values state calcs sum max min logmin mean last first lastnotnull firstnotnull count nonnullcount allisnull false alliszero false range diff delta step diffperc previousdeltaup true displayname telliot trader eth converted instance job report master reporter name telliot trader eth converted instance job report master reporter tableresult meta preferredvisualisationtype table refid b length fields name time type time config values state null name name config filterable true type string values telliot trader eth converted state displayname name name instance config filterable true type string values state displayname instance name job config filterable true type string values report master state displayname job name reporter config filterable true type string values state displayname reporter name value b type number config values state displayname value b logsresult null increase telliot trader eth converted reporter produces the correct result of increase telliot trader eth converted reporter produces the incorrect result of increase telliot trader eth converted reporter produces the incorrect result of prometheus version alertmanager version insert output of alertmanager version here if relevant to the issue prometheus configuration file scrape interval happy to give access to the prometheus server if needed ping me on slack | 0 |
108,164 | 13,562,547,979 | IssuesEvent | 2020-09-18 07:04:40 | unicode-org/icu4x | https://api.github.com/repos/unicode-org/icu4x | opened | Convinience macro naming scheme | A-design discuss | During review process of https://github.com/unicode-org/icu4x/pull/220 @sffc suggested changing the name of macros like `language!()`, `script!()`, `region!()`, `variant!()` and `langid!()` to `icu_language!()`, `icu_script!()`, `icu_region!()`, `icu_variant!()` and `icu_langid!()`.
The rationale given there is that in theory `language` can mean multiple things and it may be confusing for the user whether it is a Unicode Language Identifier, or maybe a programming language, or something yet different.
My position is that if someone called `use icu_locale::macros::language;` they are almost certainly going to use that macro this way, and it's very unlikely they'll have a conflict with another proc macro named `language`.
If they do, they still can `use icu_locale::macros;` and then call `macros::language!()` or even do `use icu_locale::macros::language as icu_language;` if needed.
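To make the aliasing option concrete, here is a self-contained Rust sketch of that rename-on-import pattern; the `language!` macro body and the `"EN"` argument are stand-ins invented for illustration, not the real icu_locale implementation.

```rust
// Stand-in macro for illustration only; the real icu_locale macro differs.
#[macro_export]
macro_rules! language {
    ($s:literal) => {
        $s.to_ascii_lowercase() // pretend this parses/normalizes a language subtag
    };
}

// A caller whose scope already has a different `language!` can rename on import:
use crate::language as icu_language;

fn main() {
    let lang = icu_language!("EN");
    assert_eq!(lang, "en");
}
```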
I'm not sure how to resolve this but I'd like to hear what other people think first :) | 1.0 | Convinience macro naming scheme - During review process of https://github.com/unicode-org/icu4x/pull/220 @sffc suggested changing the name of macros like `language!()`, `script!()`, `region!()`, `variant!()` and `langid!()` to `icu_language!()`, `icu_script!()`, `icu_region!()`, `icu_variant!()` and `icu_langid!()`.
The rationale given there is that in theory `language` can mean multiple things and it may be confusing for the user whether it is a Unicode Language Identifier, or maybe a programming language, or something yet different.
My position is that if someone called `use icu_locale::macros::language;` they are almost certainly going to use that macro this way, and it's very unlikely they'll have a conflict with another proc macro named `language`.
If they do, they still can `use icu_locale::macros;` and then call `macros::language!()` or even do `use icu_locale::macros::language as icu_language;` if needed.
I'm not sure how to resolve this but I'd like to hear what other people think first :) | non_process | convinience macro naming scheme during review process of sffc suggested changing the name of macros like language script region variant and langid to icu language icu script icu region icu variant and icu langid the rationale given there is that in theory language can mean multiple things and it may be confusing for the user whether it is a unicode language identifier or maybe a programming language or something yet different my position is that if someone called use icu locale macros langauge they are almost for certain going to use that macro this way and it s very unlikely they ll have a conflict with another process macro named language if they do they still can use icu locale macros and then call macros language or even do use icu locale macros language as icu language if needed i m not sure how to resolve this but i d like to hear what other people think first | 0 |
17,193 | 3,600,144,822 | IssuesEvent | 2016-02-03 03:17:27 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Cluster teardown leaks default firewall rules | priority/P2 team/test-infra | I enabled a suite with `${FAIL_ON_GCP_RESOURCE_LEAK:="true"}` and it started complaining, thus:
```
18:59:50 --- /var/lib/jenkins/jobs/kubernetes-e2e-gce-ingress/workspace/_artifacts/gcp-resources-before.txt 2016-02-02 18:40:51.265016339 -0800
18:59:50 +++ /var/lib/jenkins/jobs/kubernetes-e2e-gce-ingress/workspace/_artifacts/gcp-resources-after.txt 2016-02-02 18:59:50.725871086 -0800
18:59:50 @@ -30,0 +31 @@ [ routes ]
18:59:50 +default-route-3dba9bdf79298688 e2e-ingress 10.240.0.0/16 1000
18:59:50 @@ -31,0 +33 @@ [ routes ]
18:59:50 +default-route-663d4d06ada780a6 e2e-ingress 0.0.0.0/0 default-internet-gateway 1000
18:59:50 @@ -40,0 +43,2 @@ [ firewall-rules ]
18:59:50 +e2e-ingress-default-internal e2e-ingress 10.0.0.0/8 tcp:1-65535,udp:1-65535,icmp
18:59:50 +e2e-ingress-default-ssh e2e-ingress 0.0.0.0/0 tcp:22
18:59:50 ++ [[ true == \t\r\u\e ]]
18:59:50 ++ echo '!!! FAIL: Google Cloud Platform resources leaked while running tests!'
18:59:50
!!! FAIL: Google Cloud Platform resources leaked while running tests!
```
http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-ingress/1/consoleFull
It created those rules though, so why didn't teardown delete them? or am I just misusing a feature?
@kubernetes/goog-testing
| 1.0 | Cluster teardown leaks default firewall rules - I enabled a suite with `${FAIL_ON_GCP_RESOURCE_LEAK:="true"}` and it started complaining, thus:
```
18:59:50 --- /var/lib/jenkins/jobs/kubernetes-e2e-gce-ingress/workspace/_artifacts/gcp-resources-before.txt 2016-02-02 18:40:51.265016339 -0800
18:59:50 +++ /var/lib/jenkins/jobs/kubernetes-e2e-gce-ingress/workspace/_artifacts/gcp-resources-after.txt 2016-02-02 18:59:50.725871086 -0800
18:59:50 @@ -30,0 +31 @@ [ routes ]
18:59:50 +default-route-3dba9bdf79298688 e2e-ingress 10.240.0.0/16 1000
18:59:50 @@ -31,0 +33 @@ [ routes ]
18:59:50 +default-route-663d4d06ada780a6 e2e-ingress 0.0.0.0/0 default-internet-gateway 1000
18:59:50 @@ -40,0 +43,2 @@ [ firewall-rules ]
18:59:50 +e2e-ingress-default-internal e2e-ingress 10.0.0.0/8 tcp:1-65535,udp:1-65535,icmp
18:59:50 +e2e-ingress-default-ssh e2e-ingress 0.0.0.0/0 tcp:22
18:59:50 ++ [[ true == \t\r\u\e ]]
18:59:50 ++ echo '!!! FAIL: Google Cloud Platform resources leaked while running tests!'
18:59:50
!!! FAIL: Google Cloud Platform resources leaked while running tests!
```
http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-ingress/1/consoleFull
It created those rules though, so why didn't teardown delete them? or am I just misusing a feature?
@kubernetes/goog-testing
| non_process | cluster teardown leaks default firewall rules i enabled a suite with fail on gcp resource leak true and it started complaining thus var lib jenkins jobs kubernetes gce ingress workspace artifacts gcp resources before txt var lib jenkins jobs kubernetes gce ingress workspace artifacts gcp resources after txt default route ingress default route ingress default internet gateway ingress default internal ingress tcp udp icmp ingress default ssh ingress tcp echo fail google cloud platform resources leaked while running tests fail google cloud platform resources leaked while running tests it created those rules though so why didn t teardown delete them or am i just misusing a feature kubernetes goog testing | 0 |
5,334 | 8,150,360,551 | IssuesEvent | 2018-08-22 12:45:51 | threefoldtech/jumpscale_lib | https://api.github.com/repos/threefoldtech/jumpscale_lib | closed | Add atomic swap support to j.clients.blockchain.Electrum | process_duplicate type_feature | Currently there is a Go implementation at https://github.com/rivine/atomicswap
We need to see how best to provide this functionality in jumpscale
| 1.0 | Add atomic swap support to j.clients.blockchain.Electrum - Currently there is a Go implementation at https://github.com/rivine/atomicswap
We need to see how best to provide this functionality in jumpscale
| process | add atomic swap support to j clients blockchain electrum currently there is a go implementation at we need to see how bewst to provide this functionality in jumpscale | 1 |
182,384 | 30,838,663,506 | IssuesEvent | 2023-08-02 09:09:53 | wyshlist/wyshlist | https://api.github.com/repos/wyshlist/wyshlist | closed | [UX/UI] Redesign new user & User sign_in pages & Password change | enhancement High priority Design | - [x] Figma design high fidelity
- [ ] Front end code | 1.0 | [UX/UI] Redesign new user & User sign_in pages & Password change - - [x] Figma design high fidelity
- [ ] Front end code | non_process | redesign new user user sign in pages password change figma design high fidelity front end code | 0 |
19,362 | 25,491,627,227 | IssuesEvent | 2022-11-27 05:54:50 | python/cpython | https://api.github.com/repos/python/cpython | closed | asyncio: support multiprocessing (support fork) | type-feature expert-asyncio 3.12 expert-multiprocessing | BPO | [22087](https://bugs.python.org/issue22087)
--- | :---
Nosy | @gvanrossum, @pitrou, @1st1, @thehesiod, @miss-islington
PRs | <li>python/cpython#7208</li><li>python/cpython#7215</li><li>python/cpython#7218</li><li>python/cpython#7226</li><li>python/cpython#7232</li><li>python/cpython#7233</li>
Files | <li>[test_loop.py](https://bugs.python.org/file36117/test_loop.py "Uploaded as text/plain at 2014-07-26.18:01:38 by dan.oreilly"): Test script demonstrating the issue</li><li>[handle_mp_unix.diff](https://bugs.python.org/file36118/handle_mp_unix.diff "Uploaded as text/plain at 2014-07-26.18:20:15 by dan.oreilly"): Patch that makes _UnixDefaultEventLoopPolicy create a new loop object if get_event_loop is called in a forked mp child process</li><li>[handle-mp_unix2.patch](https://bugs.python.org/file36119/handle-mp_unix2.patch "Uploaded as text/plain at 2014-07-26.20:13:57 by dan.oreilly"): Use os.getpid() instead of multiprocessing. Store pid state in Policy instance rather than the Loop instance.</li><li>[handle_mp_unix_with_test.diff](https://bugs.python.org/file36134/handle_mp_unix_with_test.diff "Uploaded as text/plain at 2014-07-27.16:09:52 by dan.oreilly"): Adds a unit test to previous patch</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2014-07-26.18:01:10.150>
labels = ['type-bug', 'expert-asyncio']
title = 'asyncio: support multiprocessing (support fork)'
updated_at = <Date 2018-05-30.00:56:36.541>
user = 'https://bugs.python.org/danoreilly'
```
bugs.python.org fields:
```python
activity = <Date 2018-05-30.00:56:36.541>
actor = 'yselivanov'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['asyncio']
creation = <Date 2014-07-26.18:01:10.150>
creator = 'dan.oreilly'
dependencies = []
files = ['36117', '36118', '36119', '36134']
hgrepos = []
issue_num = 22087
keywords = ['patch']
message_count = 23.0
messages = ['224082', '224084', '224085', '224097', '224125', '224140', '224143', '224144', '224145', '226698', '235404', '235411', '288327', '297222', '297226', '297227', '297229', '318077', '318092', '318135', '318140', '318143', '318144']
nosy_count = 7.0
nosy_names = ['gvanrossum', 'pitrou', 'zmedico', 'yselivanov', 'thehesiod', 'dan.oreilly', 'miss-islington']
pr_nums = ['7208', '7215', '7218', '7226', '7232', '7233']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue22087'
versions = ['Python 3.4', 'Python 3.5', 'Python 3.6']
```
</p></details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-99539
* gh-99745
* gh-99756
* gh-99769
<!-- /gh-linked-prs -->
| 1.0 | asyncio: support multiprocessing (support fork) - BPO | [22087](https://bugs.python.org/issue22087)
--- | :---
Nosy | @gvanrossum, @pitrou, @1st1, @thehesiod, @miss-islington
PRs | <li>python/cpython#7208</li><li>python/cpython#7215</li><li>python/cpython#7218</li><li>python/cpython#7226</li><li>python/cpython#7232</li><li>python/cpython#7233</li>
Files | <li>[test_loop.py](https://bugs.python.org/file36117/test_loop.py "Uploaded as text/plain at 2014-07-26.18:01:38 by dan.oreilly"): Test script demonstrating the issue</li><li>[handle_mp_unix.diff](https://bugs.python.org/file36118/handle_mp_unix.diff "Uploaded as text/plain at 2014-07-26.18:20:15 by dan.oreilly"): Patch that makes _UnixDefaultEventLoopPolicy create a new loop object if get_event_loop is called in a forked mp child process</li><li>[handle-mp_unix2.patch](https://bugs.python.org/file36119/handle-mp_unix2.patch "Uploaded as text/plain at 2014-07-26.20:13:57 by dan.oreilly"): Use os.getpid() instead of multiprocessing. Store pid state in Policy instance rather than the Loop instance.</li><li>[handle_mp_unix_with_test.diff](https://bugs.python.org/file36134/handle_mp_unix_with_test.diff "Uploaded as text/plain at 2014-07-27.16:09:52 by dan.oreilly"): Adds a unit test to previous patch</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2014-07-26.18:01:10.150>
labels = ['type-bug', 'expert-asyncio']
title = 'asyncio: support multiprocessing (support fork)'
updated_at = <Date 2018-05-30.00:56:36.541>
user = 'https://bugs.python.org/danoreilly'
```
bugs.python.org fields:
```python
activity = <Date 2018-05-30.00:56:36.541>
actor = 'yselivanov'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['asyncio']
creation = <Date 2014-07-26.18:01:10.150>
creator = 'dan.oreilly'
dependencies = []
files = ['36117', '36118', '36119', '36134']
hgrepos = []
issue_num = 22087
keywords = ['patch']
message_count = 23.0
messages = ['224082', '224084', '224085', '224097', '224125', '224140', '224143', '224144', '224145', '226698', '235404', '235411', '288327', '297222', '297226', '297227', '297229', '318077', '318092', '318135', '318140', '318143', '318144']
nosy_count = 7.0
nosy_names = ['gvanrossum', 'pitrou', 'zmedico', 'yselivanov', 'thehesiod', 'dan.oreilly', 'miss-islington']
pr_nums = ['7208', '7215', '7218', '7226', '7232', '7233']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue22087'
versions = ['Python 3.4', 'Python 3.5', 'Python 3.6']
```
</p></details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-99539
* gh-99745
* gh-99756
* gh-99769
<!-- /gh-linked-prs -->
| process | asyncio support multiprocessing support fork bpo nosy gvanrossum pitrou thehesiod miss islington prs python cpython python cpython python cpython python cpython python cpython python cpython files uploaded as text plain at by dan oreilly test script demonstrating the issue uploaded as text plain at by dan oreilly patch that makes unixdefaulteventlooppolicy create a new loop object if get event loop is called in a forked mp child process uploaded as text plain at by dan oreilly use os getpid instead of multiprocessing store pid state in policy instance rather than the loop instance uploaded as text plain at by dan oreilly adds a unit test to previous patch note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title asyncio support multiprocessing support fork updated at user bugs python org fields python activity actor yselivanov assignee none closed false closed date none closer none components creation creator dan oreilly dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage patch review status open superseder none type behavior url versions linked prs gh gh gh gh | 1 |
441 | 2,873,611,261 | IssuesEvent | 2015-06-08 17:59:08 | besasm/EMGAATS | https://api.github.com/repos/besasm/EMGAATS | opened | create directors for STRT area types | process question | Need to layout process first.
1 Initially set up initial directors with to node IDs based on ?
Option a: intersect with model Surface Subcatchments,
Option b: link to inlet then to node from delineated catchment
Option c: ???
2 Figure out how to incorporate existing inflow controls.
discussion item to include Kristi | 1.0 | create directors for STRT area types - Need to layout process first.
1 Initially set up initial directors with to node IDs based on ?
Option a: intersect with model Surface Subcatchments,
Option b: link to inlet then to node from delineated catchment
Option c: ???
2 Figure out how to incorporate existing inflow controls.
discussion item to include Kristi | process | create directors for strt area types need to layout process first initially set up initial directors with to node ids based on option a intersect with model surface subcatchments option b link to inlet then to node from delineated catchement option c figure out how to incorporate existing inflow controls discussion item to include kristi | 1 |
43,962 | 11,352,252,488 | IssuesEvent | 2020-01-24 13:14:22 | EightShapes/esds-build | https://api.github.com/repos/EightShapes/esds-build | closed | Integrate build messages with platform specific notifications | [Build] | ## Acceptance Criteria
* As a build tool user I want to see a "build failed/succeeded" message as a Mac OS or Windows notification so I don't have to check my terminal for any build issues. | 1.0 | Integrate build messages with platform specific notifications - ## Acceptance Criteria
* As a build tool user I want to see a "build failed/succeeded" message as a Mac OS or Windows notification so I don't have to check my terminal for any build issues. | non_process | integrate build messages with platform specific notifications acceptance criteria as a build tool user i want to see a build failed succeeded message as a mac os or windows notification so i don t have to check my terminal for any build issues | 0 |
469,129 | 13,501,867,236 | IssuesEvent | 2020-09-13 05:04:05 | olive-editor/olive | https://api.github.com/repos/olive-editor/olive | closed | [Feature Request] Basic Mixing Controls Pan / Level | Legacy (Unsupported) Low Priority | audio mixer window would be great for live sound recording. You could all so use it to automating your valium or pan

https://www.youtube.com/watch?v=xNw0Iw03F2M&t=728s

you could add jack audio to send audio to and from a daw software
http://jackaudio.org/
https://github.com/jackaudio | 1.0 | [Feature Request] Basic Mixing Controls Pan / Level - audio mixer window would be great for live sound recording. You could all so use it to automating your valium or pan

https://www.youtube.com/watch?v=xNw0Iw03F2M&t=728s

you could add jack audio to send audio to and from a daw software
http://jackaudio.org/
https://github.com/jackaudio | non_process | basic mixing controls pan level audio mixer window would be great for live sound recording you could all so use it to automating your valium or pan you could add jack audio to send audio to and from a daw software | 0 |
544,149 | 15,890,182,020 | IssuesEvent | 2021-04-10 14:30:59 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Windows collection not included in default EE | component:ee priority:high state:needs_devel type:bug | <!---
The Ansible community is highly committed to the security of our open source
projects. Security concerns should be reported directly by email to
security@ansible.com. For more information on the Ansible community's
practices regarding responsible disclosure, see
https://www.ansible.com/security
-->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- Installer
##### SUMMARY
(Apologies if "Bug Report" is not the best description of this issue)
Hello, I'm probably one of very few who will have this issue, but it seems the default awx-ee image installed with 18.0.0 does not contain the ansible.windows collection. I've tried (and failed) to build a custom EE using ansible-builder and listing the collection in requirements.yml, but I noticed in issue #7058 that the goal of the default EE was to include collections available in previous releases (currently running 14.0.0 in production without this issue). If your recommendation is to build the EE then I will figure it out, but I would like to see this included by default if possible. I understand if automating Windows with Ansible is not a very common use case, and if this is low priority.
##### ENVIRONMENT
* AWX version: 18.0.0
* AWX install method: awx-operator (using microk8s)
* Operating System: Ubuntu 18.04
##### STEPS TO REPRODUCE
Running any playbook that includes a module in the ansible.windows collection (e.g. win_ping)
##### EXPECTED RESULTS
Module to work as it has in previous releases.
##### ACTUAL RESULTS
`{
"msg": "The module win_ping was redirected to ansible.windows.win_ping, which could not be loaded.",
"_ansible_no_log": false
}`
##### ADDITIONAL INFORMATION
| 1.0 | Windows collection not included in default EE - <!---
The Ansible community is highly committed to the security of our open source
projects. Security concerns should be reported directly by email to
security@ansible.com. For more information on the Ansible community's
practices regarding responsible disclosure, see
https://www.ansible.com/security
-->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- Installer
##### SUMMARY
(Apologies if "Bug Report" is not the best description of this issue)
Hello, I'm probably one of very few who will have this issue, but it seems the default awx-ee image installed with 18.0.0 does not contain the ansible.windows collection. I've tried (and failed) to build a custom EE using ansible-builder and listing the collection in requirements.yml, but I noticed in issue #7058 that the goal of the default EE was to include collections available in previous releases (currently running 14.0.0 in production without this issue). If your recommendation is to build the EE then I will figure it out, but I would like to see this included by default if possible. I understand if automating Windows with Ansible is not a very common use case, and if this is low priority.
##### ENVIRONMENT
* AWX version: 18.0.0
* AWX install method: awx-operator (using microk8s)
* Operating System: Ubuntu 18.04
##### STEPS TO REPRODUCE
Running any playbook that includes a module in the ansible.windows collection (e.g. win_ping)
##### EXPECTED RESULTS
Module to work as it has in previous releases.
##### ACTUAL RESULTS
`{
"msg": "The module win_ping was redirected to ansible.windows.win_ping, which could not be loaded.",
"_ansible_no_log": false
}`
##### ADDITIONAL INFORMATION
| non_process | windows collection not included in default ee the ansible community is highly committed to the security of our open source projects security concerns should be reported directly by email to security ansible com for more information on the ansible community s practices regarding responsible disclosure see issue type bug report component name installer summary apologies if bug report is not the best description of this issue hello i m probably one of very few who will have this issue but it seems the default awx ee image installed with does not contain the ansible windows collection i ve tried and failed to build a custom ee using ansible builder and listing the collection in requirements yml but i noticed in issue that the goal of the default ee was to include collections available in previous releases currently running in production without this issue if your recommendation is to build the ee then i will figure it out but i would like to see this included by default if possible i understand if automating windows with ansible is not a very common use case and if this is low priority ๐ environment awx version awx install method awx operator using operating system ubuntu steps to reproduce running any playbook that includes a module in the ansible windows collection e g win ping expected results module to work as it has in previous releases actual results msg the module win ping was redirected to ansible windows win ping which could not be loaded ansible no log false additional information | 0 |
31,877 | 26,221,879,313 | IssuesEvent | 2023-01-04 15:26:21 | grafana/agent | https://api.github.com/repos/grafana/agent | closed | Flow: Add ebpf exporter | type/infrastructure | Create `prometheus.integration.ebpf` component, this is a complex exporter and may require additional considerations. | 1.0 | Flow: Add ebpf exporter - Create `prometheus.integration.ebpf` component, this is a complex exporter and may require additional considerations. | non_process | flow add ebpf exporter create prometheus integration ebpf component this is a complex exporter and may require additional considerations | 0 |
22,453 | 31,224,333,295 | IssuesEvent | 2023-08-19 00:07:18 | googleapis/google-cloud-node | https://api.github.com/repos/googleapis/google-cloud-node | closed | Warning: a recent release failed | type: process | The following release PRs may have failed:
* #4497 - The release job failed -- check the build log.
* #4467 - The release job failed -- check the build log. | 1.0 | Warning: a recent release failed - The following release PRs may have failed:
* #4497 - The release job failed -- check the build log.
* #4467 - The release job failed -- check the build log. | process | warning a recent release failed the following release prs may have failed the release job failed check the build log the release job failed check the build log | 1 |
23,208 | 4,894,077,516 | IssuesEvent | 2016-11-19 03:33:24 | zsh-users/antigen | https://api.github.com/repos/zsh-users/antigen | closed | RELEASE.md: Document release process | Documentation | New versions
1- How to build and test a new version
2- Update CHANGELOG.md
3- Build a new VERSION file
4- Post release documentation/draft. Show off new features/changes.
Release Candidates
1- Testing process
2- Deadline to test
| 1.0 | RELEASE.md: Document release process - New versions
1- How to build and test a new version
2- Update CHANGELOG.md
3- Build a new VERSION file
4- Post release documentation/draft. Show off new features/changes.
Release Candidates
1- Testing process
2- Deadline to test
| non_process | release md document release process new versions how to build and test a new version update changelog md build a new version file post release documentation draft show off new features changes release candidates testing process deadline to test | 0 |
185,685 | 21,843,716,819 | IssuesEvent | 2022-05-18 01:03:03 | snowflakedb/snowflake-jdbc | https://api.github.com/repos/snowflakedb/snowflake-jdbc | closed | SNOW-591439: CVE-2022-30126 (Medium) detected in tika-core-1.25.jar - autoclosed | security vulnerability | ## CVE-2022-30126 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tika-core-1.25.jar</b></p></summary>
<p>This is the core Apache Tika™ toolkit library from which all other modules inherit functionality. It also
includes the core facades for the Tika API.</p>
<p>Library home page: <a href="http://tika.apache.org/">http://tika.apache.org/</a></p>
<p>Path to dependency file: /public_pom.xml</p>
<p>Path to vulnerable library: /sitory/org/apache/tika/tika-core/1.25/tika-core-1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **tika-core-1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowflakedb/snowflake-jdbc/commit/8f8ac5cc8e5f49e8df9a7899e9b4e13430114973">8f8ac5cc8e5f49e8df9a7899e9b4e13430114973</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Tika, a regular expression in our StandardsText class, used by the StandardsExtractingContentHandler could lead to a denial of service caused by backtracking on a specially crafted file. This only affects users who are running the StandardsExtractingContentHandler, which is a non-standard handler. This is fixed in 1.28.2 and 2.4.0
<p>Publish Date: 2022-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30126>CVE-2022-30126</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30126">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30126</a></p>
<p>Release Date: 2022-05-16</p>
<p>Fix Resolution: org.apache.tika:tika-core:1.28.2,2.4.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tika","packageName":"tika-core","packageVersion":"1.25","packageFilePaths":["/public_pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.tika:tika-core:1.25","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tika:tika-core:1.28.2,2.4.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-30126","vulnerabilityDetails":"In Apache Tika, a regular expression in our StandardsText class, used by the StandardsExtractingContentHandler could lead to a denial of service caused by backtracking on a specially crafted file. This only affects users who are running the StandardsExtractingContentHandler, which is a non-standard handler. This is fixed in 1.28.2 and 2.4.0","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30126","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | True | SNOW-591439: CVE-2022-30126 (Medium) detected in tika-core-1.25.jar - autoclosed - ## CVE-2022-30126 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tika-core-1.25.jar</b></p></summary>
<p>This is the core Apache Tika™ toolkit library from which all other modules inherit functionality. It also
includes the core facades for the Tika API.</p>
<p>Library home page: <a href="http://tika.apache.org/">http://tika.apache.org/</a></p>
<p>Path to dependency file: /public_pom.xml</p>
<p>Path to vulnerable library: /sitory/org/apache/tika/tika-core/1.25/tika-core-1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **tika-core-1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowflakedb/snowflake-jdbc/commit/8f8ac5cc8e5f49e8df9a7899e9b4e13430114973">8f8ac5cc8e5f49e8df9a7899e9b4e13430114973</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Tika, a regular expression in our StandardsText class, used by the StandardsExtractingContentHandler could lead to a denial of service caused by backtracking on a specially crafted file. This only affects users who are running the StandardsExtractingContentHandler, which is a non-standard handler. This is fixed in 1.28.2 and 2.4.0
<p>Publish Date: 2022-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30126>CVE-2022-30126</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30126">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30126</a></p>
<p>Release Date: 2022-05-16</p>
<p>Fix Resolution: org.apache.tika:tika-core:1.28.2,2.4.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tika","packageName":"tika-core","packageVersion":"1.25","packageFilePaths":["/public_pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.tika:tika-core:1.25","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tika:tika-core:1.28.2,2.4.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-30126","vulnerabilityDetails":"In Apache Tika, a regular expression in our StandardsText class, used by the StandardsExtractingContentHandler could lead to a denial of service caused by backtracking on a specially crafted file. This only affects users who are running the StandardsExtractingContentHandler, which is a non-standard handler. This is fixed in 1.28.2 and 2.4.0","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30126","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | snow cve medium detected in tika core jar autoclosed cve medium severity vulnerability vulnerable library tika core jar this is the core apache tikaโข toolkit library from which all other modules inherit functionality it also includes the core facades for the tika api library home page a href path to dependency file public pom xml path to vulnerable library sitory org apache tika tika core tika core jar dependency hierarchy x tika core jar vulnerable library found in head commit a href found in base branch master vulnerability details in apache tika a regular expression in our standardstext class used by the standardsextractingcontenthandler could lead to a denial of service caused by backtracking on a specially crafted file this only affects users who are running the standardsextractingcontenthandler which is a non standard handler this is fixed in and publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tika tika core rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache tika tika core isminimumfixversionavailable true minimumfixversion org apache tika tika core isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in apache tika a regular expression in our standardstext class used by the standardsextractingcontenthandler could lead to a denial of service caused by backtracking on a specially crafted file this only affects users who are running the standardsextractingcontenthandler which is a non standard handler this is fixed in and vulnerabilityurl | 0 |
22,264 | 6,230,119,645 | IssuesEvent | 2017-07-11 07:01:33 | XceedBoucherS/TestImport5 | https://api.github.com/repos/XceedBoucherS/TestImport5 | closed | Review the PropertyGrid Documentation | CodePlex | <b>emartin[CodePlex]</b> <br />The documentation page of the PropertyGrid need to be updated.
The custom Editor example still refer to the Obsolete quotEditorDefinitionquot as an example.
| 1.0 | Review the PropertyGrid Documentation - <b>emartin[CodePlex]</b> <br />The documentation page of the PropertyGrid need to be updated.
The custom Editor example still refer to the Obsolete quotEditorDefinitionquot as an example.
| non_process | review the propertygrid documentation emartin the documentation page of the propertygrid need to be updated the custom editor example still refer to the obsolete quoteditordefinitionquot as an example | 0 |
209,020 | 7,164,269,346 | IssuesEvent | 2018-01-29 10:37:33 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | opened | Improve soft-keyboard handling on mobile | Priority High Writing Flow [Component] Mobile | There are issues with typing on mobile, both on Android and iOS. GIFs:
**iOS**

Problems:
1. Screen jumps on every linebreak as if resetting
2. You lose your place (this is an issue on every other textfield that auto-resizes, including the classic editor)
**Android**

Problems:
- Soft-keyboard sometimes hides itself and quickly opens again, causing a jarring effect
## Causes
It is all but impossible to control when the soft-keyboard shows and hides on mobile devices. It seems to be controlled largely around the Javascript `focus` and `blur` events, whether the keyboard shows or hides.
In the case of Gutenberg, those two events fire when a new paragraph is created. We do this fast enough that sometimes the keyboard doesn't hide on Android, and it never hides on iOS. But we don't do it fast enough to prevent all side-effects:
- on iOS, the blur event for some reason causes the page to scroll to the top.
- on Android, the blur event sometimes causes the keyboard to hide and then quickly show again
The screen jump could potentially be mitigated by #353, but it seems like there could be a larger effort that could solve both. https://notion.so has a block editor that's similar to Gutenberg in many ways, and works paragraphs as individual contenteditable blocks. The problem described in this ticket is not an issue with the Notion editor, neither in Android or iOS:

CC: @mtias @mcsf @aduth @youknowriad would appreciate your thoughts on this. | 1.0 | Improve soft-keyboard handling on mobile - There are issues with typing on mobile, both on Android and iOS. GIFs:
**iOS**

Problems:
1. Screen jumps on every linebreak as if resetting
2. You lose your place (this is an issue on every other textfield that auto-resizes, including the classic editor)
**Android**

Problems:
- Soft-keyboard sometimes hides itself and quickly opens again, causing a jarring effect
## Causes
It is all but impossible to control when the soft-keyboard shows and hides on mobile devices. It seems to be controlled largely around the Javascript `focus` and `blur` events, whether the keyboard shows or hides.
In the case of Gutenberg, those two events fire when a new paragraph is created. We do this fast enough that sometimes the keyboard doesn't hide on Android, and it never hides on iOS. But we don't do it fast enough to prevent all side-effects:
- on iOS, the blur event for some reason causes the page to scroll to the top.
- on Android, the blur event sometimes causes the keyboard to hide and then quickly show again
The screen jump could potentially be mitigated by #353, but it seems like there could be a larger effort that could solve both. https://notion.so has a block editor that's similar to Gutenberg in many ways, and works paragraphs as individual contenteditable blocks. The problem described in this ticket is not an issue with the Notion editor, neither in Android or iOS:

CC: @mtias @mcsf @aduth @youknowriad would appreciate your thoughts on this. | non_process | improve soft keyboard handling on mobile there are issues with typing on mobile both on android and ios gifs ios problems screen jumps on every linebreak as if resetting you lose your place this is an issue on every other textfield that auto resizes including the classic editor android problems soft keyboard sometimes hides itself and quickly opens again causing a jarring effect causes it is all but impossible to control when the soft keyboard shows and hides on mobile devices it seems to be controlled largely around the javascript focus and blur events whether the keyboard shows or hides in the case of gutenberg those two events fire when a new paragraph is created we do this fast enough that sometimes the keyboard doesn t hide on android and it never hides on ios but we don t do it fast enough to prevent all side effects on ios the blur event for some reason causes the page to scroll to the top on android the blur event sometimes causes the keyboard to hide and then quickly show again the screen jump could potentially be mitigated by but it seems like there could be a larger effort that could solve both has a block editor that s similar to gutenberg in many ways and works paragraphs as individual contenteditable blocks the problem described in this ticket is not an issue with the notion editor neither in android or ios cc mtias mcsf aduth youknowriad would appreciate your thoughts on this | 0 |
681,851 | 23,325,098,420 | IssuesEvent | 2022-08-08 20:19:42 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | reopened | Windows - Linux interop issues for services of type LoadBalancer with local traffic policy | kind/bug priority/important-soon sig/windows lifecycle/rotten triage/accepted | ### What happened?
Windows client pods in the cluster see intermittent connection failures when attempting to access a Linux service of type load balancer using its external VIP under the following conditions:
* Linux service has externalTrafficPolicy: Local set.
* The backend pods are distributed across >1 Linux nodes.
### What did you expect to happen?
In kube-proxy Linux, it appears there are special iptables rules to redirect pod -> external VIP traffic to service clusterIP instead:
https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/proxy/iptables/proxier.go#L1455
Something similar presumably also needs to be added to Windows.
### How can we reproduce it (as minimally and precisely as possible)?
1. Deploy azure CNI + calico cluster with 3 linux nodes in system nodepool, and a win nodepool
2. Deploy service with basic nginx ingress controller or azure load balancer and set to External Traffic Policy Local. These will run in the linux system nodepool.
3. Create a pod on the win nodepool
4. Curl the LB IP or the external IP
5. As soon as the replicas are distributed across multiple nodes, we will get intermittent failures.
### Anything else we need to know?
The scenario can be made to work successfully when one or more of the following applies:
* Connecting using the cluster service IP instead of external LB frontend VIP (Windows pod -> Linux cluster svc IP)
* The destination service is backed by Windows pods (Windows pod -> Windows VIP).
* If using a Linux client pod instead of Windows pod (Linux pod -> Linux VIP).
* Destination pods backing the service are all located on a single Node.
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
Azure
### OS version
Windows Server 2019
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
Azure CNI | 1.0 | Windows - Linux interop issues for services of type LoadBalancer with local traffic policy - ### What happened?
Windows client pods in the cluster see intermittent connection failures when attempting to access a Linux service of type load balancer using its external VIP under the following conditions:
* Linux service has externalTrafficPolicy: Local set.
* The backend pods are distributed across >1 Linux nodes.
### What did you expect to happen?
In kube-proxy Linux, it appears there are special iptables rules to redirect pod -> external VIP traffic to service clusterIP instead:
https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/proxy/iptables/proxier.go#L1455
Something similar presumably also needs to be added to Windows.
### How can we reproduce it (as minimally and precisely as possible)?
1. Deploy azure CNI + calico cluster with 3 linux nodes in system nodepool, and a win nodepool
2. Deploy service with basic nginx ingress controller or azure load balancer and set to External Traffic Policy Local. These will run in the linux system nodepool.
3. Create a pod on the win nodepool
4. Curl the LB IP or the external IP
5. As soon as the replicas are distributed across multiple nodes, we will get intermittent failures.
### Anything else we need to know?
The scenario can be made to work successfully when one or more of the following applies:
* Connecting using the cluster service IP instead of external LB frontend VIP (Windows pod -> Linux cluster svc IP)
* The destination service is backed by Windows pods (Windows pod -> Windows VIP).
* If using a Linux client pod instead of Windows pod (Linux pod -> Linux VIP).
* Destination pods backing the service are all located on a single Node.
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
Azure
### OS version
Windows Server 2019
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
Azure CNI | non_process | windows linux interop issues for services of type loadbalancer with local traffic policy what happened windows client pods in the cluster see intermittent connection failures when attempting to access a linux service of type load balancer using its external vip under the following conditions linux service has externaltrafficpolicy local set the backend pods are distributed across linux nodes what did you expect to happen in kube proxy linux it appears there are special iptables rules to redirect pod external vip traffic to service clusterip instead something similar presumably also needs to be added to windows how can we reproduce it as minimally and precisely as possible deploy azure cni โฏ calico cluster with linux nodes in system nodepool and a win nodepool deploy service with basic nginx ingress controller or azure load balancer and set to external traffic policy local these will run in the linux system nodepool create a pod on the win nodepool curl the lb ip or the external ip as soon as the replicas are distributed across multiple nodes we will get intermittent failures anything else we need to know the scenario can be made to work successfully when one or more of the following applies connecting using the cluster service ip instead of external lb frontend vip windows pod linux cluster svc ip the destination service is backed by windows pods windows pod windows vip if using a linux client pod instead of windows pod linux pod linux vip destination pods backing the service are all located on a single node kubernetes version console kubectl version paste output here cloud provider azure os version windows server install tools container runtime cri and and version if applicable related plugins cni csi and versions if applicable azure cni | 0 |
12,006 | 14,738,174,408 | IssuesEvent | 2021-01-07 03:58:58 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Fair Oaks- Gold River Pediatric Dentistry WGNA288- Late Charges | anc-ops anc-process anp-1 ant-bug ant-parent/primary ant-support has attachment | In GitLab by @kdjstudios on May 11, 2018, 12:49
**Submitted by:** "Martin Villegas" <martin.villegas@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-11-48832/conversation
**Server:** Internal
**Client/Site:** Fairoaks
**Account:** WGNA288
**Issue:**
This client had a credit on his account and was still charged a late fee.
 | 1.0 | Fair Oaks- Gold River Pediatric Dentistry WGNA288- Late Charges - In GitLab by @kdjstudios on May 11, 2018, 12:49
**Submitted by:** "Martin Villegas" <martin.villegas@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-11-48832/conversation
**Server:** Internal
**Client/Site:** Fairoaks
**Account:** WGNA288
**Issue:**
This client had a credit on his account and was still charged a late fee.
 | process | fair oaks gold river pediatric dentistry late charges in gitlab by kdjstudios on may submitted by martin villegas helpdesk server internal client site fairoaks account issue this client had a credit on his account and was still charged a late fee uploads image png | 1 |
8,801 | 8,408,180,370 | IssuesEvent | 2018-10-12 00:03:29 | kubeapps/kubeapps | https://api.github.com/repos/kubeapps/kubeapps | closed | container/component names and structure inconsistent | component/service-catalog one_dot_o priority/important-longterm size/S | 
We need to rename some of these classes to match the Container suffix for containers. The components are also all in index.tsx files instead of files named with the Component class name. The ConfirmDialog component is also included in this.
ProvisionButton should not be in the root Component dir, but rather moved under the ClassView dir. Likewise for SyncButton and ServiceBrokerList. | 1.0 | container/component names and structure inconsistent - 
We need to rename some of these classes to match the Container suffix for containers. The components are also all in index.tsx files instead of files named with the Component class name. The ConfirmDialog component is also included in this.
ProvisionButton should not be in the root Component dir, but rather moved under the ClassView dir. Likewise for SyncButton and ServiceBrokerList. | non_process | container component names and structure inconsistent we need to rename some of these classes to match the container suffix for containers the components are also all in index tsx files instead of files named with the component class name the confirmdialog component is also included in this provisionbutton should not be in the root component dir but rather moved under the classview dir likewise for syncbutton and servicebrokerlist | 0 |
2,930 | 5,917,210,649 | IssuesEvent | 2017-05-22 12:42:50 | intelsdi-x/snap | https://api.github.com/repos/intelsdi-x/snap | closed | Plugin wanted: change detector processor | plugin-wishlist/processor | I'd like to have a processor plugin to detect changes between current and previous values of metrics. | 1.0 | Plugin wanted: change detector processor - I'd like to have a processor plugin to detect changes between current and previous values of metrics. | process | plugin wanted change detector processor i d like to have a processor plugin to detect changes between current and previous values of metrics | 1 |
8,017 | 11,205,751,807 | IssuesEvent | 2020-01-05 16:21:52 | luckyos-code/ArgU | https://api.github.com/repos/luckyos-code/ArgU | closed | Debatten Analyse | doing preprocessing | Ziel: Herausfinden, welche Debatten zu wenig Argumente haben ( <= 2) und entfernen.
Es wäre wichtig, sich die Argumente von Debatten mit sehr wenig Argumenten anzuschauen um zu entscheiden, ob dieser Schritt sinnvoll ist! Vielleicht gibt es auch sehr gute Debatten, die nur 2 Argumente haben.
Jedes Objekt ```Argument``` hat ein Attribut ```debate_id``` das Aufschluss darüber geben kann, wie viele Argumente in einer Debatte vorkommen. | 1.0 | Debatten Analyse - Ziel: Herausfinden, welche Debatten zu wenig Argumente haben ( <= 2) und entfernen.
Es wäre wichtig, sich die Argumente von Debatten mit sehr wenig Argumenten anzuschauen um zu entscheiden, ob dieser Schritt sinnvoll ist! Vielleicht gibt es auch sehr gute Debatten, die nur 2 Argumente haben.
Jedes Objekt ```Argument``` hat ein Attribut ```debate_id``` das Aufschluss darรผber geben kann, wie viele Argumente in einer Debatte vorkommen. | process | debatten analyse ziel herausfinden welche debatten zu wenig argumente haben und entfernen es wรคre wichtig sich die argumente von debatten mit sehr wenig argumenten anzuschauen um zu entscheiden ob dieser schritt sinnvoll ist vielleicht gibt es auch sehr gute debatten die nur argumente haben jedes objekt argument hat ein attribut debate id das aufschluss darรผber geben kann wie viele argumente in einer debatte vorkommen | 1 |
2,741 | 5,631,509,719 | IssuesEvent | 2017-04-05 14:42:18 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Console app uses 100% cpu when is ran by Process.Start with redirecting input/output | area-System.Diagnostics.Process bug | dotnet core version: 1.0.2
OS: Windows 10 or Ubuntu 16.04 x64
How to replicate the issue:
1. Create a Console app A, which will accept any input and print the length of the input.
2. Publish app A to a folder.
3. Create a Console app B, which uses Process.Start to start app A with RedirectStandardInput and RedirectStandardOutput set to true, In startinfo, filename is dotnet, argument is the main dll name in the app A publish folder. Do not use WaitForExit in app B.
4. App B will exit after starting the app A, app A will still be running in the background and uses 100% of cpu.
5. With RedirectStandardInput and RedirectStandardOutput set to false, it will be all good.
any ideas?
| 1.0 | Console app uses 100% cpu when is ran by Process.Start with redirecting input/output - dotnet core version: 1.0.2
OS: Windows 10 or Ubuntu 16.04 x64
How to replicate the issue:
1. Create a Console app A, which will accept any input and print the length of the input.
2. Publish app A to a folder.
3. Create a Console app B, which uses Process.Start to start app A with RedirectStandardInput and RedirectStandardOutput set to true, In startinfo, filename is dotnet, argument is the main dll name in the app A publish folder. Do not use WaitForExit in app B.
4. App B will exit after starting the app A, app A will still be running in the background and uses 100% of cpu.
5. With RedirectStandardInput and RedirectStandardOutput set to false, it will be all good.
any ideas?
| process | console app uses cpu when is ran by process start with redirecting input output dotnet core version os windows or ubuntu how to replicate the issue create a console app a which will accept any input and print the length of the input publish app a to a folder create a console app b which uses process start to start app a with redirectstandardinput and redirectstandardoutput set to true in startinfo filename is dotnet argument is the main dll name in the app a publish folder do not use waitforexit in app b app b will exit after starting the app a app a will still be running in the background and uses of cpu with redirectstandardinput and redirectstandardoutput set to false it will be all good any ideas | 1 |
263,551 | 28,040,426,083 | IssuesEvent | 2023-03-28 18:04:13 | MatBenfield/news | https://api.github.com/repos/MatBenfield/news | closed | [SecurityWeek] GoAnywhere Zero-Day Attack Hits Major Orgs | SecurityWeek Stale |
Several major organizations are confirming impact from the latest zero-day exploits hitting Fortra's GoAnywhere software.
The post [GoAnywhere Zero-Day Attack Hits Major Orgs](https://www.securityweek.com/goanywhere-zero-day-attack-hits-major-orgs/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/goanywhere-zero-day-attack-hits-major-orgs/>
| True | [SecurityWeek] GoAnywhere Zero-Day Attack Hits Major Orgs -
Several major organizations are confirming impact from the latest zero-day exploits hitting Fortra's GoAnywhere software.
The post [GoAnywhere Zero-Day Attack Hits Major Orgs](https://www.securityweek.com/goanywhere-zero-day-attack-hits-major-orgs/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/goanywhere-zero-day-attack-hits-major-orgs/>
| non_process | goanywhere zero day attack hits major orgs several major organizations are confirming impact from the latest zero day exploits hitting fortra s goanywhere software the post appeared first on | 0 |
5,107 | 7,885,397,766 | IssuesEvent | 2018-06-27 12:20:53 | Open-EO/openeo-api | https://api.github.com/repos/Open-EO/openeo-api | opened | xxx_time: Names for aggregate functions are misleading | processes | The aggregation methods like min_time, max_time, last_time (sic!), first_time, mean_time seem to be confusing names for the methods. During the Hackathon it was proposed to change the naming scheme. Depends also on the outcomes of issue #77 as a `aggregate("time", "mean", ...)` or `aggregate_time("mean", ...)` could be introduced, which would solve the problem anyway. | 1.0 | xxx_time: Names for aggregate functions are misleading - The aggregation methods like min_time, max_time, last_time (sic!), first_time, mean_time seem to be confusing names for the methods. During the Hackathon it was proposed to change the naming scheme. Depends also on the outcomes of issue #77 as a `aggregate("time", "mean", ...)` or `aggregate_time("mean", ...)` could be introduced, which would solve the problem anyway. | process | xxx time names for aggregate functions are misleading the aggregation methods like min time max time last time sic first time mean time seem to be confusing names for the methods during the hackathon it was proposed to change the naming scheme depends also on the outcomes of issue as a aggregate time mean or aggregate time mean could be introduced which would solve the problem anyway | 1 |
612,336 | 19,010,094,278 | IssuesEvent | 2021-11-23 08:16:05 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] `check-broken-packages-pacman-hook-git` and `pacdiff-pacman-hook-git` | request:new-pkg priority:low | - Link to the package(s) in AUR: [check-broken-packages-pacman-hook-git](https://aur.archlinux.org/packages/check-broken-packages-pacman-hook-git/), [pacdiff-pacman-hook-git](https://aur.archlinux.org/packages/pacdiff-pacman-hook-git/)
- Utility this package has for you:
I use them to do automatic checks whenever I update my system.
- Do you consider this package(s) to be useful for **every** chaotic user?:
- [ ] YES
- [ ] No, but yes for a great amount.
- [x] No, but yes for a few.
- [ ] No, it's useful only for me.
- Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?:
- [ ] YES
- [x] NO
- Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?:
- [x] YES
- Have you tested if this package builds in a clean chroot?:
- [ ] YES
- [x] NO
- Does the package's license allow us to redistribute it?:
  - [x] YES (both GPL v3.0 licensed, GitHub repository [here](https://github.com/desbma/pacman-hooks))
- [ ] No clue.
- [ ] No, but the author doesn't really care, it's just for bureaucracy.
- Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?:
- [x] YES
- Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?:
- [x] YES | 1.0 | [Request] `check-broken-packages-pacman-hook-git` and `pacdiff-pacman-hook-git` - - Link to the package(s) in AUR: [check-broken-packages-pacman-hook-git](https://aur.archlinux.org/packages/check-broken-packages-pacman-hook-git/), [pacdiff-pacman-hook-git](https://aur.archlinux.org/packages/pacdiff-pacman-hook-git/)
- Utility this package has for you:
I use them to do automatic checks whenever I update my system.
- Do you consider this package(s) to be useful for **every** chaotic user?:
- [ ] YES
- [ ] No, but yes for a great amount.
- [x] No, but yes for a few.
- [ ] No, it's useful only for me.
- Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?:
- [ ] YES
- [x] NO
- Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?:
- [x] YES
- Have you tested if this package builds in a clean chroot?:
- [ ] YES
- [x] NO
- Does the package's license allow us to redistribute it?:
  - [x] YES (both GPL v3.0 licensed, GitHub repository [here](https://github.com/desbma/pacman-hooks))
- [ ] No clue.
- [ ] No, but the author doesn't really care, it's just for bureaucracy.
- Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?:
- [x] YES
- Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?:
- [x] YES | non_process | check broken packages pacman hook git and pacdiff pacman hook git link to the package s in aur utility this package has for you i use them to do automatic checks whenever i update my system do you consider this package s to be useful for every chaotic user yes no but yes for a great amount no but yes for a few no it s useful only for me do you consider this package s to be useful for feature testing preview e g mesa aco wine wayland yes no are you sure we don t have this package already test with pacman ss yes have you tested if this package builds in a clean chroot yes no does the package s license allows us to redistribute it yes both gpl licensed githuib repository no clue no but the author doesn t really care it s just for bureaucracy have you searched the to ensure this request is new not duplicated yes have you read the to ensure this package is not banned yes | 0 |
173,148 | 27,391,937,556 | IssuesEvent | 2023-02-28 16:49:36 | tijlleenders/ZinZen | https://api.github.com/repos/tijlleenders/ZinZen | closed | Design sharing UI for copy paste | UI feature design low prio | My neighbor's son and I are discussing replacing the garden fence together.
I've collected some instruction videos that I'd like to share with him. He doesn't have ZinZen.
Wanted: How do I copy the information from the items in this list to paste in whatever I'm using to share with him?

NB: this way of sharing is for people who don't want to use the built-in sharing via the ZinZen website. Sharing via a private/public link that redirects the clicker to the shared content via their own ZinZen interface is preferred from a ZinZen perspective - but in this case we just want the text + links. | 1.0 | Design sharing UI for copy paste - My neighbor's son and I are discussing replacing the garden fence together.
I've collected some instruction videos that I'd like to share with him. He doesn't have ZinZen.
Wanted: How do I copy the information from the items in this list to paste in whatever I'm using to share with him?

NB: this way of sharing is for people who don't want to use the built-in sharing via the ZinZen website. Sharing via a private/public link that redirects the clicker to the shared content via their own ZinZen interface is preferred from a ZinZen perspective - but in this case we just want the text + links. | non_process | design sharing ui for copy paste my neighbor s son and i are discussing replacing the garden fence together i ve collected some instruction video s that i d like to share with him he doesn t have zinzen wanted how do i copy the information from the items in this list to paste in whatever i m using to share with him nb this way of sharing is for people who don t want to use the built in sharing via the zinzen website sharing via a private public link that redirects the clicker to the shared content via their own zinzen interface is preferred from a zinzen perspective but in this case we just want the text links | 0 |
2,285 | 5,109,851,390 | IssuesEvent | 2017-01-05 22:05:38 | pelias/wof-pip-service | https://api.github.com/repos/pelias/wof-pip-service | closed | Read WOF file path from meta files | processed | Currently the wof-pip-service [calculates](https://github.com/pelias/wof-pip-service/blob/master/src/components/loadJSON.js#L10-L17) the filename for each WOF record it loads from the ID. This works right now, but there's no reason why our code should have knowledge of the file format, as it may (and probably will) change in the future. Instead, we should use the `path` attribute in the CSV files (look at the [country meta file](https://media.githubusercontent.com/media/whosonfirst-data/whosonfirst-data/master/meta/wof-country-latest.csv) for some examples). | 1.0 | Read WOF file path from meta files - Currently the wof-pip-service [calculates](https://github.com/pelias/wof-pip-service/blob/master/src/components/loadJSON.js#L10-L17) the filename for each WOF record it loads from the ID. This works right now, but there's no reason why our code should have knowledge of the file format, as it may (and probably will) change in the future. Instead, we should use the `path` attribute in the CSV files (look at the [country meta file](https://media.githubusercontent.com/media/whosonfirst-data/whosonfirst-data/master/meta/wof-country-latest.csv) for some examples). | process | read wof file path from meta files currently the wof pip service the filename for each wof record it loads from the id this works right now but there s no reason why our code should have knowledge of the file format as it may and probably will change in the future instead we should use the path attribute in the csv files look at the for some examples | 1 |
90,266 | 3,813,678,445 | IssuesEvent | 2016-03-28 07:39:04 | xcat2/xcat-core | https://api.github.com/repos/xcat2/xcat-core | closed | [fvt]2.12: docker command man page need modified | component:docker priority:normal type:bug | env:ubuntu14.04.3
xcatbuild:
```
root@c910f04x30v14:~# lsdef -v
lsdef - Version 2.12 (git commit 08bac0d779d04a59a4e6397bb13b50bb89be17b2, built Mon Mar 21 15:32:19 EDT 2016)
How to reproduce:
root@c910f04x30v14:~# mkdocker -v
Version string for command mkdocker cannot be found
-------------------------------------> not correct for -v flag here
man root@c910f04x30v14:~# man mkdocker
NAME
mkdocker - Create docker instance.
SYNOPSIS
mkdocker noderange [image=image_name [command=command]] [dockerflag=flags_to_create_instance]
mkdocker [-h|--help]
mkdocker {-v|--version}
--------------------------> no -f flag for command
``` | 1.0 | [fvt]2.12๏ผdocker command man page need modified - env:ubuntu14.04.3
xcatbuild:
```
root@c910f04x30v14:~# lsdef -v
lsdef - Version 2.12 (git commit 08bac0d779d04a59a4e6397bb13b50bb89be17b2, built Mon Mar 21 15:32:19 EDT 2016)
How to reproduce:
root@c910f04x30v14:~# mkdocker -v
Version string for command mkdocker cannot be found
-------------------------------------> not correct for -v flag here
man root@c910f04x30v14:~# man mkdocker
NAME
mkdocker - Create docker instance.
SYNOPSIS
mkdocker noderange [image=image_name [command=command]] [dockerflag=flags_to_create_instance]
mkdocker [-h|--help]
mkdocker {-v|--version}
--------------------------> no -f flag for command
``` | non_process | ๏ผdocker command man page need modified env xcatbuild root lsdef v lsdef version git commit built mon mar edt how to reproduce root mkdocker v version string for command mkdocker cannot be found not correct for v flag here man root man mkdocker name mkdocker create docker instance synopsis mkdocker noderange mkdocker mkdocker v version no f flag for command | 0 |
221,339 | 7,382,193,798 | IssuesEvent | 2018-03-15 03:14:52 | CS2103JAN2018-W13-B4/main | https://api.github.com/repos/CS2103JAN2018-W13-B4/main | closed | 8. As a user I want to delete a tag from a task | priority.medium type.story | ... so that I can remove a tag from a task that no longer belongs to the group. | 1.0 | 8. As a user I want to delete a tag from a task - ... so that I can remove a tag from a task that no longer belongs to the group. | non_process | as a user i want to delete a tag from a task so that i can remove a tag from a task that no longer belongs to the group | 0 |
238,942 | 7,784,843,475 | IssuesEvent | 2018-06-06 14:21:30 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Allow emailing from address links in SubPanels | Fix Proposed Medium Priority Resolved: Next Release bug category:emails | <!--- Provide a general summary of the issue in the **Title** above -->
If I have an Opportunity/Account (for instance), I would like to be able to click on the Email address of a related Contact in the sub-panel and have the outgoing email automatically associated with the Opportunity or Account.
<!--- Before you open an issue, please check if a similar issue already exists or has been closed before. --->
<!--- If you have discovered a security risk please report it by emailing security@suitecrm.com. This will be delivered to the product team who handle security issues. Please don't disclose security bugs publicly until they have been handled by the security team. --->
#### Issue
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently this kind of thing only works with Accounts, and only by clicking on the email at the top of the account page. Clicking on a link in a subpanel brings up the compose window, but the email is not associated with the record, and **the email address is not filled in** (is this a bug?).
#### Expected Behavior
<!--- Tell us what should happen -->
When clicking on an email address in the Contact subpanel of a Account or Opportunity record, the compose window should be prefilled with the associated Account or Opportunity, and the email address should be filled for the 'To:' field.
#### Actual Behavior
<!--- Tell us what happens instead -->
<!--- Also please check relevant logs (suitecrm.log, php error.log etc.) -->
Currently, when clicking an email address in the Contacts subpanel, the compose window is empty. The email is not associated with the record, and the 'To:' field is not filled in.
#### Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
This isn't a fix, but I've found the code that creates the links is the populateComposeViewFields() function in the modules/Emails/EmailUI.php file.
If I could figure out how to get the Account or Opportunity ID from inside this function I could write some code to do what I want, but I haven't been able to figure it out yet.
**Any help with this would be greatly appreciated!**
#### Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Open up an Account
2. Click on an email address in the Contacts subpanel
3. The compose email window is empty
#### Context
<!--- How has this bug affected you? What were you trying to accomplish? -->
<!--- If you feel this should be a low/medium/high priority then please state so -->
We need a way to automatically associate an email with the Account or Opportunity that we're working with. Manually associating the email is extremely time consuming and error prone.
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.10.5 (with patch from #5935 applied)
* Browser name and version: Chrome Version 66.0.3359.181 (Official Build) (64-bit)
* Environment name and version: MariaDB (Galera Cluster) v10.1.29-6, Apache 2.4.18-2ubuntu3.8, PHP 7.0.30-0ubuntu0.16.04.1
* Operating System and version: Ubuntu 16.04
| 1.0 | Allow emailing from address links in SubPanels - <!--- Provide a general summary of the issue in the **Title** above -->
If I have an Opportunity/Account (for instance), I would like to be able to click on the Email address of a related Contact in the sub-panel and have the outgoing email automatically associated with the Opportunity or Account.
<!--- Before you open an issue, please check if a similar issue already exists or has been closed before. --->
<!--- If you have discovered a security risk please report it by emailing security@suitecrm.com. This will be delivered to the product team who handle security issues. Please don't disclose security bugs publicly until they have been handled by the security team. --->
#### Issue
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently this kind of thing only works with Accounts, and only by clicking on the email at the top of the account page. Clicking on a link in a subpanel brings up the compose window, but the email is not associated with the record, and **the email address is not filled in** (is this a bug?).
#### Expected Behavior
<!--- Tell us what should happen -->
When clicking on an email address in the Contact subpanel of a Account or Opportunity record, the compose window should be prefilled with the associated Account or Opportunity, and the email address should be filled for the 'To:' field.
#### Actual Behavior
<!--- Tell us what happens instead -->
<!--- Also please check relevant logs (suitecrm.log, php error.log etc.) -->
Currently, when clicking an email address in the Contacts subpanel, the compose window is empty. The email is not associated with the record, and the 'To:' field is not filled in.
#### Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
This isn't a fix, but I've found the code that creates the links is the populateComposeViewFields() function in the modules/Emails/EmailUI.php file.
If I could figure out how to get the Account or Opportunity ID from inside this function I could write some code to do what I want, but I haven't been able to figure it out yet.
**Any help with this would be greatly appreciated!**
#### Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Open up an Account
2. Click on an email address in the Contacts subpanel
3. The compose email window is empty
#### Context
<!--- How has this bug affected you? What were you trying to accomplish? -->
<!--- If you feel this should be a low/medium/high priority then please state so -->
We need a way to automatically associate an email with the Account or Opportunity that we're working with. Manually associating the email is extremely time consuming and error prone.
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.10.5 (with patch from #5935 applied)
* Browser name and version: Chrome Version 66.0.3359.181 (Official Build) (64-bit)
* Environment name and version: MariaDB (Galera Cluster) v10.1.29-6, Apache 2.4.18-2ubuntu3.8, PHP 7.0.30-0ubuntu0.16.04.1
* Operating System and version: Ubuntu 16.04
| non_process | allow emailing from address links in subpanels if i have an opportunity account for instance i would like to be able to click on the email address of a related contact in the sub panel and have the outgoing email automatically associated with the opportunity or account issue currently this kind of thing only works with accounts and only by clicking on the email at the top of the account page clicking on a link in a subpanel brings up the compose window but the email is not associated with the record and the email address is not filled in is this a bug expected behavior when clicking on an email address in the contact subpanel of a account or opportunity record the compose window should be prefilled with the associated account or opportunity and the email address should be filled for the to field actual behavior currently when clicking an email address in the contacts subpanel the compose window is empty the email is not associated with the record and the to field is not filled in possible fix this isn t a fix but i ve found the code that creates the links is the populatecomposeviewfields function in the modules emails emailui php file if i could figure out how to get the account or opportunity id from inside this function i could write some code to do what i want but i haven t been able to figure it out yet any help with this would be greatly appreciated steps to reproduce open up an account click on an email address in the contacts subpanel the compose email window is empty context we need a way to automatically associate an email with the account or opportunity that we re working with manually associating the email is extremely time consuming and error prone your environment suitecrm version used with patch from applied browser name and version chrome version official build bit environment name and version mariadb galera cluster apache php operating system and version ubuntu | 0 |
17,510 | 23,321,436,301 | IssuesEvent | 2022-08-08 16:45:29 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | [processor/tailsamplingprocessor] move it to a stable component | processor/tailsampling | **Is your feature request related to a problem? Please describe.**
`tailsamplingprocessor` has been in beta for a while, is there any issue or missing feature we can help with?
It probably won't be included in distributions like aws-otel-collector if is not considered stable.
**Describe the solution you'd like**
`tailsamplingprocessor` to be considered a stable component.
**Describe alternatives you've considered**
**Additional context**
Probably related to #1797
| 1.0 | [processor/tailsamplingprocessor] move it to a stable component - **Is your feature request related to a problem? Please describe.**
`tailsamplingprocessor` has been in beta for a while, is there any issue or missing feature we can help with?
It probably won't be included in distributions like aws-otel-collector if is not considered stable.
**Describe the solution you'd like**
`tailsamplingprocessor` to be considered a stable component.
**Describe alternatives you've considered**
**Additional context**
Probably related to #1797
| process | move it to a stable component is your feature request related to a problem please describe tailsamplingprocessor has been in beta for a while is there any issue or missing feature we can help with it probably won t be included in distributions like aws otel collector if is not considered stable describe the solution you d like tailsamplingprocessor to be considered a stable component describe alternatives you ve considered additional context probably related to | 1 |
394,006 | 27,017,355,074 | IssuesEvent | 2023-02-10 20:47:27 | networkx/networkx | https://api.github.com/repos/networkx/networkx | closed | README mentions unexisting force.py file | Documentation | The [README](https://github.com/networkx/networkx/blob/main/examples/external/force/README.txt) mentions the unexisting ``force.py`` file to generate the necessary data for the example.
| 1.0 | README mentions unexisting force.py file - The [README](https://github.com/networkx/networkx/blob/main/examples/external/force/README.txt) mentions the unexisting ``force.py`` file to generate the necessary data for the example.
| non_process | readme mentions unexisting force py file the mentions the unexisting force py file to generate the necessary data for the example | 0 |
8,874 | 11,968,064,710 | IssuesEvent | 2020-04-06 07:59:07 | Ghost-chu/QuickShop-Reremake | https://api.github.com/repos/Ghost-chu/QuickShop-Reremake | reopened | Refactoring | Feature Request In Process v4 goal | Needs heavily refactoring
- be SOLID
- [ ] Single Responsibility Principle
- [x] Open Closed Principle
- [x] Liskov Substitution Principle
- [x] Interface Segregation Principle
- [ ] Dependency Inversion Principle
- [ ] remove cyclic dependencies ( MsgUtil is depending on Util and the other way around, same for DatabaseHelper)
- [x] properly inject dependencies instead of static abuse shit
- [ ] properly handle configurations
- [x] write Adapter for used functionality in libraries
- [x] use PreparedStatement in the right way
- [x] cache data instead of writing each time -> persist every N minutes
- [x] encapsulate the classes to reduce complexity
- [x] cleanup commands, introduce a subcommand system
- [x] use Player#hasPermission instead of using an explicit PermissionProvider that is already injected into Bukkit system
- [x] remove duplication of ServerNMS and ItemNMS
- [ ] refactor MsgUtil
- reduce code duplication
- apply SRP
- [x] write javadoc for plugin
- [x] Remove classes that are not used anymore
- [x] refactor shop loader | 1.0 | Refactoring - Needs heavily refactoring
- be SOLID
- [ ] Single Responsibility Principle
- [x] Open Closed Principle
- [x] Liskov Substitution Principle
- [x] Interface Segregation Principle
- [ ] Dependency Inversion Principle
- [ ] remove cyclic dependencies ( MsgUtil is depending on Util and the other way around, same for DatabaseHelper)
- [x] properly inject dependencies instead of static abuse shit
- [ ] properly handle configurations
- [x] write Adapter for used functionality in libraries
- [x] use PreparedStatement in the right way
- [x] cache data instead of writing each time -> persist every N minutes
- [x] encapsulate the classes to reduce complexity
- [x] cleanup commands, introduce a subcommand system
- [x] use Player#hasPermission instead of using an explicit PermissionProvider that is already injected into Bukkit system
- [x] remove duplication of ServerNMS and ItemNMS
- [ ] refactor MsgUtil
- reduce code duplication
- apply SRP
- [x] write javadoc for plugin
- [x] Remove classes that are not used anymore
- [x] refactor shop loader | process | refactoring needs heavily refactoring be solid single responsibility principle open closed principle liskov substitution principle interface segregation principle dependency inversion principle remove cyclic dependencies msgutil is depending on util and the other way around same for databasehelper properly inject dependencies instead of static abuse shit properly handle configurations write adapter for used functionality in libraries use preparedstatement in the right way cache data instead of writing each time persist every n minutes encapsulate the classes to reduce complexity cleanup commands introduce a subcommand system use player haspermission instead of using an explicit permissionprovider that is already injected into bukkit system remove duplication of servernms and itemnms refactor msgutil reduce code duplication apply srp write javadoc for plugin remove classes that are not used anymore refactor shop loader | 1 |
281,833 | 24,423,626,527 | IssuesEvent | 2022-10-05 23:15:52 | holidaygarrison/Group3 | https://api.github.com/repos/holidaygarrison/Group3 | closed | Continuous integration with test infrastructure ready | testing | Does NOT need to have tests ready, but needs to be linked to CI with empty tests.
| 1.0 | Continuous integration with test infrastructure ready - Does NOT need to have tests ready, but needs to be linked to CI with empty tests.
| non_process | continuous integration with test infrastructure ready does not need to have tests ready but needs to be linked to ci with empty tests | 0 |
11,339 | 14,149,133,597 | IssuesEvent | 2020-11-11 00:05:23 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Time Served columns are missing when piping messages from Java | log-processing log/date/time format question | Hi,
I have a Java process that reads messages from Elasticsearch and pipes them to goaccess.
If I use the `--real-time-html` option when I start goaccess process from within Java, the Time Served columns (Avg, Max, Cum) are missing (attached real-time-report-without-ts).
This is the command I use from within Java:
```
/usr/local/bin/goaccess --output /opt/aua/goaccess/index.html --time-format '%T' --date-format '%d/%b/%Y' --log-format '%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^ %^ %^' --real-time-html --fifo-in /tmp/goaccess.in --fifo-out /tmp/goaccess.out --invalid-requests /tmp/goaccess.invalid
```
If I don't use the `--real-time-html` option when starting `goaccess`, the T.S. columns are generated as expected (attached static-report-with-ts).
```
/usr/local/bin/goaccess --output /opt/aua/goaccess/index.html --time-format '%T' --date-format '%d/%b/%Y' --log-format '%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^ %^ %^'
```
If I change my Java program to write the messages into a FileOutputStream instead of the Process.getOutputStream() and than use the file generated with `--real-time-html` everything works as expected (attached real-time-report-with-ts):
```
tail -F /tmp/messages.log | /usr/local/bin/goaccess --output /var/www/html/report.html --time-format '%T' --date-format '%d/%b/%Y' --log-format '%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^ %^ %^' --real-time-html --fifo-in /tmp/goaccess.in --fifo-out /tmp/goaccess.out --invalid-requests /tmp/goaccess.invalid
```
I feel this is some buffering issue (something like `grep --line-buffered`) but I'm not sure what exactly. In my code I do the following for every log message:
```
byte[] messageBytes = (msgStr + System.lineSeparator()).getBytes();
processOutputStream.write(messageBytes);
processOutputStream.flush();
```
I configured goaccess with `--with-getline` thinking maybe it will make a difference but it didn't.
I also tried to configure with `--enable-debug` and run goaccess with `--debug-file` - the debug log file was created but it was empty.
BTW the https://goaccess.io/man mentions the option "--log-debug" but seems that the actual option is "--debug-file".
[goaccess-reports.zip](https://github.com/allinurl/goaccess/files/4050557/goaccess-reports.zip)
Can you please suggest what I should check?
My setup:
```
$ goaccess --version
GoAccess - 1.3.
For more details visit: http://goaccess.io
Copyright (C) 2009-2016 by Gerardo Orellana
Build configure arguments:
--enable-debug
--enable-utf8
--with-openssl
$ cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)
```
Thanks. | 1.0 | Time Served columns are missing when piping messages from Java - Hi,
I have a Java process that reads messages from Elasticsearch and pipes them to goaccess.
If I use the `--real-time-html` option when I start goaccess process from within Java, the Time Served columns (Avg, Max, Cum) are missing (attached real-time-report-without-ts).
This is the command I use from within Java:
```
/usr/local/bin/goaccess --output /opt/aua/goaccess/index.html --time-format '%T' --date-format '%d/%b/%Y' --log-format '%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^ %^ %^' --real-time-html --fifo-in /tmp/goaccess.in --fifo-out /tmp/goaccess.out --invalid-requests /tmp/goaccess.invalid
```
If I don't use the `--real-time-html` option when starting `goaccess`, the T.S. columns are generated as expected (attached static-report-with-ts).
```
/usr/local/bin/goaccess --output /opt/aua/goaccess/index.html --time-format '%T' --date-format '%d/%b/%Y' --log-format '%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^ %^ %^'
```
If I change my Java program to write the messages into a FileOutputStream instead of the Process.getOutputStream() and than use the file generated with `--real-time-html` everything works as expected (attached real-time-report-with-ts):
```
tail -F /tmp/messages.log | /usr/local/bin/goaccess --output /var/www/html/report.html --time-format '%T' --date-format '%d/%b/%Y' --log-format '%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^" %T %^ %^ %^' --real-time-html --fifo-in /tmp/goaccess.in --fifo-out /tmp/goaccess.out --invalid-requests /tmp/goaccess.invalid
```
I feel this is some buffering issue (something like `grep --line-buffered`) but I'm not sure what exactly. In my code I do the following for every log message:
```
byte[] messageBytes = (msgStr + System.lineSeparator()).getBytes();
processOutputStream.write(messageBytes);
processOutputStream.flush();
```
I configured goaccess with `--with-getline` thinking maybe it will make a difference but it didn't.
I also tried to configure with `--enable-debug` and run goaccess with `--debug-file` - the debug log file was created but it was empty.
BTW the https://goaccess.io/man mentions the option "--log-debug" but seems that the actual option is "--debug-file".
[goaccess-reports.zip](https://github.com/allinurl/goaccess/files/4050557/goaccess-reports.zip)
Can you please suggest what I should check?
My setup:
```
$ goaccess --version
GoAccess - 1.3.
For more details visit: http://goaccess.io
Copyright (C) 2009-2016 by Gerardo Orellana
Build configure arguments:
--enable-debug
--enable-utf8
--with-openssl
$ cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)
```
Thanks. | process | time served columns are missing when piping messages from java hi i have a java process that reads messages from elasticsearch and pipes them to goaccess if i use the real time html option when i start goaccess process from within java the time served columns avg max cum are missing attached real time report without ts this is the command i use from within java usr local bin goaccess output opt aua goaccess index html time format t date format d b y log format h r s b r u t real time html fifo in tmp goaccess in fifo out tmp goaccess out invalid requests tmp goaccess invalid if i don t use the real time html option when starting goaccess the t s columns are generated as expected attached static report with ts usr local bin goaccess output opt aua goaccess index html time format t date format d b y log format h r s b r u t if i change my java program to write the messages into a fileoutputstream instead of the process getoutputstream and than use the file generated with real time html everything works as expected attached real time report with ts tail f tmp messages log usr local bin goaccess output var www html report html time format t date format d b y log format h r s b r u t real time html fifo in tmp goaccess in fifo out tmp goaccess out invalid requests tmp goaccess invalid i feel this is some buffering issue something like grep line buffered but i m not sure what exactly in my code i do the following for every log message byte messagebytes msgstr system lineseparator getbytes processoutputstream write messagebytes processoutputstream flush i configured goaccess with with getline thinking maybe it will make a difference but it didn t i also tried to configure with enable debug and run goaccess with debug file the debug log file was created but it was empty btw the mentions the option log debug but seems that the actual option is debug file can you please suggest what i should check my setup goaccess version goaccess for more details visit copyright c by gerardo orellana build configure arguments enable debug enable with openssl cat etc centos release centos linux release core thanks | 1 |
1,808 | 4,542,337,516 | IssuesEvent | 2016-09-09 20:50:45 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | sct_process_segmentation: Bug using '-vert' flag | bug priority: high sct_process_segmentation | Command:
```
sct_process_segmentation -i /Volumes/folder_shared/als_gm_atrophy/test_benjamin/143/t2/t2_seg.nii.gz -p csa -output-type txt -vert 2:13 -vertfile /Volumes/folder_shared/als_gm_atrophy/test_benjamin/143/t2/label/template/PAM50_levels.nii.gz
```
Error:
```
Selected vertebral levels... 2:13
OK: /Volumes/folder_shared/als_gm_atrophy/test_benjamin/143/t2/label/template/PAM50_levels.nii.gz
Find slices corresponding to vertebral levels based on the centerline...
/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py:915: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
if vertebral_labeling_data[np.round(x_centerline_fit[i_z]), np.round(y_centerline_fit[i_z]), z_centerline[i_z]] in range(vert_levels_list[0], vert_levels_list[1]+1):
Traceback (most recent call last):
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 1035, in <module>
main(sys.argv[1:])
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 262, in main
compute_csa(fname_segmentation, output_prefix, param_default.suffix_csa_output_files, output_type, overwrite, verbose, remove_temp_files, step, smoothing_param, figure_fit, slices, vert_lev, fname_vertebral_labeling, algo_fitting = param.algo_fitting, type_window= param.type_window, window_length=param.window_length, angle_correction=angle_correction)
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 647, in compute_csa
slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels_based_centerline(vert_levels, im_vertebral_labeling.data, x_centerline_fit, y_centerline_fit, z_centerline)
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 919, in get_slices_matching_with_vertebral_levels_based_centerline
slices = str(min(matching_slices_centerline_vert_labeling))+':'+str(max(matching_slices_centerline_vert_labeling))
ValueError: min() arg is an empty sequence
``` | 1.0 | sct_process_segmentation: Bug using '-vert' flag - Command:
```
sct_process_segmentation -i /Volumes/folder_shared/als_gm_atrophy/test_benjamin/143/t2/t2_seg.nii.gz -p csa -output-type txt -vert 2:13 -vertfile /Volumes/folder_shared/als_gm_atrophy/test_benjamin/143/t2/label/template/PAM50_levels.nii.gz
```
Error:
```
Selected vertebral levels... 2:13
OK: /Volumes/folder_shared/als_gm_atrophy/test_benjamin/143/t2/label/template/PAM50_levels.nii.gz
Find slices corresponding to vertebral levels based on the centerline...
/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py:915: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
if vertebral_labeling_data[np.round(x_centerline_fit[i_z]), np.round(y_centerline_fit[i_z]), z_centerline[i_z]] in range(vert_levels_list[0], vert_levels_list[1]+1):
Traceback (most recent call last):
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 1035, in <module>
main(sys.argv[1:])
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 262, in main
compute_csa(fname_segmentation, output_prefix, param_default.suffix_csa_output_files, output_type, overwrite, verbose, remove_temp_files, step, smoothing_param, figure_fit, slices, vert_lev, fname_vertebral_labeling, algo_fitting = param.algo_fitting, type_window= param.type_window, window_length=param.window_length, angle_correction=angle_correction)
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 647, in compute_csa
slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels_based_centerline(vert_levels, im_vertebral_labeling.data, x_centerline_fit, y_centerline_fit, z_centerline)
File "/Users/chgroc/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 919, in get_slices_matching_with_vertebral_levels_based_centerline
slices = str(min(matching_slices_centerline_vert_labeling))+':'+str(max(matching_slices_centerline_vert_labeling))
ValueError: min() arg is an empty sequence
``` | process | sct process segmentation bug using vert flag command sct process segmentation i volumes folder shared als gm atrophy test benjamin seg nii gz p csa output type txt vert vertfile volumes folder shared als gm atrophy test benjamin label template levels nii gz error selected vertebral levels ok volumes folder shared als gm atrophy test benjamin label template levels nii gz find slices corresponding to vertebral levels based on the centerline users chgroc code spinalcordtoolbox scripts sct process segmentation py visibledeprecationwarning using a non integer number instead of an integer will result in an error in the future if vertebral labeling data np round y centerline fit z centerline in range vert levels list vert levels list traceback most recent call last file users chgroc code spinalcordtoolbox scripts sct process segmentation py line in main sys argv file users chgroc code spinalcordtoolbox scripts sct process segmentation py line in main compute csa fname segmentation output prefix param default suffix csa output files output type overwrite verbose remove temp files step smoothing param figure fit slices vert lev fname vertebral labeling algo fitting param algo fitting type window param type window window length param window length angle correction angle correction file users chgroc code spinalcordtoolbox scripts sct process segmentation py line in compute csa slices vert levels list warning get slices matching with vertebral levels based centerline vert levels im vertebral labeling data x centerline fit y centerline fit z centerline file users chgroc code spinalcordtoolbox scripts sct process segmentation py line in get slices matching with vertebral levels based centerline slices str min matching slices centerline vert labeling str max matching slices centerline vert labeling valueerror min arg is an empty sequence | 1 |
1,755 | 3,442,146,468 | IssuesEvent | 2015-12-14 21:23:46 | docker/docker | https://api.github.com/repos/docker/docker | closed | TCPDump in privileged mode | area/security/apparmor kind/bug | Hi All,
When I run an ubuntu container with privileged mode which is needed to run Mininet, I cannot successfully install tcpdump. What's the solution for the error: libcrypto.so.1.0.0: cannot open shared object file: Permission denied?
To reproduce this issue:
sudo docker run --name="ryu-mininet" --privileged=true -it imehrdad2012/mininet /bin/bash
root@152f3f17bef3:/# sudo apt-get install tcpdump
tcpdump: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: Permission denied
---- More information:
docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
Linux beirut 3.13.0-53-generic #89-Ubuntu SMP Wed May 20 10:34:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
docker info
Containers: 3
Images: 6
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 12
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-53-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 32
Total Memory: 125.9 GiB
Name: beirut
ID: NRMH:IAGL:AIND:5ZDM:4O6Y:X6CB:EEPJ:HRSN:3KLK:RZ2N:EHVN:ZJC4
Username: imehrdad2012
Registry: https://index.docker.io/v1/
WARNING: No swap limit support | True | TCPDump in privileged mode - Hi All,
When I run an ubuntu container with privileged mode which is needed to run Mininet, I cannot successfully install tcpdump. What's the solution for the error: libcrypto.so.1.0.0: cannot open shared object file: Permission denied?
To reproduce this issue:
sudo docker run --name="ryu-mininet" --privileged=true -it imehrdad2012/mininet /bin/bash
root@152f3f17bef3:/# sudo apt-get install tcpdump
tcpdump: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: Permission denied
---- More information:
docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
Linux beirut 3.13.0-53-generic #89-Ubuntu SMP Wed May 20 10:34:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
docker info
Containers: 3
Images: 6
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 12
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-53-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 32
Total Memory: 125.9 GiB
Name: beirut
ID: NRMH:IAGL:AIND:5ZDM:4O6Y:X6CB:EEPJ:HRSN:3KLK:RZ2N:EHVN:ZJC4
Username: imehrdad2012
Registry: https://index.docker.io/v1/
WARNING: No swap limit support | non_process | tcpdump in privileged mode hi all when i run an ubuntu container with privileged mode which is needed to run mininet i cannot successfully install tcpdump what s the solution for the error libcrypto so cannot open shared object file permission denied to reproduce this issue sudo docker run name ryu mininet privileged true it mininet bin bash root sudo apt get install tcpdump tcpdump error while loading shared libraries libcrypto so cannot open shared object file permission denied more information docker version client version client api version go version client git commit client os arch client linux server version server api version go version server git commit server os arch server linux linux beirut generic ubuntu smp wed may utc gnu linux docker info containers images storage driver aufs root dir var lib docker aufs backing filesystem extfs dirs supported false execution driver native logging driver json file kernel version generic operating system ubuntu lts cpus total memory gib name beirut id nrmh iagl aind eepj hrsn ehvn username registry warning no swap limit support | 0 |
78,395 | 10,059,507,436 | IssuesEvent | 2019-07-22 16:35:56 | chartjs/Chart.js | https://api.github.com/repos/chartjs/Chart.js | closed | Min and Max on tick cartesian axes | type: documentation | Documentation Is:
<!-- Please place an x (no spaces!) in all [ ] that apply -->
- [ ] Missing or needed
- [ ] Confusing
- [ X] Not Sure?
### MIN and MAX properties
Checking the documentation, commited into `master`, I see that the MIN and MAX property are documented for tick configuration of cartesian axes.
The type of property is defined as `number`.
Nevertheless the CATEGORY and TIME axes don't have numbers but `string` or `time`.
Furthermore I see that MIN and MAX has been removed from `time` object.
Now I'm confused if it's a mistake on property definition (it should be `number!time` at least) or if min and max are not longer available on `time` object.
| 1.0 | Min and Max on tick cartesian axes - Documentation Is:
<!-- Please place an x (no spaces!) in all [ ] that apply -->
- [ ] Missing or needed
- [ ] Confusing
- [ X] Not Sure?
### MIN and MAX properties
Checking the documentation, commited into `master`, I see that the MIN and MAX property are documented for tick configuration of cartesian axes.
The type of property is defined as `number`.
Nevertheless the CATEGORY and TIME axes don't have numbers but `string` or `time`.
Furthermore I see that MIN and MAX has been removed from `time` object.
Now I'm confused if it's a mistake on property definition (it should be `number!time` at least) or if min and max are not longer available on `time` object.
| non_process | min and max on tick cartesian axes documentation is missing or needed confusing not sure min and max properties checking the documentation commited into master i see that the min and max property are documented for tick configuration of cartesian axes the type of property is defined as number nevertheless the category and time axes don t have numbers but string or time furthermore i see that min and max has been removed from time object now i m confused if it s a mistake on property definition it should be number time at least or if min and max are not longer available on time object | 0 |
6,475 | 9,551,323,187 | IssuesEvent | 2019-05-02 14:13:21 | ropensci/software-review-meta | https://api.github.com/repos/ropensci/software-review-meta | closed | Change issue template to include pre-sub template | process | Many pre-sub inquiries fill out a full submission
We agree that it's probably because our issue template simply has the full submission template in it https://github.com/ropensci/onboarding/blob/master/issue_template.md
I think maybe there is a way to have different issue templates for different things? Or am I imagining that?
If not then we could just add the pre-sub template to the top of the issue template, and say remove one or the other depending on what you're doing | 1.0 | Change issue template to include pre-sub template - Many pre-sub inquiries fill out a full submission
We agree that it's probably because our issue template simply has the full submission template in it https://github.com/ropensci/onboarding/blob/master/issue_template.md
I think maybe there is a way to have different issue templates for different things? Or am I imagining that?
If not then we could just add the pre-sub template to the top of the issue template, and say remove one or the other depending on what you're doing | process | change issue template to include pre sub template many pre sub inquiries fill out a full submission we agree that it s probably because our issue template simply has the full submission template in it i think maybe there is a way to have different issue templates for different things or am i imagining that if not then we could just add the pre sub template to the top of the issue template and say remove one or the other depending on what you re doing | 1 |
12,951 | 15,309,098,529 | IssuesEvent | 2021-02-24 23:43:26 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Loop through Azure Devops parameters. The script example is not valid | cba devops-cicd-process/tech devops/prod doc-bug |
Hi,
I was using this doc to learn how to handle parameters and I found a small issue with one of the examples.
Chapter: Loop through parameters (https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script#loop-through-parameters)
The script example is not correct.
```
...
steps:
- ${{ each parameter in parameters }}:
- script: echo ${{ parameters.Key }}
- script: echo ${{ parameters.Value }}
```
For both echos, the variable name should be 'parameter' instead of 'parameters'.
Cordially,
Romain P.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Loop through Azure Devops parameters. The script example is not valid -
Hi,
I was using this doc to learn how to handle parameters and I found a small issue with one of the examples.
Chapter: Loop through parameters (https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script#loop-through-parameters)
The script example is not correct.
```
...
steps:
- ${{ each parameter in parameters }}:
- script: echo ${{ parameters.Key }}
- script: echo ${{ parameters.Value }}
```
For both echos, the variable name should be 'parameter' instead of 'parameters'.
Cordially,
Romain P.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | loop through azure devops parameters the script example is not valid hi i was using this doc to learn how to handle parameters and i found a small issue with one of the examples chapter loop through parameters the script example is not correct steps each parameter in parameters script echo parameters key script echo parameters value for both echos the variable name should be parameter instead of parameters cordially romain p document details โ do not edit this section it is required for docs microsoft com โ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
19,509 | 3,214,196,319 | IssuesEvent | 2015-10-06 23:49:03 | prettydiff/prettydiff | https://api.github.com/repos/prettydiff/prettydiff | closed | TypeError: Cannot read property 'isFile' of undefined | Defect Not started | @prettydiff
Referring to my previous issue from 'atom-beautify' in which you replied:
https://github.com/Glavin001/atom-beautify/issues/589
Updated prettydiff to version 1.14.4 (!) after reading your comments and now I receive the following error when trying to run 'prettydiff' from terminal:
```/usr/local/lib/node_modules/prettydiff/api/node-local.js:2124
if (stats.isFile() === true) {
^
TypeError: Cannot read property 'isFile' of undefined
at /usr/local/lib/node_modules/prettydiff/api/node-local.js:2124:26
at FSReqWrap.oncomplete (fs.js:82:15)
...
```
I have tried reinstalling through
- ```$ npm uninstall -g prettydiff```
```prettydiff@1.14.4 node_modules/prettydiff```
- ```$ npm install -g prettydiff```
```/usr/local/bin/prettydiff -> /usr/local/lib/node_modules/prettydiff/bin/prettydiff
/usr/local/lib
โโโ prettydiff@1.14.4``` | 1.0 | TypeError: Cannot read property 'isFile' of undefined - @prettydiff
Referring to my previous issue from 'atom-beautify' in which you replied:
https://github.com/Glavin001/atom-beautify/issues/589
Updated prettydiff to version 1.14.4 (!) after reading your comments and now I receive the following error when trying to run 'prettydiff' from terminal:
```/usr/local/lib/node_modules/prettydiff/api/node-local.js:2124
if (stats.isFile() === true) {
^
TypeError: Cannot read property 'isFile' of undefined
at /usr/local/lib/node_modules/prettydiff/api/node-local.js:2124:26
at FSReqWrap.oncomplete (fs.js:82:15)
...
```
I have tried reinstalling through
- ```$ npm uninstall -g prettydiff```
```prettydiff@1.14.4 node_modules/prettydiff```
- ```$ npm install -g prettydiff```
```/usr/local/bin/prettydiff -> /usr/local/lib/node_modules/prettydiff/bin/prettydiff
/usr/local/lib
โโโ prettydiff@1.14.4``` | non_process | typeerror cannot read property isfile of undefined prettydiff referring to my previous issue from atom beautify in which you replied updated prettydiff to version after reading your comments and now i receive the following error when trying to run prettydiff from terminal usr local lib node modules prettydiff api node local js if stats isfile true typeerror cannot read property isfile of undefined at usr local lib node modules prettydiff api node local js at fsreqwrap oncomplete fs js i have tried reinstalling through npm uninstall g prettydiff prettydiff node modules prettydiff npm install g prettydiff usr local bin prettydiff usr local lib node modules prettydiff bin prettydiff usr local lib โโโ prettydiff | 0 |
147,445 | 11,788,359,497 | IssuesEvent | 2020-03-17 15:27:05 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Playbook field not shown in Job Template form for user with execute permissions | component:ui priority:medium state:needs_test type:bug | ##### ISSUE TYPE
- Bug Report
##### SUMMARY
The playbook field is missing from the Job Template form when viewed by a user with only execute permissions on the JT.
<img width="1486" alt="Screen Shot 2020-01-27 at 9 22 36 AM" src="https://user-images.githubusercontent.com/9889020/73182228-a437d680-40e6-11ea-909b-2163d4512ccb.png">
The playbook field is available through the API though so I believe it should be shown in this scenario.
##### STEPS TO REPRODUCE
Grant execute permissions to a user without any existing permissions on a JT, view the JT with said user.
##### EXPECTED RESULTS
Read-only playbook field shown
##### ACTUAL RESULTS
Playbook field hidden | 1.0 | Playbook field not shown in Job Template form for user with execute permissions - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
The playbook field is missing from the Job Template form when viewed by a user with only execute permissions on the JT.
<img width="1486" alt="Screen Shot 2020-01-27 at 9 22 36 AM" src="https://user-images.githubusercontent.com/9889020/73182228-a437d680-40e6-11ea-909b-2163d4512ccb.png">
The playbook field is available through the API though so I believe it should be shown in this scenario.
##### STEPS TO REPRODUCE
Grant execute permissions to a user without any existing permissions on a JT, view the JT with said user.
##### EXPECTED RESULTS
Read-only playbook field shown
##### ACTUAL RESULTS
Playbook field hidden | non_process | playbook field not shown in job template form for user with execute permissions issue type bug report summary the playbook field is missing from the job template form when viewed by a user with only execute permissions on the jt img width alt screen shot at am src the playbook field is available through the api though so i believe it should be shown in this scenario steps to reproduce grant execute permissions to a user without any existing permissions on at jt view the jt with said user expected results read only playbook field shown actual results playbook field hidden | 0 |
579,241 | 17,186,331,683 | IssuesEvent | 2021-07-16 02:55:05 | trofimarket/problems | https://api.github.com/repos/trofimarket/problems | closed | problem: bidders cannot settle final payment on other chains | priority 0 | problem: bidders cannot settle final payment on other chains
solution: change the way we do payments and do it like this instead:
1. Take the merchant's platform tax. This is the amount that users must lock (as a deposit) in order to bid. For example if the platform tax is 1% and the user bids 1 BTC, they need to lock 0.01BTC (on BSC), can be in BNB, wBTC, wETH, whatever is already implemented in the SC. This is unlocked if they are outbid.
The SC does not need to track the total bid amount, just the deposit amount (in BTC). The UI can simply calculate what the total bid actually is, based on the deposit amount and the platform tax. So if a user locks 10 ETH in a bid, and chainlink says that this is 0.5546 BTC at the time, and the merchant's tax rate is 1%, then the bid shown in the UI will be 55.46BTC. If that bid wins the auction, then the winner must pay the merchant 55.46-0.5546 = 54.9054 BTC or equivalent.
2. When setting up their account, merchants must input addresses for final settlement on: BSC chain, ETH chain, Bitcoin chain. This should be stored in the auction object so that users can verify not only through the app but through block explorer.
3. The user pays final settlement directly to the merchant's wallet. Our SC does not need to know about it.
4. The merchant releases the NFT to the bidder once final settlement has been made, or they can restart the auction if payment is not made within 24 hours.
5. Regardless of the outcome, the platform tax goes to the platform at the end of the auction.
| 1.0 | problem: bidders cannot settle final payment on other chains - problem: bidders cannot settle final payment on other chains
solution: change the way we do payments and do it like this instead:
1. Take the merchant's platform tax. This is the amount that users must lock (as a deposit) in order to bid. For example if the platform tax is 1% and the user bids 1 BTC, they need to lock 0.01BTC (on BSC), can be in BNB, wBTC, wETH, whatever is already implemented in the SC. This is unlocked if they are outbid.
The SC does not need to track the total bid amount, just the deposit amount (in BTC). The UI can simply calculate what the total bid actually is, based on the deposit amount and the platform tax. So if a user locks 10 ETH in a bid, and chainlink says that this is 0.5546 BTC at the time, and the merchant's tax rate is 1%, then the bid shown in the UI will be 55.46BTC. If that bid wins the auction, then the winner must pay the merchant 55.46-0.5546 = 54.9054 BTC or equivalent.
2. When setting up their account, merchants must input addresses for final settlement on: BSC chain, ETH chain, Bitcoin chain. This should be stored in the auction object so that users can verify not only through the app but through block explorer.
3. The user pays final settlement directly to the merchant's wallet. Our SC does not need to know about it.
4. The merchant releases the NFT to the bidder once final settlement has been made, or they can restart the auction if payment is not made within 24 hours.
5. Regardless of the outcome, the platform tax goes to the platform at the end of the auction.
| non_process | problem bidders cannot settle final payment on other chains problem bidders cannot settle final payment on other chains solution change the way we do payments and do it like this instead take the merchant s platform tax this is the amount that users must lock as a deposit in order to bid for example if the platform tax is and the user bids btc they need to lock on bsc can be in bnb wbtc weth whatever is already implemented in the sc this is unlocked if they are outbid the sc does not need to track the total bid amount just the deposit amount in btc the ui can simply calculate what the total bid actually is based on the deposit amount and the platform tax so if a user locks eth in a bid and chainlink says that this is btc at the time and the merchant s tax rate is then the bid shown in the ui will be if that bid wins the auction then the winner must pay the merchant btc or equivalent when setting up their account merchants must input addresses for final settlement on bsc chain eth chain bitcoin chain this should be stored in the auction object so that users can verify not only through the app but through block explorer the user pays final settlement directly to the merchant s wallet our sc does not need to know about it the merchant releases the nft to the bidder once final settlement has been made or they can restart the auction if payment is not made within hours regardless of the outcome the platform tax goes to the platform at the end of the auction | 0 |
13,622 | 16,236,494,315 | IssuesEvent | 2021-05-07 01:50:10 | Amr-Aboshama/XGeN | https://api.github.com/repos/Amr-Aboshama/XGeN | closed | OCR needs to define a minimum text size on the pdf | Preprocessor | when the text size is so small as in references, the OCR can't detect the spaces that between characters. | 1.0 | OCR needs to define a minimum text size on the pdf - when the text size is so small as in references, the OCR can't detect the spaces that between characters. | process | ocr needs to define a minimum text size on the pdf when the text size is so small as in references the ocr can t detect the spaces that between characters | 1 |
5,199 | 7,974,096,498 | IssuesEvent | 2018-07-17 03:14:47 | rubberduck-vba/Rubberduck | https://api.github.com/repos/rubberduck-vba/Rubberduck | closed | Parse Error (System.OutOfMemoryException) in RD 2.1.0.29597 | parse-tree-processing support | I've just installed RD 2.1.0.29597 on a Windows 7 Enterprise x64 with Office 2010 Professional Plus x86 (no complaints on installation), and am working in Word's VBA IDE. Word has multiple projects installed (most of which are not mine and locked to me).
Parsing often takes a seemingly endless amount of time, often resulting in both Word and the VBA IDE windows going white, or only showing frame outlines with no contents, or alternating between the two, and "(Not Responding)", and sometimes crashing.
When Parsing does "complete", a Parse Error is shown on the RD toolbar with a white X in a red circle to its right, and when that is clicked, the Search Results window loads, with only the Parse Errors tab, but that tab has no contents.
Whether I get to a Parse Error, or the whole thing just crashes may depend on which project is the current active project in the IDE's project explorer.
The header and first part of the log results (Minimum Log Level set to Error) are as follows:
2017-08-10 16:17:27.0166;ERROR-2.1.0.29597;Rubberduck.Parsing.VBA.ComponentParseTask;Exception thrown in thread 42 while parsing module NDeLCustomizations.CommandLineInterface, ParseTaskID 47373ac2-c3fb-46db-8cfb-bae1b3fdb64e.;System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
[Full log file attached.]
There are several successive System.OutOfMemoryException(s) shown for different modules of the same project. All projects compile without error, however.
Essentially, RD is unusable in for Word's VBA IDE. Have not had this problem with Excel, but have very few projects in Excel with much less code.
[RubberduckLog.txt](https://github.com/rubberduck-vba/Rubberduck/files/1216610/RubberduckLog.txt)
| 1.0 | Parse Error (System.OutOfMemoryException) in RD 2.1.0.29597 - I've just installed RD 2.1.0.29597 on a Windows 7 Enterprise x64 with Office 2010 Professional Plus x86 (no complaints on installation), and am working in Word's VBA IDE. Word has multiple projects installed (most of which are not mine and locked to me).
Parsing often takes a seemingly endless amount of time, often resulting in both Word and the VBA IDE windows going white, or only showing frame outlines with no contents, or alternating between the two, and "(Not Responding)", and sometimes crashing.
When Parsing does "complete", a Parse Error is shown on the RD toolbar with a white X in a red circle to its right, and when that is clicked, the Search Results window loads, with only the Parse Errors tab, but that tab has no contents.
Whether I get to a Parse Error, or the whole thing just crashes may depend on which project is the current active project in the IDE's project explorer.
The header and first part of the log results (Minimum Log Level set to Error) are as follows:
2017-08-10 16:17:27.0166;ERROR-2.1.0.29597;Rubberduck.Parsing.VBA.ComponentParseTask;Exception thrown in thread 42 while parsing module NDeLCustomizations.CommandLineInterface, ParseTaskID 47373ac2-c3fb-46db-8cfb-bae1b3fdb64e.;System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
[Full log file attached.]
There are several successive System.OutOfMemoryException(s) shown for different modules of the same project. All projects compile without error, however.
Essentially, RD is unusable in for Word's VBA IDE. Have not had this problem with Excel, but have very few projects in Excel with much less code.
[RubberduckLog.txt](https://github.com/rubberduck-vba/Rubberduck/files/1216610/RubberduckLog.txt)
| process | parse error system outofmemoryexception in rd i ve just installed rd on a windows enterprise with office professional plus no complaints on installation and am working in word s vba ide word has multiple projects installed most of which are not mine and locked to me parsing often takes a seemingly endless amount of time often resulting in both word and the vba ide windows going white or only showing frame outlines with no contents or alternating between the two and not responding and sometimes crashing when parsing does complete a parse error is shown on the rd toolbar with a white x in a red circle to its right and when that is clicked the search results window loads with only the parse errors tab but that tab has no contents whether i get to a parse error or the whole thing just crashes may depend on which project is the current active project in the ide s project explorer the header and first part of the log results minimum log level set to error are as follows error rubberduck parsing vba componentparsetask exception thrown in thread while parsing module ndelcustomizations commandlineinterface parsetaskid system outofmemoryexception exception of type system outofmemoryexception was thrown there are several successive system outofmemoryexception s shown for different modules of the same project all projects compile without error however essentially rd is unusable in for word s vba ide have not had this problem with excel but have very few projects in excel with much less code | 1 |
132,471 | 5,186,720,867 | IssuesEvent | 2017-01-20 14:55:19 | botpress/botpress | https://api.github.com/repos/botpress/botpress | closed | Error on windows | bug priority/urgent | Once I bp init i get this error
```
name: (kk) (node:6784) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Error while obtaining machine id: Error: Command failed: REG QUERY HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography /v MachineGuid
ERROR: The system was unable to find the specified registry key or value.
``` | 1.0 | Error on windows - Once I bp init i get this error
```
name: (kk) (node:6784) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Error while obtaining machine id: Error: Command failed: REG QUERY HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography /v MachineGuid
ERROR: The system was unable to find the specified registry key or value.
``` | non_process | error on windows once i bp init i get this error name kk node unhandledpromiserejectionwarning unhandled promise rejection rejection id error error while obtaining machine id error command failed reg query hkey local machine software microsoft cryptography v machineguid error the system was unable to find the specified registry key or value | 0 |
12,733 | 15,100,872,675 | IssuesEvent | 2021-02-08 06:25:49 | kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines | closed | The released version of the SDK is really old | kind/bug kind/process lifecycle/stale priority/p0 | I've discovered that the SDK lacks the feature that has been committed on June 3.
I think we should make sure that the features are getting released in regular and timely manner.
If that's not possible, we should release the SDK on it's own schedule. Previously the release cadence was two weeks. | 1.0 | The released version of the SDK is really old - I've discovered that the SDK lacks the feature that has been committed on June 3.
I think we should make sure that the features are getting released in regular and timely manner.
If that's not possible, we should release the SDK on it's own schedule. Previously the release cadence was two weeks. | process | the released version of the sdk is really old i ve discovered that the sdk lacks the feature that has been committed on june i think we should make sure that the features are getting released in regular and timely manner if that s not possible we should release the sdk on it s own schedule previously the release cadence was two weeks | 1 |
12,821 | 15,196,288,853 | IssuesEvent | 2021-02-16 08:01:23 | DevExpress/testcafe-hammerhead | https://api.github.com/repos/DevExpress/testcafe-hammerhead | closed | openWindow is calling the URL twice in rapid succession when using TestCafe v1.10.1 | AREA: server FREQUENCY: level 2 REGRESSION STATE: Need response SYSTEM: iframe processing TYPE: bug | ### What is your Test Scenario?
Open a window using the 'openWindow' command. Expect that this URL is loaded only once. Our system will block the request if the user tries to load the specified URL more than once every 30 seconds.
### What is the Current behavior?
After upgrading to TestCafe 1.10.1 from 1.9.4 we noticed that our tests started failing because the 'openWindow' call is making duplicate calls to the URL we pass it. In our software this causes the UI to redirect to a "you must wait 30 seconds before visiting this page" error, which told us that the openWindow is making multiple requests.
### What is the Expected behavior?
We expect the 'openWindow' call to load the URL passed into it only a single time, just like it did in 1.9.4.
### What is your web application and your TestCafe test code?
Start Fiddler locally before running this test.
When you monitor fiddler, you will see that both "openWindow" urls are hit TWICE each.
**Please note:**
If you remove the first openWindow (to our own endpoint) you will see that the remaining openWindow URL is only hit ONCE. I'm not sure if we're doing some redirect that is causing TestCafe to reload it and any subsequent calls, but this didn't happen with 1.9.4 but does now happen with 1.10.1
### Sample Code:
**sample.e2e.ts:**
```TypeScript
let childWindow;
let anotherWindow;
fixture('Sample test to reproduce issues...')
.page('https://www.google.com')
.beforeEach(async (t) => {
childWindow = await t.openWindow(
'https://bac-dev.ehosts.net/netagent/client/unified/desktop/naclient.aspx?LOGINNAME=TestCafe%20Dev%20Customer&PASSWORD=&QUEUE=109&SUBJECT=I%20am%20an%20E2E%20test%20customer%2C%20please%20help%20me!&EMAIL=e2e%40address.com&ENABLE_SEND_TRANSCRIPT=0&SENDCHATTRANSCRIPT=0&REFERER=https%3A%2F%2Fbac-dev.ehosts.net%2Fnetagent%2Flaunch.html&ROUTEIDENT=e2e&PROACTIVEID=&SALESVALUE=&LEADTYPE=&ROUTETOAGENT=0&LangSelection=11&PUSHPAGELOCATION=0&CHATFRAMESIZE=275&UNIFIEDCLIENTTEMP=desktop&PORTALID=EE7B3E9D-D25C-44B6-9AA9-3C142583DA3F&QUESTIONNAIREID=BB152583-8A6D-4B00-BB40-992CB87E0F68&WEBCOLLABKEY=&COBROWSE_ENABLED=0&FONT%5fCHOICE=1&defaultStyleId=EE7B3E9D-D25C-44B6-9AA9-3C142583DA3F&submitBtn=Start+Chat'
);
anotherWindow = await t.openWindow(
'http://www.west.net/~jay/cheat.htm'
)
});
test('Test that will reproduce the issue.', async (t) => {
console.log('Did we reproduce the issue?');
});
```
**runner.ts:**
```TypeScript
import createTestCafe from 'testcafe';
let testcafe: TestCafe;
const browsers = ['chrome:headless'];
async function runTestForBrowser(browser: string) {
try {
// Start running E2E tests
testcafe = await createTestCafe('localhost');
const runner = testcafe.createRunner();
const testSource = ['./sample.e2e.ts'];
// Log the current bacUrl to assist with debugging failed tests
// console.log(`process.env.bacUrl: ${process.env.bacUrl}`);
const failedCount = await runner
.src(testSource)
.browsers(browser)
.reporter([
'spec', // Output to STDOUT (console)
{
name: 'xunit',
output: 'test/reports/bacUIe2e.xml',
},
{
name: 'html',
output: 'test/reports/bacUIe2e.html',
},
])
.screenshots({
path: 'test/reports/screenshots/',
takeOnFails: true,
pathPattern: '${DATE}_${TIME}/test-${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.png',
})
.run({
debugMode: false,
debugOnFail: true,
stopOnFirstFail: false,
skipJsErrors: true,
skipUncaughtErrors: true,
selectorTimeout: 30000,
assertionTimeout: 10000,
pageLoadTimeout: 1000,
speed: 1,
disablePageCaching: false,
})
.catch((error) => {
console.log(error);
throw new Error(error);
});
if (failedCount > 0) {
console.log('Tests failed: ' + failedCount);
if (testcafe) {
testcafe.close();
}
throw new Error(`TestCafe failed to run. Failed tests: ${failedCount}`);
}
return;
} catch (error) {
if (testcafe) {
testcafe.close();
}
throw new Error(`TestCafe failed to run. ${error}`);
}
}
const runAllBrowsers = async () => {
for (const browser of browsers) {
await runTestForBrowser(browser);
}
process.exit();
};
runAllBrowsers();
```
### Steps to Reproduce:
1. Launch a browser using the '.page' call
2. Launch a child browser using the 'openWindow' call
3. Notice that the 'openWindow' call made multiple requests to the url you passed in (you can see this using Fiddler to capture the requests while the test is running
### Your Environment details:
* testcafe version: 1.10.1
* node.js version: 10.15.0
* command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" -->
* browser name and version: Chrome: Version 87.0.4280.88 (Official Build) (x86_64), Firefox: 83.0
* platform and version: Windows 10, MacOS Catalina Version 10.15.7 | 1.0 | openWindow is calling the URL twice in rapid succession when using TestCafe v1.10.1 - ### What is your Test Scenario?
Open a window using the 'openWindow' command. Expect that this URL is loaded only once. Our system will block the request if the user tries to load the specified URL more than once every 30 seconds.
### What is the Current behavior?
After upgrading to TestCafe 1.10.1 from 1.9.4 we noticed that our tests started failing because the 'openWindow' call is making duplicate calls to the URL we pass it. In our software this causes the UI to redirect to a "you must wait 30 seconds before visiting this page" error, which told us that the openWindow is making multiple requests.
### What is the Expected behavior?
We expect the 'openWindow' call to load the URL passed into it only a single time, just like it did in 1.9.4.
### What is your web application and your TestCafe test code?
Start Fiddler locally before running this test.
When you monitor fiddler, you will see that both "openWindow" urls are hit TWICE each.
**Please note:**
If you remove the first openWindow (to our own endpoint) you will see that the remaining openWindow URL is only hit ONCE. I'm not sure if we're doing some redirect that is causing TestCafe to reload it and any subsequent calls, but this didn't happen with 1.9.4 but does now happen with 1.10.1
### Sample Code:
**sample.e2e.ts:**
```TypeScript
let childWindow;
let anotherWindow;
fixture('Sample test to reproduce issues...')
.page('https://www.google.com')
.beforeEach(async (t) => {
childWindow = await t.openWindow(
'https://bac-dev.ehosts.net/netagent/client/unified/desktop/naclient.aspx?LOGINNAME=TestCafe%20Dev%20Customer&PASSWORD=&QUEUE=109&SUBJECT=I%20am%20an%20E2E%20test%20customer%2C%20please%20help%20me!&EMAIL=e2e%40address.com&ENABLE_SEND_TRANSCRIPT=0&SENDCHATTRANSCRIPT=0&REFERER=https%3A%2F%2Fbac-dev.ehosts.net%2Fnetagent%2Flaunch.html&ROUTEIDENT=e2e&PROACTIVEID=&SALESVALUE=&LEADTYPE=&ROUTETOAGENT=0&LangSelection=11&PUSHPAGELOCATION=0&CHATFRAMESIZE=275&UNIFIEDCLIENTTEMP=desktop&PORTALID=EE7B3E9D-D25C-44B6-9AA9-3C142583DA3F&QUESTIONNAIREID=BB152583-8A6D-4B00-BB40-992CB87E0F68&WEBCOLLABKEY=&COBROWSE_ENABLED=0&FONT%5fCHOICE=1&defaultStyleId=EE7B3E9D-D25C-44B6-9AA9-3C142583DA3F&submitBtn=Start+Chat'
);
anotherWindow = await t.openWindow(
'http://www.west.net/~jay/cheat.htm'
)
});
test('Test that will reproduce the issue.', async (t) => {
console.log('Did we reproduce the issue?');
});
```
**runner.ts:**
```TypeScript
import createTestCafe from 'testcafe';
let testcafe: TestCafe;
const browsers = ['chrome:headless'];
async function runTestForBrowser(browser: string) {
try {
// Start running E2E tests
testcafe = await createTestCafe('localhost');
const runner = testcafe.createRunner();
const testSource = ['./sample.e2e.ts'];
// Log the current bacUrl to assist with debugging failed tests
// console.log(`process.env.bacUrl: ${process.env.bacUrl}`);
const failedCount = await runner
.src(testSource)
.browsers(browser)
.reporter([
'spec', // Output to STDOUT (console)
{
name: 'xunit',
output: 'test/reports/bacUIe2e.xml',
},
{
name: 'html',
output: 'test/reports/bacUIe2e.html',
},
])
.screenshots({
path: 'test/reports/screenshots/',
takeOnFails: true,
pathPattern: '${DATE}_${TIME}/test-${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.png',
})
.run({
debugMode: false,
debugOnFail: true,
stopOnFirstFail: false,
skipJsErrors: true,
skipUncaughtErrors: true,
selectorTimeout: 30000,
assertionTimeout: 10000,
pageLoadTimeout: 1000,
speed: 1,
disablePageCaching: false,
})
.catch((error) => {
console.log(error);
throw new Error(error);
});
if (failedCount > 0) {
console.log('Tests failed: ' + failedCount);
if (testcafe) {
testcafe.close();
}
throw new Error(`TestCafe failed to run. Failed tests: ${failedCount}`);
}
return;
} catch (error) {
if (testcafe) {
testcafe.close();
}
throw new Error(`TestCafe failed to run. ${error}`);
}
}
const runAllBrowsers = async () => {
for (const browser of browsers) {
await runTestForBrowser(browser);
}
process.exit();
};
runAllBrowsers();
```
### Steps to Reproduce:
1. Launch a browser using the '.page' call
2. Launch a child browser using the 'openWindow' call
3. Notice that the 'openWindow' call made multiple requests to the url you passed in (you can see this using Fiddler to capture the requests while the test is running
### Your Environment details:
* testcafe version: 1.10.1
* node.js version: 10.15.0
* command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" -->
* browser name and version: Chrome: Version 87.0.4280.88 (Official Build) (x86_64), Firefox: 83.0
* platform and version: Windows 10, MacOS Catalina Version 10.15.7 | process | openwindow is calling the url twice in rapid succession when using testcafe what is your test scenario open a window using the openwindow command expect that this url is loaded only once our system will block the request if the user tries to load the specified url more than once every seconds what is the current behavior after upgrading to testcafe from we noticed that our tests started failing because the openwindow call is making duplicate calls to the url we pass it in our software this causes the ui to redirect to a you must wait seconds before visiting this page error which told us that the openwindow is making multiple requests what is the expected behavior we expect the openwindow call to load the url passed into it only a single time just like it did in what isย your web application andย your testcafeย test code start fiddler locally before running this test when you monitor fiddler you will see that both openwindow urls are hit twice each please note if you remove the first openwindow to our own endpoint you will see that the remaining openwindow url is only hit once i m not sure if we re doing some redirect that is causing testcafe to reload it and any subsequent calls but this didn t happen with but does now happen with sample code sample ts typescript let childwindow let anotherwindow fixture sample test to reproduce issues page beforeeach async t childwindow await t openwindow anotherwindow await t openwindow test test that will reproduce the issue async t console log did we reproduce the issue runner ts typescript import createtestcafe from testcafe let testcafe testcafe const browsers async function runtestforbrowser browser string try start running tests testcafe await createtestcafe localhost const runner testcafe createrunner const testsource log the current bacurl to assist with debugging failed tests console log process env bacurl process env bacurl const failedcount await runner src testsource browsers browser reporter spec output to stdout console name xunit output test reports xml name html output test reports html screenshots path test reports screenshots takeonfails true pathpattern date time test test index useragent file index png run debugmode false debugonfail true stoponfirstfail false skipjserrors true skipuncaughterrors true selectortimeout assertiontimeout pageloadtimeout speed disablepagecaching false catch error console log error throw new error error if failedcount console log tests failed failedcount if testcafe testcafe close throw new error testcafe failed to run failed tests failedcount return catch error if testcafe testcafe close throw new error testcafe failed to run error const runallbrowsers async for const browser of browsers await runtestforbrowser browser process exit runallbrowsers steps to reproduce launch a browser using the page call launch a child browser using the openwindow call notice that the openwindow call made multiple requests to the url you passed in you can see this using fiddler to capture the requests while the test is running your environment details testcafe version node js version command line arguments browser name and version chrome version official build firefox platform and version windows macos catalina version | 1 |
177,065 | 28,315,121,317 | IssuesEvent | 2023-04-10 18:53:50 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | Inconsistent A11Y focus for non-dismissible bottom sheet | framework f: material design a: accessibility customer: money (g3) P3 | When `showModalBottomSheet` is called with `isDismissible` `false`, the default A11Y focus on bottom sheet is not always on the first element.
## Steps to Reproduce
With talkback
Talkback version: 12.2
Device: pixel6
OS: Android 12
The following code sample adds two ways to open the bottom sheet - from the floating action button, and from the app bar.
With the following code sample, open the bottom sheet by pressing the floating action button.
When isDismissible is false, the default A11Y focus is inconsistent, depending on where the bottom sheet was triggered from.
- Triggered from the app bar button - the default A11Y will focus on the button (bad - expected focus to be on the title)
- Triggered from the floating action button - the default A11Y will focus on the title (good)
When isDismissible is true, the default A11Y focus will be on the title (good).
<details>
<summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _showBottomSheet() {
showModalBottomSheet<SafeArea>(
context: context,
// !!! This is very important to reproduce the a11y focus issue. !!!
// Without it, the title will always get focused by default.
// With it, trigger from app bar with focus on button while trigger from
// floating action button will focus on title.
isDismissible: false,
builder: (_) => Semantics(
scopesRoute: true,
explicitChildNodes: true,
child: Column(
mainAxisSize: MainAxisSize.min,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
const Text('This is title'),
TextButton(
onPressed: Navigator.of(context).pop,
child: const Text('Ok'),
),
],
),
),
);
}
void _incrementCounter() {
_showBottomSheet();
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title, style: TextStyle(fontFamily: 'ProductSans')),
actions: [
TextButton(
onPressed: _showBottomSheet,
child: Text('ShowBottomSheet'),
)
],
),
body: Center(
child: Text(
'Button tapped $_counter time${_counter == 1 ? '' : 's'}.',
key: Key('CountText'),
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
),
);
}
}
```
</details>
See b/237621905 for more details. | 1.0 | Inconsistent A11Y focus for non-dismissible bottom sheet - When `showModalBottomSheet` is called with `isDismissible` `false`, the default A11Y focus on bottom sheet is not always on the first element.
## Steps to Reproduce
With talkback
Talkback version: 12.2
Device: pixel6
OS: Android 12
The following code sample adds two ways to open the bottom sheet - from the floating action button, and from the app bar.
With the following code sample, open the bottom sheet by pressing the floating action button.
When isDismissible is false, the default A11Y focus is inconsistent, depending on where the bottom sheet was triggered from.
- Triggered from the app bar button - the default A11Y will focus on the button (bad - expected focus to be on the title)
- Triggered from the floating action button - the default A11Y will focus on the title (good)
When isDismissible is true, the default A11Y focus will be on the title (good).
<details>
<summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _showBottomSheet() {
showModalBottomSheet<SafeArea>(
context: context,
// !!! This is very important to reproduce the a11y focus issue. !!!
// Without it, the title will always get focused by default.
// With it, trigger from app bar with focus on button while trigger from
// floating action button will focus on title.
isDismissible: false,
builder: (_) => Semantics(
scopesRoute: true,
explicitChildNodes: true,
child: Column(
mainAxisSize: MainAxisSize.min,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
const Text('This is title'),
TextButton(
onPressed: Navigator.of(context).pop,
child: const Text('Ok'),
),
],
),
),
);
}
void _incrementCounter() {
_showBottomSheet();
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title, style: TextStyle(fontFamily: 'ProductSans')),
actions: [
TextButton(
onPressed: _showBottomSheet,
child: Text('ShowBottomSheet'),
)
],
),
body: Center(
child: Text(
'Button tapped $_counter time${_counter == 1 ? '' : 's'}.',
key: Key('CountText'),
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
),
);
}
}
```
</details>
See b/237621905 for more details. | non_process | inconsistent focus for non dismissible bottom sheet when showmodalbottomsheet is called with isdismissible false the default focus on bottom sheet is not always on the first element steps to reproduce with talkback talkback version device os android the following code sample adds two ways to open the bottom sheet from the floating action button and from the app bar with the following code sample open the bottom sheet by pressing the floating action button when isdismissible is false the default focus is inconsistent depending on where the bottom sheet was triggered from triggered from the app bar button the default will focus on the button bad expected focus to be on the title triggered from the floating action button the default will focus on the title good when isdismissible is true the default focus will be on the title good code sample dart import package flutter material dart void main runapp myapp class myapp extends statelesswidget override widget build buildcontext context return materialapp title flutter demo home myhomepage title flutter demo home page class myhomepage extends statefulwidget myhomepage key key required this title super key key final string title override myhomepagestate createstate myhomepagestate class myhomepagestate extends state int counter void showbottomsheet showmodalbottomsheet context context this is very important to reproduce the focus issue without it the title will always get focused by default with it trigger from app bar with focus on button while trigger from floating action button will focus on title isdismissible false builder semantics scopesroute true explicitchildnodes true child column mainaxissize mainaxissize min crossaxisalignment crossaxisalignment start children const text this is title textbutton onpressed navigator of context pop child const text ok void incrementcounter showbottomsheet setstate counter override widget build buildcontext context return scaffold appbar appbar title text widget title style textstyle fontfamily productsans actions textbutton onpressed showbottomsheet child text showbottomsheet body center child text button tapped counter time counter s key key counttext floatingactionbutton floatingactionbutton onpressed incrementcounter tooltip increment see b for more details | 0 |
20,839 | 27,610,389,710 | IssuesEvent | 2023-03-09 15:37:06 | Sebastian009w/hyper-burguer | https://api.github.com/repos/Sebastian009w/hyper-burguer | opened | Defined Routes | process | - [ ] Home
- [ ] Login
- [ ] Register
- [ ] Tables
- [ ] Categories
- [ ] Breakfast
- [ ] Lunch
- [ ] Orders | 1.0 | Defined Routes - - [ ] Home
- [ ] Login
- [ ] Register
- [ ] Tables
- [ ] Categories
- [ ] Breakfast
- [ ] Lunch
- [ ] Orders | process | defined routes home login register tables categories breakfast lunch orders | 1 |
10,035 | 13,044,161,501 | IssuesEvent | 2020-07-29 03:47:23 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `AddDateDurationInt` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `AddDateDurationInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| 2.0 | UCP: Migrate scalar function `AddDateDurationInt` from TiDB -
## Description
Port the scalar function `AddDateDurationInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| process | ucp migrate scalar function adddatedurationint from tidb description port the scalar function adddatedurationint from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
548,550 | 16,066,531,080 | IssuesEvent | 2021-04-23 20:05:13 | VeraPrinsen/isomorphisms | https://api.github.com/repos/VeraPrinsen/isomorphisms | closed | csvwriter simplify test | Low Priority | Ik maak er een issue voor aan, dan kunnen we dat afwegen.
ik heb alleen de file omgezet naar functie in deze PR. Aan de inhoud niks aangepast.
_Originally posted by @mvandermade in https://github.com/VeraPrinsen/isomorphisms/pull/61_ | 1.0 | csvwriter simplify test - Ik maak er een issue voor aan, dan kunnen we dat afwegen.
ik heb alleen de file omgezet naar functie in deze PR. Aan de inhoud niks aangepast.
_Originally posted by @mvandermade in https://github.com/VeraPrinsen/isomorphisms/pull/61_ | non_process | csvwriter simplify test ik maak er een issue voor aan dan kunnen we dat afwegen ik heb alleen de file omgezet naar functie in deze pr aan de inhoud niks aangepast originally posted by mvandermade in | 0 |
816,441 | 30,599,423,344 | IssuesEvent | 2023-07-22 07:00:38 | Weiver-project/Weiver | https://api.github.com/repos/Weiver-project/Weiver | closed | BE_[Feat]: ์ปค๋ฎค๋ํฐ ๊ฒ์๊ธ ์์ฑ(insert) | โจfeat ๐ Priority: High | ## ๐To do List
- [x] ๊ฒ์ํ์ ์์ฑ๋ ์ ๋ณด DB์ ์ ์ฅ(์ก๋ด)
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๊ฒ์๊ธID, ์ ๋ชฉ, ์์ฑ์ ๊ณ ์ ์์ด๋, ์์ฑ์ ๋๋ค์, ๊ฒ์๊ธ ๋ด์ฉ, ์ฒจ๋ถ ์ด๋ฏธ์ง ์์ฑ์ผ, ์ข์์ ์, ์กฐํ์] DB(Board)์ ์ ์ฅ
- [x] ๊ฒ์ํ์ ์์ฑ๋ ์ ๋ณด DB์ ์ ์ฅ(๋ฆฌ๋ทฐ)
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๊ฒ์๊ธID, ์ ๋ชฉ, ์์ฑ์ ๊ณ ์ ์์ด๋, ์์ฑ์ ๋๋ค์, ๊ณต์ฐ ์์ด๋, ๊ฒ์๊ธ ๋ด์ฉ, ์ฒจ๋ถ ์ด๋ฏธ์ง ์์ฑ์ผ, ์ข์์ ์, ์กฐํ์] DB(Board)์ ์ ์ฅ
- [x] ์์ฑ๋ ๋๊ธ ์ ๋ณด DB์ ์ ์ฅ
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๋๊ธID, ์์ฑ์, ๋๊ธ ๋ด์ฉ, ์์ฑ์ผ] DB(comment)์ ์ ์ฅ
- [x] ์์ฑ๋ ๋๋๊ธ ์ ๋ณด DB์ ์ ์ฅ
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๋๋๊ธID, ์์ฑ์, ๋๊ธ ๋ด์ฉ, ์์ฑ์ผ] DB(recomment)์ ์ ์ฅ
- [x] ๊ฒ์ํ ๋ฐ ๋๊ธ ์์ฑ ์ ์ ์ ๋ฌธ์์ ๊ฒ์๊ธ ์, ๋๊ธ ์ 1 ์ฆ๊ฐํ๋ ํธ๋ฆฌ๊ฑฐ ์์ฑ | 1.0 | BE_[Feat]: ์ปค๋ฎค๋ํฐ ๊ฒ์๊ธ ์์ฑ(insert) - ## ๐To do List
- [x] ๊ฒ์ํ์ ์์ฑ๋ ์ ๋ณด DB์ ์ ์ฅ(์ก๋ด)
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๊ฒ์๊ธID, ์ ๋ชฉ, ์์ฑ์ ๊ณ ์ ์์ด๋, ์์ฑ์ ๋๋ค์, ๊ฒ์๊ธ ๋ด์ฉ, ์ฒจ๋ถ ์ด๋ฏธ์ง ์์ฑ์ผ, ์ข์์ ์, ์กฐํ์] DB(Board)์ ์ ์ฅ
- [x] ๊ฒ์ํ์ ์์ฑ๋ ์ ๋ณด DB์ ์ ์ฅ(๋ฆฌ๋ทฐ)
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๊ฒ์๊ธID, ์ ๋ชฉ, ์์ฑ์ ๊ณ ์ ์์ด๋, ์์ฑ์ ๋๋ค์, ๊ณต์ฐ ์์ด๋, ๊ฒ์๊ธ ๋ด์ฉ, ์ฒจ๋ถ ์ด๋ฏธ์ง ์์ฑ์ผ, ์ข์์ ์, ์กฐํ์] DB(Board)์ ์ ์ฅ
- [x] ์์ฑ๋ ๋๊ธ ์ ๋ณด DB์ ์ ์ฅ
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๋๊ธID, ์์ฑ์, ๋๊ธ ๋ด์ฉ, ์์ฑ์ผ] DB(comment)์ ์ ์ฅ
- [x] ์์ฑ๋ ๋๋๊ธ ์ ๋ณด DB์ ์ ์ฅ
- ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ [๋๋๊ธID, ์์ฑ์, ๋๊ธ ๋ด์ฉ, ์์ฑ์ผ] DB(recomment)์ ์ ์ฅ
- [x] ๊ฒ์ํ ๋ฐ ๋๊ธ ์์ฑ ์ ์ ์ ๋ฌธ์์ ๊ฒ์๊ธ ์, ๋๊ธ ์ 1 ์ฆ๊ฐํ๋ ํธ๋ฆฌ๊ฑฐ ์์ฑ | non_process | be ์ปค๋ฎค๋ํฐ ๊ฒ์๊ธ ์์ฑ insert ๐to do list ๊ฒ์ํ์ ์์ฑ๋ ์ ๋ณด db์ ์ ์ฅ ์ก๋ด ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ db board ์ ์ ์ฅ ๊ฒ์ํ์ ์์ฑ๋ ์ ๋ณด db์ ์ ์ฅ ๋ฆฌ๋ทฐ ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ db board ์ ์ ์ฅ ์์ฑ๋ ๋๊ธ ์ ๋ณด db์ ์ ์ฅ ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ db comment ์ ์ ์ฅ ์์ฑ๋ ๋๋๊ธ ์ ๋ณด db์ ์ ์ฅ ์ ์ ๊ฐ ์์ฑํ ์ ๋ณด insert ์ db recomment ์ ์ ์ฅ ๊ฒ์ํ ๋ฐ ๋๊ธ ์์ฑ ์ ์ ์ ๋ฌธ์์ ๊ฒ์๊ธ ์ ๋๊ธ ์ ์ฆ๊ฐํ๋ ํธ๋ฆฌ๊ฑฐ ์์ฑ | 0 |
37,252 | 18,243,072,672 | IssuesEvent | 2021-10-01 15:00:16 | astropy/specutils | https://api.github.com/repos/astropy/specutils | closed | Spec1D read is slow compared to astropy.io fits.open | bug performance | I tried two methods to read in a JWST NIRSpec data cube. While astropy.io fits.open followed by Spectrum1D() takes 0.06 seconds, Spectrum1D.read takes an unbearably long 19.6 seconds. Why so slow? | True | Spec1D read is slow compared to astropy.io fits.open - I tried two methods to read in a JWST NIRSpec data cube. While astropy.io fits.open followed by Spectrum1D() takes 0.06 seconds, Spectrum1D.read takes an unbearably long 19.6 seconds. Why so slow? | non_process | read is slow compared to astropy io fits open i tried two methods to read in a jwst nirspec data cube while astropy io fits open followed by takes seconds read takes an unbearably long seconds why so slow | 0 |
10,277 | 13,130,687,840 | IssuesEvent | 2020-08-06 15:44:32 | cncf/cnf-conformance | https://api.github.com/repos/cncf/cnf-conformance | closed | [Process] Adopt CNCF's Code of conduct | enhancement process sprint12 | [Process] Adopt and add CNCF's Code of conduct to GitHub repo
---
- [ ] Review https://github.com/cncf/foundation/blob/master/code-of-conduct.md
- [ ] Vote/agree on adoption of CoC
- [ ] Add CoC to cnf-conformance GitHub repo | 1.0 | [Process] Adopt CNCF's Code of conduct - [Process] Adopt and add CNCF's Code of conduct to GitHub repo
---
- [ ] Review https://github.com/cncf/foundation/blob/master/code-of-conduct.md
- [ ] Vote/agree on adoption of CoC
- [ ] Add CoC to cnf-conformance GitHub repo | process | adopt cncf s code of conduct adopt and add cncf s code of conduct to github repo review vote agree on adoption of coc add coc to cnf conformance github repo | 1 |