Column schema (name, dtype, observed range):

- `Unnamed: 0` (int64, 0 to 832k)
- `id` (float64, 2.49B to 32.1B)
- `type` (string, 1 distinct value: "IssuesEvent")
- `created_at` (string, fixed length 19)
- `repo` (string, lengths 7 to 73)
- `repo_url` (string, lengths 36 to 102)
- `action` (string, 3 distinct values)
- `title` (string, lengths 1 to 535)
- `labels` (string, lengths 4 to 356)
- `body` (string, lengths 4 to 178k)
- `index` (string, 7 distinct values)
- `text_combine` (string, lengths 96 to 178k)
- `label` (string, 2 distinct values)
- `text` (string, lengths 96 to 174k)
- `binary_label` (int64, 0 or 1)

Sample rows, one record per block:
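As a sketch, the columns above can be modeled as a plain record type. All class, function, and variable names below are hypothetical, and two properties are inferred only from the sample rows rather than stated anywhere: that `text_combine` is `"<title> - <body>"`, and that `binary_label == 1` corresponds to `label == "automation"`.

```python
from dataclasses import dataclass

@dataclass
class IssueRecord:
    # Field names mirror the dataset columns; types follow the dtype summary.
    id: float            # float64 event id
    type: str            # always "IssuesEvent" (1 distinct value)
    created_at: str      # fixed-width "YYYY-MM-DD HH:MM:SS" (length 19)
    repo: str
    repo_url: str
    action: str          # one of 3 classes, e.g. "opened" / "closed"
    title: str
    labels: str          # space-separated label names, stored as one string
    body: str
    label: str           # "automation" or "non_automation" (2 classes)
    binary_label: int    # 0 or 1

def text_combine(rec: IssueRecord) -> str:
    # Assumption inferred from the sample rows: "<title> - <body>".
    return f"{rec.title} - {rec.body}"

# One record transcribed (and truncated) from the first sample row.
rec = IssueRecord(
    id=12_264_080_325.0,
    type="IssuesEvent",
    created_at="2020-05-07 03:04:50",
    repo="bandprotocol/bandchain",
    repo_url="https://api.github.com/repos/bandprotocol/bandchain",
    action="closed",
    title="Stress test Wenchang operations",
    labels="automation chain",
    body="Using BigDipper as the base explorer, ...",
    label="automation",
    binary_label=1,
)

# Sanity checks matching the observed data.
assert len(rec.created_at) == 19
assert (rec.label == "automation") == (rec.binary_label == 1)
```

The string `label` and the integer `binary_label` are redundant encodings of the same target, which is why the final assertion holds on every sample row shown here.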
Unnamed: 0: 2,550
id: 12,264,080,325
type: IssuesEvent
created_at: 2020-05-07 03:04:50
repo: bandprotocol/bandchain
repo_url: https://api.github.com/repos/bandprotocol/bandchain
action: closed
title: Stress test Wenchang operations
labels: automation chain
body:
Using BigDipper as the base explorer, test interacting with Wenchang testnet and report.
- [x] Send money
- [x] Spam sending money from 100 accounts to the network concurrently
- [x] Delegate
- [x] Withdraw delegation
- [x] Double sign and get jailed
- [x] Apply for validator then keep it down forever (and verify that it will get slashed and jailed in 1 day)
- [x] All proposal related messages
index: 1.0
text_combine:
Stress test Wenchang operations - Using BigDipper as the base explorer, test interacting with Wenchang testnet and report.
- [x] Send money
- [x] Spam sending money from 100 accounts to the network concurrently
- [x] Delegate
- [x] Withdraw delegation
- [x] Double sign and get jailed
- [x] Apply for validator then keep it down forever (and verify that it will get slashed and jailed in 1 day)
- [x] All proposal related messages
label: automation
text:
stress test wenchang operations using bigdipper as the base explorer test interacting with wenchang testnet and report send money spam sending money from accounts to the network concurrently delegate withdraw delegation double sign and get jailed apply for validator then keep it down forever and verify that it will get slashed and jail in day f all proposal related messages
binary_label: 1

---
Unnamed: 0: 86,261
id: 3,704,395,261
type: IssuesEvent
created_at: 2016-02-29 23:59:22
repo: SpeedCurve-Metrics/SpeedCurve
repo_url: https://api.github.com/repos/SpeedCurve-Metrics/SpeedCurve
action: closed
title: [Benchmark] Filmstrip not refreshed when switching between templates
labels: priority medium status accepted type bug
body:
In the "Benchmark" section, when switching between templates, the filmstrip view is greyed and not refreshed. Reloading the page refreshes the filmstrip but is annoying :).
<img width="1271" alt="screen shot 2015-12-04 at 10 46 28 am" src="https://cloud.githubusercontent.com/assets/2169585/11586580/655f581a-9a74-11e5-8d87-7ab1a7e3bfad.png">
index: 1.0
text_combine:
[Benchmark] Filmstrip not refreshed when switching between templates - In the "Benchmark" section, when switching between templates, the filmstrip view is greyed and not refreshed. Reloading the page refreshes the filmstrip but is annoying :).
<img width="1271" alt="screen shot 2015-12-04 at 10 46 28 am" src="https://cloud.githubusercontent.com/assets/2169585/11586580/655f581a-9a74-11e5-8d87-7ab1a7e3bfad.png">
label: non_automation
text:
filmstrip not refreshed when switching between templates in the benchmark section when switching between templates the filmstrip view is greyed and not refreshed reloading the page refreshes the filmstrip but is annoying img width alt screen shot at am src
binary_label: 0

---
Unnamed: 0: 113,274
id: 17,117,946,552
type: IssuesEvent
created_at: 2021-07-11 18:51:59
repo: turkdevops/design-language-website
repo_url: https://api.github.com/repos/turkdevops/design-language-website
action: opened
title: CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz
labels: security vulnerability
body:
## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: design-language-website/package.json</p>
<p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p>
<p>
Dependency Hierarchy:
- gatsby-2.32.12.tgz (Root Library)
- webpack-dev-server-3.11.2.tgz
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-5.1.1.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: design-language-website/package.json</p>
<p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p>
<p>
Dependency Hierarchy:
- eslint-7.10.0.tgz (Root Library)
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/design-language-website/commit/187b6c70cc572cc46890f19fe80fcaddc53857c4">187b6c70cc572cc46890f19fe80fcaddc53857c4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz - ## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: design-language-website/package.json</p>
<p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p>
<p>
Dependency Hierarchy:
- gatsby-2.32.12.tgz (Root Library)
- webpack-dev-server-3.11.2.tgz
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-5.1.1.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: design-language-website/package.json</p>
<p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p>
<p>
Dependency Hierarchy:
- eslint-7.10.0.tgz (Root Library)
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/design-language-website/commit/187b6c70cc572cc46890f19fe80fcaddc53857c4">187b6c70cc572cc46890f19fe80fcaddc53857c4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_automation
text:
cve high detected in glob parent tgz glob parent tgz cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file design language website package json path to vulnerable library design language website node modules glob parent dependency hierarchy gatsby tgz root library webpack dev server tgz chokidar tgz x glob parent tgz vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file design language website package json path to vulnerable library design language website node modules glob parent dependency hierarchy eslint tgz root library x glob parent tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with whitesource
binary_label: 0

---
Unnamed: 0: 2,847
id: 12,702,212,671
type: IssuesEvent
created_at: 2020-06-22 19:43:53
repo: submariner-io/submariner
repo_url: https://api.github.com/repos/submariner-io/submariner
action: closed
title: "report-dir" argument can be removed (Ginkgo has --reportFile option)
labels: automation enhancement
body:
"report-dir" argument for specifying junit tests output directory - can be removed, including any references that uses it (also in Docs):
https://github.com/submariner-io/submariner/blob/06332b91b193c0ab362e7f0a96cd715b8556acd5/test/e2e/framework/test_context.go#L40
Ginkgo has this feature already, for example
--ginkgo.reportFile ${WORKDIR}/e2e_junit_result.xml
index: 1.0
text_combine:
"report-dir" argument can be removed (Ginkgo has --reportFile option) - "report-dir" argument for specifying junit tests output directory - can be removed, including any references that uses it (also in Docs):
https://github.com/submariner-io/submariner/blob/06332b91b193c0ab362e7f0a96cd715b8556acd5/test/e2e/framework/test_context.go#L40
Ginkgo has this feature already, for example
--ginkgo.reportFile ${WORKDIR}/e2e_junit_result.xml
label: automation
text:
report dir argument can be removed ginkgo has reportfile option report dir argument for specifying junit tests output directory can be removed including any references that uses it also in docs ginkgo has this feature already for example ginkgo reportfile workdir junit result xml
binary_label: 1

---
Unnamed: 0: 1,571
id: 10,344,432,220
type: IssuesEvent
created_at: 2019-09-04 11:12:47
repo: elastic/apm-server
repo_url: https://api.github.com/repos/elastic/apm-server
action: closed
title: [Automation][apm-ci] Reorder parallel execution of stages
labels: automation ci enhancement
body:
Let's run more parallel stages to get a quick cycle and reduce the waste of time when running in sequential stages.
Besides, let's ensure the windows stage doesn't populate its failures to the pipeline but the stage. A pipeline is a set of stages.

index: 1.0
text_combine:
[Automation][apm-ci] Reorder parallel execution of stages - Let's run more parallel stages to get a quick cycle and reduce the waste of time when running in sequential stages.
Besides, let's ensure the windows stage doesn't populate its failures to the pipeline but the stage. A pipeline is a set of stages.

label: automation
text:
reorder parallel execution of stages let s run more parallel stages to get a quick cycle and reduce the waste of time when running in sequential stages besides let s ensure the windows stage doesn t populate its failures to the pipeline but the stage a pipeline is a set of stages
binary_label: 1

---
Unnamed: 0: 8,268
id: 26,586,423,169
type: IssuesEvent
created_at: 2023-01-23 02:03:33
repo: Project-Herophilus/idaas-connect-automation
repo_url: https://api.github.com/repos/Project-Herophilus/idaas-connect-automation
action: closed
title: Deploying More iDaaS Connect Sub Modules
labels: automation cloud native
body:
For the next round of deployments lets deploy the following iDaaS-Connect submodules:
- FHIR
- EDI
- Third-Party
- Cloud
- CMS Interoperability
index: 1.0
text_combine:
Deploying More iDaaS Connect Sub Modules - For the next round of deployments lets deploy the following iDaaS-Connect submodules:
- FHIR
- EDI
- Third-Party
- Cloud
- CMS Interoperability
label: automation
text:
deploying more idaas connect sub modules for the next round of deployments lets deploy the following idaas connect submodules fhir edi third party cloud cms interoperability
binary_label: 1

---
Unnamed: 0: 9,875
id: 7,021,923,945
type: IssuesEvent
created_at: 2017-12-22 08:01:58
repo: Elgg/Elgg
repo_url: https://api.github.com/repos/Elgg/Elgg
action: closed
title: Saving metadata and all changes automatically in destructor as default policy. (Trac #4597)
labels: engine feature performance
body:
_Original ticket http://trac.elgg.org/ticket/4597 on 42465681-08-10 by trac user srokap, assigned to unknown._
Elgg version: 1.8
Previously discussed here: https://docs.google.com/document/d/1NrxIj4YOTjNbeXDGW3tpz2lNvaRwL2NDPBNd7TgRfFk/edit?disco=AAAAAEr6svk
This is actually a bit of logic change. Motivation is to reduce writing calls as much as possible to make life easier for deployments with single master and multiple read replicas. This change allows us to hopefully make single call to DB. We also may change metadata multiple times (increment?) without additional cost - we save final version. It's tricky because we may sometimes want to make immediate write, but i think this could be made by some explicit call (ElggEntity->save(params)?) instead of default policy. I also remember Cash speaking something about making writes to DB as late as possible, it would follow the same path.
We might consider saving metadata and all changes automatically in destructor. We tried such concept successfully. Note that also some related bugs were fixed in PHP: https://bugs.php.net/bug.php?id=30210
index: True
text_combine:
Saving metadata and all changes automatically in destructor as default policy. (Trac #4597) - _Original ticket http://trac.elgg.org/ticket/4597 on 42465681-08-10 by trac user srokap, assigned to unknown._
Elgg version: 1.8
Previously discussed here: https://docs.google.com/document/d/1NrxIj4YOTjNbeXDGW3tpz2lNvaRwL2NDPBNd7TgRfFk/edit?disco=AAAAAEr6svk
This is actually a bit of logic change. Motivation is to reduce writing calls as much as possible to make life easier for deployments with single master and multiple read replicas. This change allows us to hopefully make single call to DB. We also may change metadata multiple times (increment?) without additional cost - we save final version. It's tricky because we may sometimes want to make immediate write, but i think this could be made by some explicit call (ElggEntity->save(params)?) instead of default policy. I also remember Cash speaking something about making writes to DB as late as possible, it would follow the same path.
We might consider saving metadata and all changes automatically in destructor. We tried such concept successfully. Note that also some related bugs were fixed in PHP: https://bugs.php.net/bug.php?id=30210
label: non_automation
text:
saving metadata and all changes automatically in destructor as default policy trac original ticket on by trac user srokap assigned to unknown elgg version previously discussed here this is actually a bit of logic change motivation is to reduce writing calls as much as possible to make life easier for deployments with single master and multiple read replicas this change allows us to hopefully make single call to db we also may change metadata multiple times increment without additional cost we save final version it s tricky because we may sometimes want to make immediate write but i think this could be made by some explicit call elggentity save params instead of default policy i also remember cash speaking something about making writes to db as late as possible it would follow the same path we might consider saving metadata and all changes automatically in destructor we tried such concept successfully note that also some related bugs were fixed in php
binary_label: 0

---
Unnamed: 0: 90,936
id: 10,703,811,015
type: IssuesEvent
created_at: 2019-10-24 10:19:11
repo: theodo/falco
repo_url: https://api.github.com/repos/theodo/falco
action: closed
title: Docs repository should be migrated to this repo
labels: documentation
body:
In order to make Docs edits / PR more easy, and to keep a single repo to keep track of doc issues/PRs, the Docs repo (currently at https://github.com/theodo/getfal.co) should be migrated under a `docs/` folder in this very repo.
index: 1.0
text_combine:
Docs repository should be migrated to this repo - In order to make Docs edits / PR more easy, and to keep a single repo to keep track of doc issues/PRs, the Docs repo (currently at https://github.com/theodo/getfal.co) should be migrated under a `docs/` folder in this very repo.
label: non_automation
text:
docs repository should be migrated to this repo in order to make docs edits pr more easy and to keep a single repo to keep track of doc issues prs the docs repo currently at should be migrated under a docs folder in this very repo
binary_label: 0

---
Unnamed: 0: 8,829
id: 27,172,304,905
type: IssuesEvent
created_at: 2023-02-17 20:39:22
repo: OneDrive/onedrive-api-docs
repo_url: https://api.github.com/repos/OneDrive/onedrive-api-docs
action: closed
title: Concurrent createUploadSession requests failing
labels: type:bug status:backlogged area:Throttling automation:Closed
body:
#749
## Category
- [ ] Question
- [ ] Documentation issue
- [X] Bug
#### Expected or Desired Behavior
I have been using `createUploadSessions` in SPO for months now and it has worked perfectly for uploading a large chunk of files. What I normally do is that I spin up 40 concurrent requests, start uploading the file chunks, and start new sessions once any of the previous ones is finished. This has worked fine until now, the sessions were created, chunks were created and finally the files were created in OneDrive.
#### Observed Behavior
What I'm seeing now that I receive an `invalidRequest` response after creating a bunch of requests and it seems like only a handful of files will get uploaded completely. I can make it work by only creating one session, finishing it, and creating another session. However, this is considerably slower that what it used to be when I was able to upload multiple files concurrently.
```
method: 'POST',
path: '/sites/root/drive/root:%2F<snip>.pdf:/createUploadSession',
responseBody: '{"error":{"code":"invalidRequest","message":"Invalid request","innerError":{"date":"2020-09-06T14:53:11","request-id":"dff3cfa9-54e8-4eb5-b108-bf0dec0b04e6"}}}'
```
If this is a new rate limit being applied, I believe the error code should be changed to something more meaningful or understandable.
#### Steps to Reproduce
Create a bunch of upload sessions concurrently and start uploading chunks. The specific code I'm using is located here:
https://github.com/turist-cloud/ship/tree/master/packages/ship-board
- `src/upload-files.ts`
- `src/fetch-graph-api.ts`
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues
index: 1.0
text_combine:
Concurrent createUploadSession requests failing - #749
## Category
- [ ] Question
- [ ] Documentation issue
- [X] Bug
#### Expected or Desired Behavior
I have been using `createUploadSessions` in SPO for months now and it has worked perfectly for uploading a large chunk of files. What I normally do is that I spin up 40 concurrent requests, start uploading the file chunks, and start new sessions once any of the previous ones is finished. This has worked fine until now, the sessions were created, chunks were created and finally the files were created in OneDrive.
#### Observed Behavior
What I'm seeing now that I receive an `invalidRequest` response after creating a bunch of requests and it seems like only a handful of files will get uploaded completely. I can make it work by only creating one session, finishing it, and creating another session. However, this is considerably slower that what it used to be when I was able to upload multiple files concurrently.
```
method: 'POST',
path: '/sites/root/drive/root:%2F<snip>.pdf:/createUploadSession',
responseBody: '{"error":{"code":"invalidRequest","message":"Invalid request","innerError":{"date":"2020-09-06T14:53:11","request-id":"dff3cfa9-54e8-4eb5-b108-bf0dec0b04e6"}}}'
```
If this is a new rate limit being applied, I believe the error code should be changed to something more meaningful or understandable.
#### Steps to Reproduce
Create a bunch of upload sessions concurrently and start uploading chunks. The specific code I'm using is located here:
https://github.com/turist-cloud/ship/tree/master/packages/ship-board
- `src/upload-files.ts`
- `src/fetch-graph-api.ts`
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues
label: automation
text:
concurrent createuploadsession requests failing category question documentation issue bug expected or desired behavior i have been using createuploadsessions in spo for months now and it has worked perfectly for uploading a large chunk of files what i normally do is that i spin up concurrent requests start uploading the file chunks and start new sessions once any of the previous ones is finished this has worked fine until now the sessions were created chunks were created and finally the files were created in onedrive observed behavior what i m seeing now that i receive an invalidrequest response after creating a bunch of requests and it seems like only a handful of files will get uploaded completely i can make it work by only creating one session finishing it and creating another session however this is considerably slower that what it used to be when i was able to upload multiple files concurrently method post path sites root drive root pdf createuploadsession responsebody error code invalidrequest message invalid request innererror date request id if this is a new rate limit being applied i believe the error code should be changed to something more meaningful or understandable steps to reproduce create a bunch of upload sessions concurrently and start uploading chunks the specific code i m using is located here src upload files ts src fetch graph api ts
binary_label: 1

---
Unnamed: 0: 334,207
id: 24,408,612,613
type: IssuesEvent
created_at: 2022-10-05 10:13:14
repo: insightsengineering/tern.mmrm
repo_url: https://api.github.com/repos/insightsengineering/tern.mmrm
action: closed
title: Clean up README
labels: documentation good first issue SP1 high priority
body:
To do:
- [x] Update according to major refactoring
- [x] Explain clearly how the `mmrm` and `tern.mmrm` packages relate to each other
- [x] also give guidance when to use which
index: 1.0
text_combine:
Clean up README - To do:
- [x] Update according to major refactoring
- [x] Explain clearly how the `mmrm` and `tern.mmrm` packages relate to each other
- [x] also give guidance when to use which
label: non_automation
text:
clean up readme to do update according to major refactoring explain clearly how the mmrm and tern mmrm packages relate to each other also give guidance when to use which
binary_label: 0

---
Unnamed: 0: 169,516
id: 13,150,174,823
type: IssuesEvent
created_at: 2020-08-09 09:57:49
repo: Rocologo/MobHunting
repo_url: https://api.github.com/repos/Rocologo/MobHunting
action: closed
title: Error on /mh acheivements
labels: Fixed - To be tested
body:
When I do /mh acheivements in game I get this console error: https://paste.gg/p/Momshroom/24caaca7b64143b89b35d9148c211b05
MobHunting version 7.5.0
Paper 370 (1.15.2)
index: 1.0
text_combine:
Error on /mh acheivements - When I do /mh acheivements in game I get this console error: https://paste.gg/p/Momshroom/24caaca7b64143b89b35d9148c211b05
MobHunting version 7.5.0
Paper 370 (1.15.2)
label: non_automation
text:
error on mh acheivements when i do mh acheivements in game i get this console error mobhunting version paper
binary_label: 0

---
Unnamed: 0: 1,318
id: 9,905,481,353
type: IssuesEvent
created_at: 2019-06-27 11:42:44
repo: elastic/apm-server
repo_url: https://api.github.com/repos/elastic/apm-server
action: closed
title: Deal with failing ci test for saved objects in Kibana
labels: [zube]: In Review automation
body:
A ci check called _Check Kibana Object updated_ is run on every PR and on push to master. This test runs a command in APM Server to create the Kibana index pattern based on the ES template, and then checks if the created index pattern is in sync with the one bundled in Kibana for APM.
It test fails on following occasions:
- updating libbeat changes inherited fields for the apm-server leading to changes in the Kibana index pattern
- changing fields directly in apm server leading to changes in the Kibana index pattern
- changes in Kibana touching the stored objects (e.g. moving the files around).
Since field changes requires a PR in APM Server and Kibana to be merged at the same time, a lot of PRs fail related to this test, although not directly related.
We should discuss how to improve this situation on a CI level. A few options:
(1) In case only this stage fails we could mark the build as instable.
(2) Run the test as a separate check outside of the `pr-merge`.
(3) Trigger the test only if something in `_meta/fields.common.yml` or in `_beats/libbeat/_meta` changed and on pushes to release branches and master.
I suggest to apply option (3), and maybe also option (2) to give a better overview on the PR what is failing.
index: 1.0
text_combine:
Deal with failing ci test for saved objects in Kibana - A ci check called _Check Kibana Object updated_ is run on every PR and on push to master. This test runs a command in APM Server to create the Kibana index pattern based on the ES template, and then checks if the created index pattern is in sync with the one bundled in Kibana for APM.
It test fails on following occasions:
- updating libbeat changes inherited fields for the apm-server leading to changes in the Kibana index pattern
- changing fields directly in apm server leading to changes in the Kibana index pattern
- changes in Kibana touching the stored objects (e.g. moving the files around).
Since field changes requires a PR in APM Server and Kibana to be merged at the same time, a lot of PRs fail related to this test, although not directly related.
We should discuss how to improve this situation on a CI level. A few options:
(1) In case only this stage fails we could mark the build as instable.
(2) Run the test as a separate check outside of the `pr-merge`.
(3) Trigger the test only if something in `_meta/fields.common.yml` or in `_beats/libbeat/_meta` changed and on pushes to release branches and master.
I suggest to apply option (3), and maybe also option (2) to give a better overview on the PR what is failing.
label: automation
text:
deal with failing ci test for saved objects in kibana a ci check called check kibana object updated is run on every pr and on push to master this test runs a command in apm server to create the kibana index pattern based on the es template and then checks if the created index pattern is in sync with the one bundled in kibana for apm it test fails on following occasions updating libbeat changes inherited fields for the apm server leading to changes in the kibana index pattern changing fields directly in apm server leading to changes in the kibana index pattern changes in kibana touching the stored objects e g moving the files around since field changes requires a pr in apm server and kibana to be merged at the same time a lot of prs fail related to this test although not directly related we should discuss how to improve this situation on a ci level a few options in case only this stage fails we could mark the build as instable run the test as a separate check outside of the pr merge trigger the test only if something in meta fields common yml or in beats libbeat meta changed and on pushes to release branches and master i suggest to apply option and maybe also option to give a better overview on the pr what is failing
binary_label: 1

---
Unnamed: 0: 74,831
id: 3,448,883,569
type: IssuesEvent
created_at: 2015-12-16 10:46:45
repo: weaveworks/weave
repo_url: https://api.github.com/repos/weaveworks/weave
action: closed
title: work with dockers on domain sockets other than unix:///var/run/docker
labels: chore [component/proxy] [component/router] {priority/high}
body:
`weave launch` is not detecting docker socket if `DOCKER_HOST` is set to non-default unix socket
Docker daemon is listening on `unix:///var/run/docker-real.sock` and `$DOCKER_HOST=unix:///var/run/docker-real.sock`
`docker` commands works fine as expected. But `weave lauch` returns
`Cannot connect to the Docker daemon. Is 'docker -d' running on this host?`
**_Note:_**
_This is to achieve something similar here https://github.com/rancher/rancher/issues/2398 to integrate weave into Rancher_
index: 1.0
text_combine:
work with dockers on domain sockets other than unix:///var/run/docker - `weave launch` is not detecting docker socket if `DOCKER_HOST` is set to non-default unix socket
Docker daemon is listening on `unix:///var/run/docker-real.sock` and `$DOCKER_HOST=unix:///var/run/docker-real.sock`
`docker` commands works fine as expected. But `weave lauch` returns
`Cannot connect to the Docker daemon. Is 'docker -d' running on this host?`
**_Note:_**
_This is to achieve something similar here https://github.com/rancher/rancher/issues/2398 to integrate weave into Rancher_
label: non_automation
text:
work with dockers on domain sockets other than unix var run docker weave launch is not detecting docker socket if docker host is set to non default unix socket docker daemon is listening on unix var run docker real sock and docker host unix var run docker real sock docker commands works fine as expected but weave lauch returns cannot connect to the docker daemon is docker d running on this host note this is to achieve something similar here to integrate weave into rancher
binary_label: 0

---
Unnamed: 0: 415
id: 6,304,022,138
type: IssuesEvent
created_at: 2017-07-21 15:00:29
repo: blackbaud/skyux2
repo_url: https://api.github.com/repos/blackbaud/skyux2
action: closed
title: Run skyux visual tests through a skyux page
labels: automation
body:
Currently we are running our visual regression tests by using webpack to serve some component fixtures up locally, and then use the local Browserstack tunnel to test using multiple browsers.
This has a couple of drawbacks:
- The Browserstack local tunnel can be flakey and disconnect randomly at times
- Serving up our files with webpack doesn't allow us to have as many tests running in parallel, because they start slowing down to the point of failure as we add more.
- Our visual tests are not being run in a environment similar to our users (SKY UX host/builder/etc)
To solve this, we should find a way to build our visual tests as a SKY UX app, which our visual tests will then hit remotely.
| 1.0 | automation | 1
|
110,837 | 24,015,635,194 | IssuesEvent | 2022-09-15 00:10:46 | qhy040404/Library-One-Tap-Android | https://api.github.com/repos/qhy040404/Library-One-Tap-Android | closed | Rewrite AboutActivity to use partial chrome | enhancement large code low priority UI / UX external
### Enhancement propose
Better UX
### Solution

### Additional info
_No response_
| 1.0 | non_automation | 0
|
3,469 | 13,790,468,198 | IssuesEvent | 2020-10-09 10:28:35 | eventespresso/barista | https://api.github.com/repos/eventespresso/barista | closed | Rename ALL `barista-prod` Branches to `barista` | C: automation & deployment ⚙️ D: Packages 📦 P3: med priority 😐 T: task 🧹
Originally we thought there might also be the need for other barista branches like `barista-dev` in other repos but that doesn't look to be the case now so let's just simplify the naming for now (cuz I'm a lazy typist and that extra `-prod` is an unacceptable burden)
| 1.0 | automation | 1
|
90,451 | 15,856,158,066 | IssuesEvent | 2021-04-08 01:39:53 | heholek/practical-aspnetcore | https://api.github.com/repos/heholek/practical-aspnetcore | opened | CVE-2019-0564 (High) detected in microsoft.aspnetcore.app.2.1.1.nupkg | security vulnerability
## CVE-2019-0564 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.aspnetcore.app.2.1.1.nupkg</b></p></summary>
<p>Microsoft.AspNetCore.App</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg</a></p>
<p>Path to dependency file: practical-aspnetcore/projects/localization-5/localization-5.csproj</p>
<p>Path to vulnerable library: practical-aspnetcore/projects/localization-5/localization-5.csproj,practical-aspnetcore/projects/localization-6/localization-6.csproj</p>
<p>
Dependency Hierarchy:
- :x: **microsoft.aspnetcore.app.2.1.1.nupkg** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A denial of service vulnerability exists when ASP.NET Core improperly handles web requests, aka "ASP.NET Core Denial of Service Vulnerability." This affects ASP.NET Core 2.1. This CVE ID is unique from CVE-2019-0548.
<p>Publish Date: 2019-01-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0564>CVE-2019-0564</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/aspnet/Announcements/issues/334">https://github.com/aspnet/Announcements/issues/334</a></p>
<p>Release Date: 2019-01-08</p>
<p>Fix Resolution: Microsoft.AspNetCore.WebSockets - 2.1.7,2.2.1;Microsoft.AspNetCore.Server.Kestrel.Core - 2.1.7;System.Net.WebSockets.WebSocketProtocol - 4.5.3;Microsoft.NETCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.All - 2.1.7,2.2.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
| True | non_automation | 0
|
61,037 | 14,599,420,677 | IssuesEvent | 2020-12-21 04:08:27 | doamatto/phone-passcode-gen | https://api.github.com/repos/doamatto/phone-passcode-gen | closed | CVE-2019-6284 (Medium) detected in opennmsopennms-source-26.0.0-1, node-sass-4.14.1.tgz | security vulnerability
## CVE-2019-6284 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opennmsopennms-source-26.0.0-1</b>, <b>node-sass-4.14.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: phone-passcode-gen/package.json</p>
<p>Path to vulnerable library: phone-passcode-gen/node_modules/gulp-sass/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.1.0.tgz (Root Library)
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/doamatto/phone-passcode-gen/commit/9ddf2695e14fb4e1ed3b0dcbb49693b394383c4e">9ddf2695e14fb4e1ed3b0dcbb49693b394383c4e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6284>CVE-2019-6284</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
| True | non_automation | 0
|
1,919 | 11,097,189,215 | IssuesEvent | 2019-12-16 12:51:02 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | opened | FIM v2.0: Analysisd Integration tests: Error messages | automation component/fim
## Description
This issue covers the integration test for bad formated messages handling by analysisd. We will treat analysisd as a black box that receives integrity events by its input Unix socket, checking that the correct output is forwarded to the desired socket (simulating Wazuh DB).
Twelve use cases have been defined to check that the FIM event messages are handled properly. These cases should be implemented in the same test.
- [ ] No `timestamp` in a FIM scan message.
- [ ] No `type` in a FIM message
- [ ] Empty `type` in an event message.
- [ ] Incorrect `type` in an event message.
- [ ] The JSON in a DB sync message cannot be parsed.
- [ ] The item `component` cannot be parsed as a string in a DB sync message.
- [ ] The item `type` cannot be parsed as a string in a DB sync message.
- [ ] The item `type` is unknown in a DB sync message.
- [ ] No `data` field in a DB sync message.
**Input location**
The input location for all checks is the analysisd socket:
`/var/ossec/queue/ossec/queue`
**Output location**
The output location for all checks is `ossec.log` file:
`/var/ossec/logs/ossec.log`
## No `timestamp` in a FIM scan message
**Input message**:
`8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"scan_end","data":{}}`
**Output message**:
`No such member \"timestamp\" in FIM scan info event.`
## No `type` in a FIM message
**Input message**:
`8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"data":{"timestamp":1575442712}}`
**Output message**:
`Invalid FIM event`
## Empty `type` in an event message
**Input message**:
`8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"event","data":{"path":"/home/test/file","mode":"real-time","type":"NULL","timestamp":1575421671,"attributes":{"type":"file","size":5,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575421671,"hash_md5":"7be8ec9774fc128d067782134fbc37eb","hash_sha1":"fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7","hash_sha256":"ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7","checksum":"00eaef78d06924374cb291957a1f63e224d76320"},"changed_attributes":["size","mtime","md5","sha1","sha256"],"old_attributes":{"type":"file","size":18,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575416596,"hash_md5":"a3ee12884966cb2512805d2500361913","hash_sha1":"e6e8a61093715af1e4f2a3c0618ce014f0d94fde","hash_sha256":"79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a","checksum":"a02381378af3739e81bea813c1ff6e3d0027498d"}}}`
**Output message**:
`No member 'type' in Syscheck JSON payload`
## Incorrect event `type` in an event message
**Input message**:
`8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"event","data":{"path":"/home/test/file","mode":"real-time","type":"other","timestamp":1575421671,"attributes":{"type":"file","size":5,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575421671,"hash_md5":"7be8ec9774fc128d067782134fbc37eb","hash_sha1":"fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7","hash_sha256":"ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7","checksum":"00eaef78d06924374cb291957a1f63e224d76320"},"changed_attributes":["size","mtime","md5","sha1","sha256"],"old_attributes":{"type":"file","size":18,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575416596,"hash_md5":"a3ee12884966cb2512805d2500361913","hash_sha1":"e6e8a61093715af1e4f2a3c0618ce014f0d94fde","hash_sha256":"79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a","checksum":"a02381378af3739e81bea813c1ff6e3d0027498d"}}}`
**Output message**:
`Invalid 'type' value 'incorrect_value' in JSON payload.`
## The JSON in a DB sync message cannot be parsed
**Input message**:
`5:[001] (vm-test-agent) 192.168.57.2->syscheck:{{"component":"syscheck","type":"integrity_check_global","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}}`
**Output message**:
`dbsync: Cannot parse JSON: %s", lf->log`
## The item `component` cannot be parsed as a string in a DB sync message
**Input message**:
`5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"type":"integrity_check_global","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}}`
**Output message**:
`dbsync: Corrupt message: cannot get component member.`
## The item `type` cannot be parsed as a string in a DB sync message
**Input message**:
`5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"component":"syscheck","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}}`
**Output message**:
`dbsync: Corrupt message: cannot get type member.`
## No `data` field in a DB sync message
**Input message**:
`5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"component":"syscheck","type":"integrity_check_global","":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}}`
**Output message**:
`dbsync: Corrupt message: cannot get data member.`
| 1.0 | automation | 1
|
594,990 | 18,058,638,619 | IssuesEvent | 2021-09-20 11:30:17 | ita-social-projects/horondi_client_fe | https://api.github.com/repos/ita-social-projects/horondi_client_fe | closed | [News] 403 Forbidden error message shown | bug priority: medium cline part
**Environment:** macOS Big Sur 11.4, Firefox 89.0
**Reproducible:** always
**Build found:** 44d1c1b
**Pre-conditions:**
1. Go to https://horondi-front-staging.azurewebsites.net/ as a user
2. Open the console
**Description**
**Steps to reproduce:**
1. Go to the News page
2. Pay attention to error message in the console
**Actual result:**
'403 Forbidden' error message shown on the News page.
**Expected result:**
The user should get all the information from the News page.
<img width="1440" alt="403 Forbidden" src="https://user-images.githubusercontent.com/62054774/121446562-716d4600-c99c-11eb-881f-90a3046e251f.png">
[User story] #50
Ad-hoc
|
1.0
|
[News] 403 Forbidden error message shown - **Environment:** macOS Big Sur 11.4, Firefox 89.0
**Reproducible:** always
**Build found:** 44d1c1b
**Pre-conditions:**
1. Go to https://horondi-front-staging.azurewebsites.net/ as a user
2. Open the console
**Description**
**Steps to reproduce:**
1. Go to the News page
2. Pay attention to error message in the console
**Actual result:**
'403 Forbidden' error message shown on the News page.
**Expected result:**
The user should get all the information from the News page.
<img width="1440" alt="403 Forbidden" src="https://user-images.githubusercontent.com/62054774/121446562-716d4600-c99c-11eb-881f-90a3046e251f.png">
[User story] #50
Ad-hoc
|
non_automation
|
forbidden error message shown environment macos big sur firefox reproducible always build found pre conditions go to as a user open the console description steps to reproduce go to the news page pay attention to error message in the console actual result forbidden error message shown on the news page expected result the user should get all the information from the news page img width alt forbidden src ad hoc
| 0
|
133,527
| 12,543,554,587
|
IssuesEvent
|
2020-06-05 15:44:38
|
databrokerglobal/dxc
|
https://api.github.com/repos/databrokerglobal/dxc
|
closed
|
Make demo environment on Heroku for JTech
|
Priority: Medium documentation enhancement
|
1. Make a separate branch where we remove the local directory checking for demo purposes
2. Deploy on Heroku
|
1.0
|
Make demo environment on Heroku for JTech - 1. Make a separate branch where we remove the local directory checking for demo purposes
2. Deploy on Heroku
|
non_automation
|
make demo environment on heroku for jtech make a separate branch where we remove the local directory checking for demo purposes deploy on heroku
| 0
|
36,516
| 7,976,290,756
|
IssuesEvent
|
2018-07-17 12:13:23
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
FileUpload: auto upload bug
|
defect
|
If a user selects many files to upload and during the upload process presses `x` to remove a file while it's not uploaded yet - any files after it are not uploaded and JavaScript errors occur:
```
TypeError: g.row is null fileupload.js.xhtml:1:24907
TypeError: a is null fileupload.js.xhtml:1:29116
```
After this error upload doesn't work anymore.
I noticed that when auto upload is off, `x` buttons are disabled when upload begins, but not in auto upload mode.
`auto="false"`:

`auto="true"`:

|
1.0
|
FileUpload: auto upload bug - If a user selects many files to upload and during the upload process presses `x` to remove a file while it's not uploaded yet - any files after it are not uploaded and JavaScript errors occur:
```
TypeError: g.row is null fileupload.js.xhtml:1:24907
TypeError: a is null fileupload.js.xhtml:1:29116
```
After this error upload doesn't work anymore.
I noticed that when auto upload is off, `x` buttons are disabled when upload begins, but not in auto upload mode.
`auto="false"`:

`auto="true"`:

|
non_automation
|
fileupload auto upload bug if user selects many files to upload and during upload process presses x to remove file while it s not uploaded yet any files after it are not uploaded and javascript errors occurre typeerror g row is null fileupload js xhtml typeerror a is null fileupload js xhtml after this error upload doesn t work anymore i noticed then auto upload is off x buttons are disabled when upload begins but not in auto upload mode auto false auto true
| 0
|
3,092
| 13,063,544,294
|
IssuesEvent
|
2020-07-30 16:41:29
|
elastic/apm-integration-testing
|
https://api.github.com/repos/elastic/apm-integration-testing
|
closed
|
--no-XXXXbeat options does not disable beats when you use it with --all
|
[zube]: Backlog automation subtask
|
If you run the following command the docker-compose file will have beats for running and it should not
`scripts/compose.py start master --no-kibana --no-heartbeat --no-metricbeat --no-filebeat --all`
related to https://github.com/elastic/apm-integration-testing/pull/476
|
1.0
|
--no-XXXXbeat options does not disable beats when you use it with --all - If you run the following command the docker-compose file will have beats for running and it should not
`scripts/compose.py start master --no-kibana --no-heartbeat --no-metricbeat --no-filebeat --all`
related to https://github.com/elastic/apm-integration-testing/pull/476
|
automation
|
no xxxxbeat options does not disable beats when you use it with all if you run the following command the docker compose file will have beats for running and it should not scripts compose py start master no kibana no heartbeat no metricbeat no filebeat all related to
| 1
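The bug in the record above is a flag-precedence problem: `--all` enables every service, but an explicit `--no-<service>` flag should still win. A small sketch of the intended precedence with `argparse` (the flag names match the issue; the resolution logic is an illustrative assumption, not the actual `compose.py` code):

```python
import argparse

def build_services(argv):
    """Resolve which services to start: --all turns everything on, but an
    explicit --no-<service> flag must still disable that service (the
    behaviour the issue above reports as broken)."""
    services = ["kibana", "heartbeat", "metricbeat", "filebeat"]
    parser = argparse.ArgumentParser()
    parser.add_argument("--all", action="store_true")
    for name in services:
        parser.add_argument("--{}".format(name), action="store_true")
        parser.add_argument("--no-{}".format(name), dest="no_" + name,
                            action="store_true")
    args = parser.parse_args(argv)
    # A service runs if it was requested (directly or via --all)
    # and was not explicitly disabled.
    return {name for name in services
            if (args.all or getattr(args, name))
            and not getattr(args, "no_" + name)}
```

With this precedence, the command from the issue (`--no-kibana --no-heartbeat --no-metricbeat --no-filebeat --all`) would yield an empty service set instead of re-enabling the beats.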
|
72,025
| 18,975,887,470
|
IssuesEvent
|
2021-11-20 01:28:16
|
orbeon/orbeon-forms
|
https://api.github.com/repos/orbeon/orbeon-forms
|
opened
|
Delete publish form definition improvements
|
Module: Form Runner Module: Form Builder
|
Following #3597, some improvements would be welcome:
- Admin page: ability to delete all existing data
- Form Builder: when publishing a form definition, it would be nice to tell user if there is no published form definition BUT there exists data (if the form definition has been deleted), as that data might be incompatible
|
1.0
|
Delete publish form definition improvements - Following #3597, some improvements would be welcome:
- Admin page: ability to delete all existing data
- Form Builder: when publishing a form definition, it would be nice to tell user if there is no published form definition BUT there exists data (if the form definition has been deleted), as that data might be incompatible
|
non_automation
|
delete publish form definition improvements following some improvements would be welcome admin page ability to delete all existing data form builder when publishing a form definition it would be nice to tell user if there is no published form definition but there exists data if the form definition has been deleted as that data might be incompatible
| 0
|
324,499
| 9,904,702,201
|
IssuesEvent
|
2019-06-27 09:45:50
|
kubernetes-sigs/cluster-api-provider-gcp
|
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-gcp
|
closed
|
[FR] authentication with GCP
|
lifecycle/rotten priority/important-soon
|
Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305
|
1.0
|
[FR] authentication with GCP - Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305
|
non_automation
|
authentication with gcp currently the authentication is done via cloud service account allow authentication similar to that in
| 0
|
272,326
| 29,795,008,577
|
IssuesEvent
|
2023-06-16 01:03:48
|
billmcchesney1/hadoop
|
https://api.github.com/repos/billmcchesney1/hadoop
|
closed
|
CVE-2020-11023 (Medium) detected in multiple libraries - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.js</b>, <b>jquery-3.4.1.min.js</b>, <b>jquery-3.3.1.tgz</b>, <b>jquery-1.8.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to dependency file: /hadoop-tools/hadoop-sls/src/main/html/showSimulationTrace.html</p>
<p>Path to vulnerable library: /hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js,/hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/webapps/static/jquery-3.4.1.min.js,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/jquery-3.4.1.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/package.json</p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2020-11023 (Medium) detected in multiple libraries - autoclosed - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.js</b>, <b>jquery-3.4.1.min.js</b>, <b>jquery-3.3.1.tgz</b>, <b>jquery-1.8.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to dependency file: /hadoop-tools/hadoop-sls/src/main/html/showSimulationTrace.html</p>
<p>Path to vulnerable library: /hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js,/hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/webapps/static/jquery-3.4.1.min.js,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/jquery-3.4.1.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/package.json</p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_automation
|
cve medium detected in multiple libraries autoclosed cve medium severity vulnerability vulnerable libraries jquery js jquery min js jquery tgz jquery min js jquery js javascript library for dom operations library home page a href path to dependency file hadoop tools hadoop sls src main html showsimulationtrace html path to vulnerable library hadoop tools hadoop sls src main html js thirdparty jquery js hadoop tools hadoop sls src main html js thirdparty jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn common target classes webapps static jquery jquery min js hadoop hdfs project hadoop hdfs src main webapps static jquery min js hadoop hdfs project hadoop hdfs target webapps static jquery min js hadoop yarn project hadoop yarn hadoop yarn common src main resources webapps static jquery jquery min js hadoop hdfs project hadoop hdfs target test classes webapps static jquery min js dependency hierarchy x jquery min js vulnerable library jquery tgz javascript library for dom operations library home page a href path to dependency file hadoop yarn project hadoop yarn hadoop yarn applications hadoop yarn applications catalog hadoop yarn applications catalog webapp package json path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn applications hadoop yarn applications catalog hadoop yarn applications catalog webapp node modules jquery package json dependency hierarchy x jquery tgz vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules redeyed examples browser index html path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules redeyed examples browser index html hadoop yarn project hadoop yarn hadoop yarn ui src main 
webapp node modules bower lib node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch trunk vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr
| 0
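The advisory in the record above gives a clean version range: jQuery versions >= 1.0.3 and < 3.5.0 are affected. A minimal check of a dotted version string against that range (a sketch using plain tuple comparison; it assumes purely numeric dotted versions like those listed in the record):

```python
def parse_version(version):
    """Turn a dotted version string like '3.4.1' into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version):
    """CVE-2020-11023 affects jQuery >= 1.0.3 and < 3.5.0, per the advisory above."""
    return parse_version("1.0.3") <= parse_version(version) < parse_version("3.5.0")
```

Every bundled copy flagged in the record (3.3.1, 3.4.1, 1.8.1) falls inside that range, which is why the suggested fix is an upgrade to 3.5.0.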
|
8,713
| 27,172,157,287
|
IssuesEvent
|
2023-02-17 20:30:30
|
OneDrive/onedrive-api-docs
|
https://api.github.com/repos/OneDrive/onedrive-api-docs
|
closed
|
Permission Denied when using Graph API service to call Sharepoint with an Azure AD Guest account
|
type:bug status:investigating automation:Closed
|
My app is using Azure AD as an entry point to access both Sharepoint and website.
Good Case Scenario:
I login as an AD user, the app runs as it should. I can use both Graph Api and PNP SP to retrieve data from Sharepoint.
Issue:
If an external user (i.e. gmail, yahoo accounts) is used, the Graph Api throws permission denied error. I added the account on both the Azure AD and added it to the Sharepoint users. If I login to Sharepoint manually as an external user, the site will run perfectly fine. My guess is that the token that Graph API uses does not have the correct permissions to consume Sharepoint services. Can you please help?
#### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
|
1.0
|
Permission Denied when using Graph API service to call Sharepoint with an Azure AD Guest account - My app is using Azure AD as an entry point to access both Sharepoint and website.
Good Case Scenario:
I login as an AD user, the app runs as it should. I can use both Graph Api and PNP SP to retrieve data from Sharepoint.
Issue:
If an external user (i.e. gmail, yahoo accounts) is used, the Graph Api throws permission denied error. I added the account on both the Azure AD and added it to the Sharepoint users. If I login to Sharepoint manually as an external user, the site will run perfectly fine. My guess is that the token that Graph API uses does not have the correct permissions to consume Sharepoint services. Can you please help?
#### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
|
automation
|
permission denied when using graph api service to call sharepoint with an azure ad guest account my app is using azure ad as an entry point to access both sharepoint and website good case scenario i login as an ad user the app runs as it should i can use both graph api and pnp sp to retrieve data from sharepoint issue if an external user i e gmail yahoo accounts is used the graph api throws permission denied error i added the account on both the azure ad and added it to the sharepoint users if i login to sharepoint manually as an external user the site will run perfectly fine my guess is that the token that graph api uses does not have the correct permissions to consume sharepoint services can you please help category question documentation issue bug
| 1
|
1,799
| 10,789,898,892
|
IssuesEvent
|
2019-11-05 12:59:12
|
spacemeshos/go-spacemesh
|
https://api.github.com/repos/spacemeshos/go-spacemesh
|
closed
|
persist database to volume storage in k8s
|
Recovery & Shutdown TN-1.0 automation
|
# Overview / Motivation
Our pods in k8s running spacemesh allocate files for the database, and this database keeps growing (it is the mesh). k8s treats this storage as part of the pod memory, which means that if we have limits on memory we'll eventually reach them no matter what. We need to attach a persistent storage volume to the pod and save the database there.
# The Task
TODO: Clearly describe the issue requirements here...
# Implementation Notes
TODO: Add links to relevant resources, specs, related issues, etc...
# Contribution Guidelines
Important: Issue assignment to developers will be by the order of their application and proficiency level according to the task's complexity. We will not assign tasks to developers who haven't introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby)
1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task
2. Fork branch `develop` to your own repo and work in your repo
3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code)
4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature
5. When ready for code review, submit a PR from your repo back to branch `develop`
6. Attach relevant issue to PR
|
1.0
|
persist database to volume storage in k8s - # Overview / Motivation
Our pods in k8s running spacemesh allocate files for the database, and this database keeps growing (it is the mesh). k8s treats this storage as part of the pod memory, which means that if we have limits on memory we'll eventually reach them no matter what. We need to attach a persistent storage volume to the pod and save the database there.
# The Task
TODO: Clearly describe the issue requirements here...
# Implementation Notes
TODO: Add links to relevant resources, specs, related issues, etc...
# Contribution Guidelines
Important: Issue assignment to developers will be by the order of their application and proficiency level according to the task's complexity. We will not assign tasks to developers who haven't introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby)
1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task
2. Fork branch `develop` to your own repo and work in your repo
3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code)
4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature
5. When ready for code review, submit a PR from your repo back to branch `develop`
6. Attach relevant issue to PR
|
automation
|
persist database to volume storage in overview motivation our pods in running spacemesh allocate files for the database this database keeps growing it is the mesh treats this storage as part of the pod memory means if we have limits on memory we ll eventually reach them no matter what we need to attach a persistent storage volume to the pod and save the database there the task todo clearly describe the issue requirements here implementation notes todo add links to relevant resources specs related issues etc contribution guidelines important issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity we will not assign tasks to developers who have nt introduced themselves on our gitter introduce yourself on go spacemesh ask our team any question you may have about this task fork branch develop to your own repo and work in your repo you must document all methods enums and types with you must write go unit tests for all types and methods when submitting a component and integration tests if you submit a feature when ready for code review submit a pr from your repo back to branch develop attach relevant issue to pr
| 1
|
5,422
| 19,564,591,404
|
IssuesEvent
|
2022-01-03 21:32:53
|
mozilla-mobile/fenix
|
https://api.github.com/repos/mozilla-mobile/fenix
|
closed
|
Add hand curated parameter files for testing taskgraph changes locally
|
eng:automation needs:triage
|
This will help folks who are making changes to taskgraph test that they aren't making unexpected changes to other contexts (like a release graph). These files currently live here:
https://hg.mozilla.org/build/braindump/file/tip/taskcluster/taskgraph-diff/params-fenix
But having them in the actual repo is much more convenient. The standard spot we're placing them is in `taskcluster/test/params`.
|
1.0
|
Add hand curated parameter files for testing taskgraph changes locally - This will help folks who are making changes to taskgraph test that they aren't making unexpected changes to other contexts (like a release graph). These files currently live here:
https://hg.mozilla.org/build/braindump/file/tip/taskcluster/taskgraph-diff/params-fenix
But having them in the actual repo is much more convenient. The standard spot we're placing them is in `taskcluster/test/params`.
|
automation
|
add hand curated parameter files for testing taskgraph changes locally this will help folks who are making changes to taskgraph test that they aren t making unexpected changes to other contexts like a release graph these files currently live here but having them in the actual repo is much more convenient the standard spot we re placing them is in taskcluster test params
| 1
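Hand-curated parameter files like the ones discussed in the record above are useful precisely because they let you generate a graph from each parameter set and diff the results. A minimal sketch of loading such files and reporting which parameters differ (the JSON file format and key names here are assumptions for illustration; the real taskgraph parameter files may be YAML with different keys):

```python
import json
from pathlib import Path

def load_params(path):
    """Load one hand-curated parameter file. JSON is assumed for this
    sketch; the actual files under taskcluster/test/params may be YAML."""
    return json.loads(Path(path).read_text())

def diff_params(a, b):
    """Return the keys whose values differ between two parameter dicts -
    the starting point for a 'did my taskgraph change?' comparison."""
    return {key for key in set(a) | set(b) if a.get(key) != b.get(key)}
```

Running the graph generation once per parameter file and diffing the outputs is what catches unintended changes to other contexts (like a release graph) before they land.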
|
28,951
| 2,712,595,355
|
IssuesEvent
|
2015-04-09 14:40:31
|
HeinrichReimer/material-drawer
|
https://api.github.com/repos/HeinrichReimer/material-drawer
|
closed
|
CloseDrawer Lag
|
bug low priority question
|
Hey, first of all thanks for this great library, I just have 2 small problems:
- I start new activities in the OnItemClickListener for each item and want to close the drawer beforehand. Unfortunately the animation isn't finished when the new activity is started and it gets stuck for a brief moment. Is it possible to "wait" for the animation to finish? I'm not sure if I'm doing something wrong; here is the code for one item:
```java
drawer.addFixedItem(
new DrawerItem()
.setImage(getResources().getDrawable(R.drawable.ic_format_line_spacing_grey600_48dp))
.setTextPrimary(getString(R.string.drawer_sixth_item))
.setTextSecondary(getString(R.string.drawer_sixth_description))
.setOnItemClickListener(new DrawerItem.OnItemClickListener() {
@Override
public void onClick(DrawerItem drawerItem, int i, int position) {
drawerLayout.closeDrawer(drawer);
intent = new Intent(getApplicationContext(), Swipe.class);
intent.putExtra("toGo", 0);
startActivity(intent);
}
})
);
```
What is a good practice to get the drawer in every fragment or activity? Right now I created a base class and let every activity extend it, is this a good idea?
- Debug Log is getting spammed with methodcalls:
```
03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ DrawerView()
03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ init()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ findViews()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateProfile()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateProfile()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.328 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
```
Thanks for any help!
|
1.0
|
CloseDrawer Lag - Hey, first of all thanks for this great library, I just have 2 small problems:
- I start new activites in the OnItemClickListener for each item and want to close the drawer beforehand. Unfortunately the animation isnt finished when the new activity is started and it get stuck for a brief moment. Is it possible to "wait" for the animation to finish? I'm not sure if im doing something wrong, here is the code for one item:
```java
drawer.addFixedItem(
new DrawerItem()
.setImage(getResources().getDrawable(R.drawable.ic_format_line_spacing_grey600_48dp))
.setTextPrimary(getString(R.string.drawer_sixth_item))
.setTextSecondary(getString(R.string.drawer_sixth_description))
.setOnItemClickListener(new DrawerItem.OnItemClickListener() {
@Override
public void onClick(DrawerItem drawerItem, int i, int position) {
drawerLayout.closeDrawer(drawer);
intent = new Intent(getApplicationContext(), Swipe.class);
intent.putExtra("toGo", 0);
startActivity(intent);
}
})
);
```
What is a good practice to get the drawer in every fragment or activity? Right now I created a base class and let every activity extend it, is this a good idea?
- Debug Log is getting spammed with methodcalls:
```
03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ DrawerView()
03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ init()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ findViews()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateProfile()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateProfile()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.328 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateList()
03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateFixedList()
03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateListSpacing()
```
Thanks for any help!
|
non_automation
|
closedrawer lag hey first of all thanks for this great library i just have small problems i start new activites in the onitemclicklistener for each item and want to close the drawer beforehand unfortunately the animation isnt finished when the new activity is started and it get stuck for a brief moment is it possible to wait for the animation to finish i m not sure if im doing something wrong here is the code for one item java drawer addfixeditem new draweritem setimage getresources getdrawable r drawable ic format line spacing settextprimary getstring r string drawer sixth item settextsecondary getstring r string drawer sixth description setonitemclicklistener new draweritem onitemclicklistener override public void onclick draweritem draweritem int i int position drawerlayout closedrawer drawer intent new intent getapplicationcontext swipe class intent putextra togo startactivity intent what is a good practice to get the drawer in every fragment or activity right now i created a base class and let every activity extend it is this a good idea debug log is getting spammed with methodcalls com example d drawerview﹕ drawerview com example d drawerview﹕ init com example d drawerview﹕ findviews com example d drawerview﹕ updateprofile com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updateprofile com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing thanks for any help
| 0
|
2,034
| 11,296,524,170
|
IssuesEvent
|
2020-01-17 02:11:01
|
StoneCypher/fsl
|
https://api.github.com/repos/StoneCypher/fsl
|
opened
|
Add stale watchdog
|
Automation Chore Cleanup Research material Tooling needed
|
https://github.com/actions/stale
Figure out how to add this to the workflow
Also remember to configure close to 3,650,000 days (we only want the label)
|
1.0
|
Add stale watchdog - https://github.com/actions/stale
Figure out how to add this to the workflow
Also remember to configure close to 3,650,000 days (we only want the label)
|
automation
|
add stale watchdog figure out how to add this to the workflow also remember to configure close to days we only want the label
| 1
|
1,572
| 10,346,472,562
|
IssuesEvent
|
2019-09-04 15:20:12
|
ASL-LEX/asl-lex
|
https://api.github.com/repos/ASL-LEX/asl-lex
|
closed
|
Make a top level python script to pre-generate edge lists
|
automation
|
- [ ] Script should import PyND
- [ ] script should run PyND for a configurable list of features
Will update the criteria once Naomi sends those
|
1.0
|
Make a top level python script to pre-generate edge lists - - [ ] Script should import PyND
- [ ] script should run PyND for a configurable list of features
Will update the criteria once Naomi sends those
|
automation
|
make a top level python script to pre generate edge lists script should import pynd script should run pynd for a configurable list of features will update the criteria once naomi sends those
| 1
|
4,436
| 16,542,140,236
|
IssuesEvent
|
2021-05-27 18:15:31
|
rancher-sandbox/cOS-toolkit
|
https://api.github.com/repos/rancher-sandbox/cOS-toolkit
|
closed
|
ci: docker-build test fails for unavailable space
|
automation bug
|
Seems we run out of space in GH workers when building from the docker image
**cos-toolkit version:**
N/A
**CPU architecture, OS, and Version:**
N/A
**Describe the bug**
``` 📦 build/golang-1.16.4+3 🐋 Generating 'package' image from raccos/fedora:builder-b3dec7ea9a4bb0531b15ad057fa45532 as raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc with build steps
🐋 Downloaded image: raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc
📦 build/golang-1.16.4+3 🔨 Generating delta
Error: while resolving multi-stage images: failed building multi-stage image: Failed compiling build/golang-1.16.4+3: Error met while generating delta: Could not generate changes from layers: Error met while unpacking dst image raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: failed while extracting rootfs for raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: Failed exporting image: write /var/tmp/luet/extraction907140095/dst930592905/tmprootfs313814694/.docker_temp_510053849: no space left on device
```
**To Reproduce**
<!-- Steps to reproduce the behavior, including the luet command used -->
**Expected behavior**
Successful build
**Logs**
https://github.com/rancher-sandbox/cOS-toolkit/runs/2659492726
**Additional context**
<!-- Add any other context about the problem here. -->
|
1.0
|
ci: docker-build test fails for unavailable space - Seems we run out of space in GH workers when building from the docker image
**cos-toolkit version:**
N/A
**CPU architecture, OS, and Version:**
N/A
**Describe the bug**
``` 📦 build/golang-1.16.4+3 🐋 Generating 'package' image from raccos/fedora:builder-b3dec7ea9a4bb0531b15ad057fa45532 as raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc with build steps
🐋 Downloaded image: raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc
📦 build/golang-1.16.4+3 🔨 Generating delta
Error: while resolving multi-stage images: failed building multi-stage image: Failed compiling build/golang-1.16.4+3: Error met while generating delta: Could not generate changes from layers: Error met while unpacking dst image raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: failed while extracting rootfs for raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: Failed exporting image: write /var/tmp/luet/extraction907140095/dst930592905/tmprootfs313814694/.docker_temp_510053849: no space left on device
```
**To Reproduce**
<!-- Steps to reproduce the behavior, including the luet command used -->
**Expected behavior**
Successful build
**Logs**
https://github.com/rancher-sandbox/cOS-toolkit/runs/2659492726
**Additional context**
<!-- Add any other context about the problem here. -->
|
automation
|
ci docker build test fails for unavailable space seems we run out of space in gh workers when building from the docker image cos toolkit version n a cpu architecture os and version n a describe the bug 📦 build golang 🐋 generating package image from raccos fedora builder as raccos fedora with build steps 🐋 downloaded image raccos fedora 📦 build golang 🔨 generating delta error while resolving multi stage images failed building multi stage image failed compiling build golang error met while generating delta could not generate changes from layers error met while unpacking dst image raccos fedora failed while extracting rootfs for raccos fedora failed exporting image write var tmp luet docker temp no space left on device to reproduce expected behavior successful build logs additional context
| 1
|
30,914
| 11,860,123,272
|
IssuesEvent
|
2020-03-25 14:26:47
|
BrianMcDonaldWS/genie
|
https://api.github.com/repos/BrianMcDonaldWS/genie
|
opened
|
CVE-2019-0201 (Medium) detected in zookeeper-3.4.12.jar
|
security vulnerability
|
## CVE-2019-0201 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zookeeper-3.4.12.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /tmp/ws-scm/genie/genie-ui/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar</p>
<p>
Dependency Hierarchy:
- spring-integration-zookeeper-5.2.2.RELEASE.jar (Root Library)
- curator-recipes-4.0.1.jar
- curator-framework-4.0.1.jar
- curator-client-4.0.1.jar
- :x: **zookeeper-3.4.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/genie/commit/568866fb6e52bc93c68e71b643c3271128773566">568866fb6e52bc93c68e71b643c3271128773566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.
<p>Publish Date: 2019-05-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201>CVE-2019-0201</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://zookeeper.apache.org/security.html">https://zookeeper.apache.org/security.html</a></p>
<p>Release Date: 2019-05-23</p>
<p>Fix Resolution: 3.4.14, 3.5.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.zookeeper","packageName":"zookeeper","packageVersion":"3.4.12","isTransitiveDependency":true,"dependencyTree":"org.springframework.integration:spring-integration-zookeeper:5.2.2.RELEASE;org.apache.curator:curator-recipes:4.0.1;org.apache.curator:curator-framework:4.0.1;org.apache.curator:curator-client:4.0.1;org.apache.zookeeper:zookeeper:3.4.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.14, 3.5.5"}],"vulnerabilityIdentifier":"CVE-2019-0201","vulnerabilityDetails":"An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-0201 (Medium) detected in zookeeper-3.4.12.jar - ## CVE-2019-0201 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zookeeper-3.4.12.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /tmp/ws-scm/genie/genie-ui/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar</p>
<p>
Dependency Hierarchy:
- spring-integration-zookeeper-5.2.2.RELEASE.jar (Root Library)
- curator-recipes-4.0.1.jar
- curator-framework-4.0.1.jar
- curator-client-4.0.1.jar
- :x: **zookeeper-3.4.12.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/genie/commit/568866fb6e52bc93c68e71b643c3271128773566">568866fb6e52bc93c68e71b643c3271128773566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.
<p>Publish Date: 2019-05-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201>CVE-2019-0201</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://zookeeper.apache.org/security.html">https://zookeeper.apache.org/security.html</a></p>
<p>Release Date: 2019-05-23</p>
<p>Fix Resolution: 3.4.14, 3.5.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.zookeeper","packageName":"zookeeper","packageVersion":"3.4.12","isTransitiveDependency":true,"dependencyTree":"org.springframework.integration:spring-integration-zookeeper:5.2.2.RELEASE;org.apache.curator:curator-recipes:4.0.1;org.apache.curator:curator-framework:4.0.1;org.apache.curator:curator-client:4.0.1;org.apache.zookeeper:zookeeper:3.4.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.14, 3.5.5"}],"vulnerabilityIdentifier":"CVE-2019-0201","vulnerabilityDetails":"An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_automation
|
cve medium detected in zookeeper jar cve medium severity vulnerability vulnerable library zookeeper jar path to dependency file tmp ws scm genie genie ui build gradle path to vulnerable library root gradle caches modules files org apache zookeeper zookeeper zookeeper jar root gradle caches modules files org apache zookeeper zookeeper zookeeper jar dependency hierarchy spring integration zookeeper release jar root library curator recipes jar curator framework jar curator client jar x zookeeper jar vulnerable library found in head commit a href vulnerability details an issue is present in apache zookeeper to and alpha to beta zookeeper’s getacl command doesn’t check any permission when retrieves the acls of the requested node and returns all information contained in the acl id field as plaintext string digestauthenticationprovider overloads the id field with the hash value that is used for user authentication as a consequence if digest authentication is in use the unsalted hash value will be disclosed by getacl request for unauthenticated or unprivileged users publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails an issue is present in apache zookeeper to and alpha to beta zookeeper’s getacl command doesn’t check any permission when retrieves the acls of the requested node and returns all information contained in the acl id field as plaintext string digestauthenticationprovider overloads the id field with the hash value that is used for user authentication as a consequence if digest authentication is in use the unsalted hash value will be disclosed by getacl request for unauthenticated or unprivileged users vulnerabilityurl
| 0
|
5,546
| 20,031,617,376
|
IssuesEvent
|
2022-02-02 07:01:55
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Update of the information
|
automation/svc triaged cxp doc-enhancement Pri2
|
I would suggest the following update to the information on this age.
4. In your Log Analytics workspace, select Computer Groups from the left-hand menu.
5. From Computer Groups in the right-hand pane, the Saved groups tab is shown by default.
6. From the table, click the icon Run query to the right of the item MicrosoftDefaultComputerGroup.
7. In the query editor, change from Tables to Functions. Find the Updates_MicrosoftDefaultComputerGroup and click on it and hold the mouse cursor over it which will show more options, click on the load the function code.
8. The review the code and find the UUID for the machine. Remove the UUID for the machine and repeat the steps for any other machines you want to remove.
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9a94d637-558c-b26e-a1de-c4381aa6783c
* Version Independent ID: d8c47851-0ac5-3932-e1e1-e224285e7476
* Content: [Remove machines from Azure Automation Update Management](https://docs.microsoft.com/en-us/azure/automation/update-management/remove-vms?tabs=azure-vm)
* Content Source: [articles/automation/update-management/remove-vms.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/remove-vms.md)
* Service: **automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
1.0
|
Update of the information - I would suggest the following update to the information on this age.
4. In your Log Analytics workspace, select Computer Groups from the left-hand menu.
5. From Computer Groups in the right-hand pane, the Saved groups tab is shown by default.
6. From the table, click the icon Run query to the right of the item MicrosoftDefaultComputerGroup.
7. In the query editor, change from Tables to Functions. Find the Updates_MicrosoftDefaultComputerGroup and click on it and hold the mouse cursor over it which will show more options, click on the load the function code.
8. The review the code and find the UUID for the machine. Remove the UUID for the machine and repeat the steps for any other machines you want to remove.
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9a94d637-558c-b26e-a1de-c4381aa6783c
* Version Independent ID: d8c47851-0ac5-3932-e1e1-e224285e7476
* Content: [Remove machines from Azure Automation Update Management](https://docs.microsoft.com/en-us/azure/automation/update-management/remove-vms?tabs=azure-vm)
* Content Source: [articles/automation/update-management/remove-vms.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/remove-vms.md)
* Service: **automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
automation
|
update of the information i would suggest the following update to the information on this age in your log analytics workspace select computer groups from the left hand menu from computer groups in the right hand pane the saved groups tab is shown by default from the table click the icon run query to the right of the item microsoftdefaultcomputergroup in the query editor change from tables to functions find the updates microsoftdefaultcomputergroup and click on it and hold the mouse cursor over it which will show more options click on the load the function code the review the code and find the uuid for the machine remove the uuid for the machine and repeat the steps for any other machines you want to remove document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login sgsneha microsoft alias v ssudhir
| 1
|
105,764
| 9,100,680,243
|
IssuesEvent
|
2019-02-20 09:12:48
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
closed
|
Test : ApiV1ProjectsIdNewAutocodeconfigPostAutocodeconfiguseraAllowAbact3positive
|
test
|
Project : Test
Job : Default
Env : Default
Category : null
Tags : null
Severity : null
Region : US_WEST
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects//new/autocodeconfig
Request :
{
"createdBy" : "",
"createdDate" : "",
"genPolicy" : "None",
"generators" : [ {
"abacResources" : [ {
"createBody" : "WqtCiOB7",
"createEndpoint" : "WqtCiOB7",
"createUserAuth" : "WqtCiOB7",
"createdBy" : "",
"createdDate" : "",
"deleteEndpoint" : "WqtCiOB7",
"enumValues" : "WqtCiOB7",
"generatorId" : "WqtCiOB7",
"id" : "",
"inactive" : false,
"initScriptName" : "WqtCiOB7",
"lock" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"resourceName" : "WqtCiOB7",
"scripts" : [ {
"body" : "WqtCiOB7",
"deleteEndPoint" : "WqtCiOB7",
"endpoint" : "WqtCiOB7",
"resourceName" : "WqtCiOB7",
"scriptName" : "WqtCiOB7",
"scriptType" : "WqtCiOB7",
"sequence" : "851820671",
"userAuth" : "WqtCiOB7",
"validationScript" : false
} ],
"typeThreeCreateEndpoint" : "WqtCiOB7",
"validations" : [ {
"body" : "WqtCiOB7",
"endpoint" : "WqtCiOB7",
"inactive" : false,
"lock" : false,
"path" : "WqtCiOB7",
"userAuth" : "WqtCiOB7",
"validationType" : "WqtCiOB7"
} ],
"version" : ""
} ],
"assertionDescription" : "WqtCiOB7",
"assertions" : [ "WqtCiOB7" ],
"assertionsText" : "WqtCiOB7",
"authors" : "WqtCiOB7",
"category" : "Null_Value",
"coverageMultiplier" : "851820671",
"currentScripts" : "851820671",
"database" : {
"name" : "WqtCiOB7",
"version" : ""
},
"displayHeaderDescription" : "WqtCiOB7",
"displayHeaderLabel" : "WqtCiOB7",
"expectedScripts" : "851820671",
"fixHours" : "WqtCiOB7",
"id" : "",
"inactive" : false,
"matches" : [ {
"allowRoles" : "WqtCiOB7",
"bodyProperties" : "WqtCiOB7",
"denyRoles" : "WqtCiOB7",
"id" : "",
"methods" : "WqtCiOB7",
"name" : "WqtCiOB7",
"pathPatterns" : "WqtCiOB7",
"queryParams" : "WqtCiOB7",
"resourceSamples" : "WqtCiOB7",
"value" : "WqtCiOB7"
} ],
"newlyAdded" : false,
"project" : {
"account" : {
"accountType" : "Http",
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"region" : "WqtCiOB7",
"version" : ""
},
"autoGenSuites" : "851820671",
"branch" : "WqtCiOB7",
"createdBy" : "",
"createdDate" : "",
"description" : "WqtCiOB7",
"genPolicy" : "None",
"id" : "",
"inactive" : false,
"isFileLoad" : "WqtCiOB7",
"issueTracker" : {
"account" : "WqtCiOB7",
"accountType" : "GitLab",
"id" : "",
"name" : "WqtCiOB7",
"projectKey" : "WqtCiOB7",
"url" : "WqtCiOB7"
},
"lastCommit" : "WqtCiOB7",
"lastSync" : null,
"licenses" : [ "WqtCiOB7" ],
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"notifications" : [ {
"account" : "WqtCiOB7",
"channel" : "WqtCiOB7",
"id" : "",
"name" : "WqtCiOB7",
"to" : "WqtCiOB7"
} ],
"openAPISpec" : "WqtCiOB7",
"openText" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"props" : null,
"url" : "WqtCiOB7",
"version" : ""
},
"sequenceOrder" : "851820671",
"severity" : "Minor",
"tags" : [ "WqtCiOB7" ],
"type" : "WqtCiOB7"
} ],
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"openAPISpec" : "WqtCiOB7",
"project" : {
"account" : {
"accountType" : "Http",
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"region" : "WqtCiOB7",
"version" : ""
},
"autoGenSuites" : "851820671",
"branch" : "WqtCiOB7",
"createdBy" : "",
"createdDate" : "",
"description" : "WqtCiOB7",
"genPolicy" : "None",
"id" : "",
"inactive" : false,
"isFileLoad" : "WqtCiOB7",
"issueTracker" : {
"account" : "WqtCiOB7",
"accountType" : "GitLab",
"id" : "",
"name" : "WqtCiOB7",
"projectKey" : "WqtCiOB7",
"url" : "WqtCiOB7"
},
"lastCommit" : "WqtCiOB7",
"lastSync" : null,
"licenses" : [ "WqtCiOB7" ],
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"notifications" : [ {
"account" : "WqtCiOB7",
"channel" : "WqtCiOB7",
"id" : "",
"name" : "WqtCiOB7",
"to" : "WqtCiOB7"
} ],
"openAPISpec" : "WqtCiOB7",
"openText" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"props" : null,
"url" : "WqtCiOB7",
"version" : ""
},
"version" : ""
}
Response :
I/O error on POST request for "http://13.56.210.25/api/v1/api/v1/projects/new/autocodeconfig": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
Logs :
Assertion [@StatusCode == 401 OR @StatusCode == 403 OR @Response.errors == true] resolved-to [500 == 401 OR 500 == 403 OR == true] result [Failed]
--- FX Bot ---
|
1.0
|
Test : ApiV1ProjectsIdNewAutocodeconfigPostAutocodeconfiguseraAllowAbact3positive - Project : Test
Job : Default
Env : Default
Category : null
Tags : null
Severity : null
Region : US_WEST
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects//new/autocodeconfig
Request :
{
"createdBy" : "",
"createdDate" : "",
"genPolicy" : "None",
"generators" : [ {
"abacResources" : [ {
"createBody" : "WqtCiOB7",
"createEndpoint" : "WqtCiOB7",
"createUserAuth" : "WqtCiOB7",
"createdBy" : "",
"createdDate" : "",
"deleteEndpoint" : "WqtCiOB7",
"enumValues" : "WqtCiOB7",
"generatorId" : "WqtCiOB7",
"id" : "",
"inactive" : false,
"initScriptName" : "WqtCiOB7",
"lock" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"resourceName" : "WqtCiOB7",
"scripts" : [ {
"body" : "WqtCiOB7",
"deleteEndPoint" : "WqtCiOB7",
"endpoint" : "WqtCiOB7",
"resourceName" : "WqtCiOB7",
"scriptName" : "WqtCiOB7",
"scriptType" : "WqtCiOB7",
"sequence" : "851820671",
"userAuth" : "WqtCiOB7",
"validationScript" : false
} ],
"typeThreeCreateEndpoint" : "WqtCiOB7",
"validations" : [ {
"body" : "WqtCiOB7",
"endpoint" : "WqtCiOB7",
"inactive" : false,
"lock" : false,
"path" : "WqtCiOB7",
"userAuth" : "WqtCiOB7",
"validationType" : "WqtCiOB7"
} ],
"version" : ""
} ],
"assertionDescription" : "WqtCiOB7",
"assertions" : [ "WqtCiOB7" ],
"assertionsText" : "WqtCiOB7",
"authors" : "WqtCiOB7",
"category" : "Null_Value",
"coverageMultiplier" : "851820671",
"currentScripts" : "851820671",
"database" : {
"name" : "WqtCiOB7",
"version" : ""
},
"displayHeaderDescription" : "WqtCiOB7",
"displayHeaderLabel" : "WqtCiOB7",
"expectedScripts" : "851820671",
"fixHours" : "WqtCiOB7",
"id" : "",
"inactive" : false,
"matches" : [ {
"allowRoles" : "WqtCiOB7",
"bodyProperties" : "WqtCiOB7",
"denyRoles" : "WqtCiOB7",
"id" : "",
"methods" : "WqtCiOB7",
"name" : "WqtCiOB7",
"pathPatterns" : "WqtCiOB7",
"queryParams" : "WqtCiOB7",
"resourceSamples" : "WqtCiOB7",
"value" : "WqtCiOB7"
} ],
"newlyAdded" : false,
"project" : {
"account" : {
"accountType" : "Http",
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"region" : "WqtCiOB7",
"version" : ""
},
"autoGenSuites" : "851820671",
"branch" : "WqtCiOB7",
"createdBy" : "",
"createdDate" : "",
"description" : "WqtCiOB7",
"genPolicy" : "None",
"id" : "",
"inactive" : false,
"isFileLoad" : "WqtCiOB7",
"issueTracker" : {
"account" : "WqtCiOB7",
"accountType" : "GitLab",
"id" : "",
"name" : "WqtCiOB7",
"projectKey" : "WqtCiOB7",
"url" : "WqtCiOB7"
},
"lastCommit" : "WqtCiOB7",
"lastSync" : null,
"licenses" : [ "WqtCiOB7" ],
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"notifications" : [ {
"account" : "WqtCiOB7",
"channel" : "WqtCiOB7",
"id" : "",
"name" : "WqtCiOB7",
"to" : "WqtCiOB7"
} ],
"openAPISpec" : "WqtCiOB7",
"openText" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"props" : null,
"url" : "WqtCiOB7",
"version" : ""
},
"sequenceOrder" : "851820671",
"severity" : "Minor",
"tags" : [ "WqtCiOB7" ],
"type" : "WqtCiOB7"
} ],
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"openAPISpec" : "WqtCiOB7",
"project" : {
"account" : {
"accountType" : "Http",
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"region" : "WqtCiOB7",
"version" : ""
},
"autoGenSuites" : "851820671",
"branch" : "WqtCiOB7",
"createdBy" : "",
"createdDate" : "",
"description" : "WqtCiOB7",
"genPolicy" : "None",
"id" : "",
"inactive" : false,
"isFileLoad" : "WqtCiOB7",
"issueTracker" : {
"account" : "WqtCiOB7",
"accountType" : "GitLab",
"id" : "",
"name" : "WqtCiOB7",
"projectKey" : "WqtCiOB7",
"url" : "WqtCiOB7"
},
"lastCommit" : "WqtCiOB7",
"lastSync" : null,
"licenses" : [ "WqtCiOB7" ],
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"notifications" : [ {
"account" : "WqtCiOB7",
"channel" : "WqtCiOB7",
"id" : "",
"name" : "WqtCiOB7",
"to" : "WqtCiOB7"
} ],
"openAPISpec" : "WqtCiOB7",
"openText" : "WqtCiOB7",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "WqtCiOB7",
"version" : ""
},
"props" : null,
"url" : "WqtCiOB7",
"version" : ""
},
"version" : ""
}
Response :
I/O error on POST request for "http://13.56.210.25/api/v1/api/v1/projects/new/autocodeconfig": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
Logs :
Assertion [@StatusCode == 401 OR @StatusCode == 403 OR @Response.errors == true] resolved-to [500 == 401 OR 500 == 403 OR == true] result [Failed]
--- FX Bot ---
|
non_automation
|
test project test job default env default category null tags null severity null region us west result fail status code headers endpoint request createdby createddate genpolicy none generators abacresources createbody createendpoint createuserauth createdby createddate deleteendpoint enumvalues generatorid id inactive false initscriptname lock false modifiedby modifieddate resourcename scripts body deleteendpoint endpoint resourcename scriptname scripttype sequence userauth validationscript false typethreecreateendpoint validations body endpoint inactive false lock false path userauth validationtype version assertiondescription assertions assertionstext authors category null value coveragemultiplier currentscripts database name version displayheaderdescription displayheaderlabel expectedscripts fixhours id inactive false matches allowroles bodyproperties denyroles id methods name pathpatterns queryparams resourcesamples value newlyadded false project account accounttype http createdby createddate id inactive false modifiedby modifieddate name org createdby createddate id inactive false modifiedby modifieddate name version region version autogensuites branch createdby createddate description genpolicy none id inactive false isfileload issuetracker account accounttype gitlab id name projectkey url lastcommit lastsync null licenses modifiedby modifieddate name notifications account channel id name to openapispec opentext org createdby createddate id inactive false modifiedby modifieddate name version props null url version sequenceorder severity minor tags type id inactive false modifiedby modifieddate openapispec project account accounttype http createdby createddate id inactive false modifiedby modifieddate name org createdby createddate id inactive false modifiedby modifieddate name version region version autogensuites branch createdby createddate description genpolicy none id inactive false isfileload issuetracker account accounttype gitlab id name projectkey url 
lastcommit lastsync null licenses modifiedby modifieddate name notifications account channel id name to openapispec opentext org createdby createddate id inactive false modifiedby modifieddate name version props null url version version response i o error on post request for read timed out nested exception is java net sockettimeoutexception read timed out logs assertion resolved to result fx bot
| 0
|
1,009
| 12,179,383,253
|
IssuesEvent
|
2020-04-28 10:34:49
|
rook/rook
|
https://api.github.com/repos/rook/rook
|
closed
|
Convert the Ceph Cluster controller to the controller-runtime
|
ceph - feature reliability
|
**Is this a bug report or feature request?**
* Feature Request
**What should the feature do:**
Convert the [CephCluster controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/cluster/controller.go) to be managed with the controller-runtime.
Currently Rook only has a simple watch in an informer as seen [here](https://github.com/rook/rook/blob/master/pkg/operator/k8sutil/customresource.go#L54).
**What is use case behind this feature:**
The controller runtime will improve reliability of the operator in several areas:
- Events can be re-queued if failed or the operator is not able to complete the operation
- Exponential backoff is provided automatically for re-queued events
- Waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re-queued.
Several controllers in Rook are using the controller runtime. For examples, see the [pool controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/pool/controller.go) or [disruption budget](https://github.com/rook/rook/blob/master/pkg/operator/ceph/disruption/clusterdisruption/reconcile.go) controller.
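The re-queue and exponential-backoff behaviour described in this issue is provided by the Go controller-runtime library; purely as an illustration of that concept (not Rook's actual code, and in Python rather than Go), the retry loop can be sketched as:

```python
import time

def reconcile_with_backoff(reconcile, max_attempts=5, base_delay=0.01):
    """Retry a reconcile function, doubling the delay after each failure,
    mimicking how a re-queued event is retried with exponential backoff."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if reconcile():
            return attempt  # number of attempts it took to succeed
        if attempt < max_attempts:
            time.sleep(delay)
            delay *= 2  # back off before the event is re-queued
    return None  # gave up; the event would stay queued for a later resync
```

In the real controller-runtime, a `Reconcile` returning an error (or `Result{Requeue: true}`) is re-queued by the workqueue with rate limiting, so the controller never blocks other events while waiting.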
|
True
|
Convert the Ceph Cluster controller to the controller-runtime - **Is this a bug report or feature request?**
* Feature Request
**What should the feature do:**
Convert the [CephCluster controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/cluster/controller.go) to be managed with the controller-runtime.
Currently Rook only has a simple watch in an informer as seen [here](https://github.com/rook/rook/blob/master/pkg/operator/k8sutil/customresource.go#L54).
**What is use case behind this feature:**
The controller runtime will improve reliability of the operator in several areas:
- Events can be re-queued if failed or the operator is not able to complete the operation
- Exponential backoff is provided automatically for re-queued events
- Waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re-queued.
Several controllers in Rook are using the controller runtime. For examples, see the [pool controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/pool/controller.go) or [disruption budget](https://github.com/rook/rook/blob/master/pkg/operator/ceph/disruption/clusterdisruption/reconcile.go) controller.
|
non_automation
|
convert the ceph cluster controller to the controller runtime is this a bug report or feature request feature request what should the feature do convert the to be managed with the controller runtime currently rook only has a simple watch in an informer as seen what is use case behind this feature the controller runtime will improve reliability of the operator in several areas events can be re queued if failed or the operator is not able to complete the operation exponential backoff is provided automatically for re queued events waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re queued several controllers in rook are using the controller runtime for examples see the or controller
| 0
|
617,468
| 19,358,763,011
|
IssuesEvent
|
2021-12-16 00:55:39
|
UC-Davis-molecular-computing/scadnano
|
https://api.github.com/repos/UC-Davis-molecular-computing/scadnano
|
closed
|
domain names move when switching orientation of strand
|
bug high priority closed in dev
|
Take a strand with domain labels:

Drag it to reverse its orientation:

The domain labels should stay in the same order 5' to 3', but they have reversed (since they are in the same "screen order" but now the strand is pointing the other way).
See also issue #654, which is a similar issue (but on the design with that issue, this issue does not show up.)
|
1.0
|
domain names move when switching orientation of strand - Take a strand with domain labels:

Drag it to reverse its orientation:

The domain labels should stay in the same order 5' to 3', but they have reversed (since they are in the same "screen order" but now the strand is pointing the other way).
See also issue #654, which is a similar issue (but on the design with that issue, this issue does not show up.)
|
non_automation
|
domain names move when switching orientation of strand take a strand with domain labels drag it to reverse its orientation the domain labels should stay in the same order to but they have reversed since they are in the same screen order but now the strand is pointing the other way see also issue which is a similar issue but on the design with that issue this issue does not show up
| 0
|
5,442
| 19,604,874,410
|
IssuesEvent
|
2022-01-06 08:07:27
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
tikv have not logs saved in k8s
|
type/bug severity/major found/automation
|
## Bug Report
<!-- Thanks for your bug report! Don't worry if you can't fill out all the sections. -->
### What version of TiKV are you using?
/ # ./tikv-server -V
TiKV
Release Version: 5.4.0-alpha
Edition: Community
Git Commit Hash: 99b3436
Git Commit Branch: heads/refs/tags/v5.4.0-nightly
UTC Build Time: 2022-01-04 01:15:55
Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27)
Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure
Profile: dist_release
### What operating system and CPU are you using?
8core 16G
### Steps to reproduce
no matter
### What did you expect?
tikv logs can be saved
### What did happened?
tikv have not logs saved in k8s

|
1.0
|
tikv have not logs saved in k8s - ## Bug Report
<!-- Thanks for your bug report! Don't worry if you can't fill out all the sections. -->
### What version of TiKV are you using?
/ # ./tikv-server -V
TiKV
Release Version: 5.4.0-alpha
Edition: Community
Git Commit Hash: 99b3436
Git Commit Branch: heads/refs/tags/v5.4.0-nightly
UTC Build Time: 2022-01-04 01:15:55
Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27)
Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure
Profile: dist_release
### What operating system and CPU are you using?
8core 16G
### Steps to reproduce
no matter
### What did you expect?
tikv logs can be saved
### What did happened?
tikv have not logs saved in k8s

|
automation
|
tikv have not logs saved in bug report what version of tikv are you using tikv server v tikv release version alpha edition community git commit hash git commit branch heads refs tags nightly utc build time rust version rustc nightly enable features jemalloc mem profiling portable sse test engines rocksdb cloud aws cloud gcp cloud azure profile dist release what operating system and cpu are you using steps to reproduce no matter what did you expect tikv logs can be saved what did happened tikv have not logs saved in
| 1
|
4,779
| 17,461,992,914
|
IssuesEvent
|
2021-08-06 11:52:46
|
iGEM-Engineering/iGEM-distribution
|
https://api.github.com/repos/iGEM-Engineering/iGEM-distribution
|
opened
|
Detect twins
|
automation
|
Some parts are likely to be submitted that will be twins of other parts with different names but the same sequence.
We should automatically search for twins.
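A minimal sketch of such a twin search, assuming parts can be represented as a name-to-sequence mapping (the distribution's real data model is richer than this):

```python
from collections import defaultdict

def find_twins(parts):
    """parts: dict mapping part name -> DNA sequence.
    Returns groups of two or more names that share an identical sequence."""
    by_sequence = defaultdict(list)
    for name, sequence in parts.items():
        # Normalise case so 'atgc' and 'ATGC' count as the same sequence
        by_sequence[sequence.upper()].append(name)
    return [sorted(names) for names in by_sequence.values() if len(names) > 1]
```

For large part sets, comparing a hash of each (possibly canonicalised) sequence instead of the raw string keeps the grouping cheap.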
|
1.0
|
Detect twins - Some parts are likely to be submitted that will be twins of other parts with different names but the same sequence.
We should automatically search for twins.
|
automation
|
detect twins some parts are likely to be submitted that will be twins of other parts with different names but the same sequence we should automatically search for twins
| 1
|
9,705
| 30,305,902,687
|
IssuesEvent
|
2023-07-10 09:27:19
|
litentry/litentry-parachain
|
https://api.github.com/repos/litentry/litentry-parachain
|
closed
|
Create a script/GHA to tell if sidechain on staging works
|
I3-high D6-automation
|
### Context
It's possible that we get error notifications from the staging-sidechain but it still functions.
Before we restart it, it's better to test if "it still works" in the first place. We need a script/GHA for that, similar to ts-test but more lightweight and accurate.
---
:heavy_check_mark: Please set appropriate **labels** and **assignees** if applicable.
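One lightweight liveness check in the spirit of this request is to sample the sidechain's block height twice and confirm it advances; `get_block_number` below is a stand-in for whatever RPC call the sidechain actually exposes (an assumption, not the project's API):

```python
import time

def sidechain_is_live(get_block_number, wait=1.0):
    """Consider the sidechain 'working' if its block height advances
    between two samples taken `wait` seconds apart.
    get_block_number: zero-argument callable returning the current height."""
    first = get_block_number()
    time.sleep(wait)
    second = get_block_number()
    return second > first
```

Wired into a GitHub Actions step, a non-zero exit from this check could gate the restart instead of reacting to error notifications alone.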
|
1.0
|
Create a script/GHA to tell if sidechain on staging works - ### Context
It's possible that we get error notifications from the staging-sidechain but it still functions.
Before we restart it, it's better to test if "it still works" in the first place. We need a script/GHA for that, similar to ts-test but more lightweight and accurate.
---
:heavy_check_mark: Please set appropriate **labels** and **assignees** if applicable.
|
automation
|
create a script gha to tell if sidechain on staging works context it s possible that we get error notifications from the staging sidechain but it still functions before we restart it it s better to test if it still works in the first place we need a script gha for that similar to ts test but more light weighted and accurate heavy check mark please set appropriate labels and assignees if applicable
| 1
|
735,083
| 25,378,400,605
|
IssuesEvent
|
2022-11-21 15:41:50
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.msn.com - design is broken
|
browser-firefox priority-critical engine-gecko
|
<!-- @browser: Firefox 107.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0 -->
<!-- @reported_with: addon-reporter-firefox -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/114405 -->
**URL**: https://www.msn.com/en-gb/news/world/meet-sergei-shoigu-russia-s-minister-of-defense-and-possible-successor-to-putin/ss-AAUS24n?cvid=a02f3578ae1540b8bb158c8a9636917c#image=2
**Browser / Version**: Firefox 107.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Design is broken
**Description**: Items are misaligned
**Steps to Reproduce**:
The design is shifted to the right compared with Edge
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/0a03e9c5-1f0e-46db-8015-e82a7f975eed.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.msn.com - design is broken - <!-- @browser: Firefox 107.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0 -->
<!-- @reported_with: addon-reporter-firefox -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/114405 -->
**URL**: https://www.msn.com/en-gb/news/world/meet-sergei-shoigu-russia-s-minister-of-defense-and-possible-successor-to-putin/ss-AAUS24n?cvid=a02f3578ae1540b8bb158c8a9636917c#image=2
**Browser / Version**: Firefox 107.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Design is broken
**Description**: Items are misaligned
**Steps to Reproduce**:
The design is shifted to the right compared with Edge
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/0a03e9c5-1f0e-46db-8015-e82a7f975eed.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_automation
|
design is broken url browser version firefox operating system windows tested another browser yes edge problem type design is broken description items are misaligned steps to reproduce the design is shifted to the right compared with edge view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
7,384
| 24,755,789,626
|
IssuesEvent
|
2022-10-21 17:35:22
|
o3de/o3de
|
https://api.github.com/repos/o3de/o3de
|
closed
|
test_InstantiatePrefab_LevelPrefab fails on Linux
|
kind/bug priority/major kind/automation feature/prefabs
|
**Describe the bug**
test_InstantiatePrefab_LevelPrefab fails on Linux
```
[2022-10-21T07:06:43.565Z] E [editor_test.log] EXCEPTION raised:
[2022-10-21T07:06:43.565Z] E [editor_test.log] Traceback (most recent call last):
[2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 328, in start_test
[2022-10-21T07:06:43.565Z] E [editor_test.log] test_function()
[2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/Prefab/tests/instantiate_prefab/InstantiatePrefab_LevelPrefab.py", line 30, in InstantiatePrefab_LevelPrefab
[2022-10-21T07:06:43.565Z] E [editor_test.log] test_level_prefab = Prefab.get_prefab(test_level_prefab_path)
[2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/prefab_utils.py", line 201, in get_prefab
[2022-10-21T07:06:43.565Z] E [editor_test.log] assert Prefab.prefab_exists(file_path), f"Attempted to get a prefab \"{file_path}\" that doesn't exist"
[2022-10-21T07:06:43.565Z] E [editor_test.log] AssertionError: Attempted to get a prefab "levels/prefab/QuitOnSuccessfulSpawn/QuitOnSuccessfulSpawn.prefab" that doesn't exist
[2022-10-21T07:06:43.565Z] E [editor_test.log] Test result: FAILURE
```
**Failed Jenkins Job Information:**
[The name of the job that failed, job build number, and code snippet of the failure.](https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE_periodic-incremental-daily/detail/development/137/pipeline/797)
**Additional context**
Looks to be due to a casing issue with the prefab file path:
```
test_level_prefab_path = os.path.join("levels", "prefab", "QuitOnSuccessfulSpawn", "QuitOnSuccessfulSpawn.prefab")
```
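One way to make a test tolerant of on-disk casing (a sketch of the general technique, not necessarily the fix o3de shipped) is to resolve each path component case-insensitively against the actual directory entries:

```python
import os

def resolve_case_insensitive(root, relpath):
    """Walk `relpath` under `root`, matching each component case-insensitively.
    Returns the on-disk path, or None if some component has no match."""
    current = root
    for part in relpath.replace("\\", "/").split("/"):
        try:
            entries = os.listdir(current)
        except OSError:
            return None  # current is a file or unreadable
        match = next((e for e in entries if e.lower() == part.lower()), None)
        if match is None:
            return None
        current = os.path.join(current, match)
    return current
```

This is why the test passes on Windows (case-insensitive filesystem) but fails on Linux, where `levels/prefab/...` and `Levels/Prefab/...` are different paths.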
|
1.0
|
test_InstantiatePrefab_LevelPrefab fails on Linux - **Describe the bug**
test_InstantiatePrefab_LevelPrefab fails on Linux
```
[2022-10-21T07:06:43.565Z] E [editor_test.log] EXCEPTION raised:
[2022-10-21T07:06:43.565Z] E [editor_test.log] Traceback (most recent call last):
[2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 328, in start_test
[2022-10-21T07:06:43.565Z] E [editor_test.log] test_function()
[2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/Prefab/tests/instantiate_prefab/InstantiatePrefab_LevelPrefab.py", line 30, in InstantiatePrefab_LevelPrefab
[2022-10-21T07:06:43.565Z] E [editor_test.log] test_level_prefab = Prefab.get_prefab(test_level_prefab_path)
[2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/prefab_utils.py", line 201, in get_prefab
[2022-10-21T07:06:43.565Z] E [editor_test.log] assert Prefab.prefab_exists(file_path), f"Attempted to get a prefab \"{file_path}\" that doesn't exist"
[2022-10-21T07:06:43.565Z] E [editor_test.log] AssertionError: Attempted to get a prefab "levels/prefab/QuitOnSuccessfulSpawn/QuitOnSuccessfulSpawn.prefab" that doesn't exist
[2022-10-21T07:06:43.565Z] E [editor_test.log] Test result: FAILURE
```
**Failed Jenkins Job Information:**
[The name of the job that failed, job build number, and code snippet of the failure.](https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE_periodic-incremental-daily/detail/development/137/pipeline/797)
**Additional context**
Looks to be due to a casing issue with the prefab file path:
```
test_level_prefab_path = os.path.join("levels", "prefab", "QuitOnSuccessfulSpawn", "QuitOnSuccessfulSpawn.prefab")
```
|
automation
|
test instantiateprefab levelprefab fails on linux describe the bug test instantiateprefab levelprefab fails on linux e exception raised e traceback most recent call last e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools utils py line in start test e test function e file data workspace automatedtesting gem pythontests prefab tests instantiate prefab instantiateprefab levelprefab py line in instantiateprefab levelprefab e test level prefab prefab get prefab test level prefab path e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools prefab utils py line in get prefab e assert prefab prefab exists file path f attempted to get a prefab file path that doesn t exist e assertionerror attempted to get a prefab levels prefab quitonsuccessfulspawn quitonsuccessfulspawn prefab that doesn t exist e test result failure failed jenkins job information additional context looks to be due to a casing issue with the prefab file path test level prefab path os path join levels prefab quitonsuccessfulspawn quitonsuccessfulspawn prefab
| 1
|
2,905
| 12,754,313,341
|
IssuesEvent
|
2020-06-28 04:31:52
|
chavarera/python-mini-projects
|
https://api.github.com/repos/chavarera/python-mini-projects
|
closed
|
Add Watermark on Set of images
|
Automation
|
**Adding watermark to multiple images using one command.**
Ask the user for the input of specific folder containing images
and watermark image
`Enter Folder Path : E:\Bootstrap\-hotel\redplanet\redplanet\images`
`Enter Watermark Path : E:\python\image watermark\watermark.png`
The output should be in the same folder
`output/filename`
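The batch/output layout this issue asks for (results written to an `output/` subfolder under the same directory) can be sketched as below. The actual pixel compositing would typically use Pillow's `Image.paste` with an alpha mask; here it is left as a pluggable `apply_watermark` callable (an assumption for illustration, not the reporter's code) so the folder handling stands on its own:

```python
import os

def watermark_folder(folder, watermark_path, apply_watermark,
                     extensions=(".png", ".jpg", ".jpeg")):
    """Call apply_watermark(image_path, watermark_path, out_path) for every
    image in `folder`, writing results into an `output/` subfolder.
    Returns the list of output paths."""
    out_dir = os.path.join(folder, "output")
    os.makedirs(out_dir, exist_ok=True)
    written = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(extensions):
            out_path = os.path.join(out_dir, name)
            apply_watermark(os.path.join(folder, name), watermark_path, out_path)
            written.append(out_path)
    return written
```

With Pillow installed, `apply_watermark` could open both images in RGBA mode, paste the watermark using itself as the mask, and save to `out_path`.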
|
1.0
|
Add Watermark on Set of images - **Adding watermark to multiple images using one command.**
Ask the user for the input of specific folder containing images
and watermark image
`Enter Folder Path : E:\Bootstrap\-hotel\redplanet\redplanet\images`
`Enter Watermark Path : E:\python\image watermark\watermark.png`
The output should be in the same folder
`output/filename`
|
automation
|
add watermark on set of images adding watermark to multiple images using one command ask the user for the input of specific folder containing images and watermark image enter folder path e bootstrap hotel redplanet redplanet images enter watermark path e python image watermark watermark png the output should be in the same folder output filename
| 1
|
53,026
| 7,803,352,325
|
IssuesEvent
|
2018-06-10 22:48:22
|
vitessio/vitess
|
https://api.github.com/repos/vitessio/vitess
|
closed
|
Guide to use VItess on AWS kubernetes
|
P3 Type: Documentation
|
Hi,
We want to move off Amazon RDS and use Vitess on kubernetes. I am not able to find any documentation for that.
Please provide any pointer to use Vitess in AWS Kube.
|
1.0
|
Guide to use VItess on AWS kubernetes - Hi,
We want to move off Amazon RDS and use Vitess on kubernetes. I am not able to find any documentation for that.
Please provide any pointer to use Vitess in AWS Kube.
|
non_automation
|
guide to use vitess on aws kubernetes hi we want to move of amazon rds and use vitess on kubernetes i am not able to find any documentation for that please provide any pointer to use vitess in aws kube
| 0
|
45,721
| 2,938,844,454
|
IssuesEvent
|
2015-07-01 13:24:40
|
moneymanagerex/android-money-manager-ex
|
https://api.github.com/repos/moneymanagerex/android-money-manager-ex
|
closed
|
Investigate automatic Dropbox sync, possible cause of exceptions
|
priority
|
The automatic Dropbox sync could be causing the torrent of Illegal State exceptions.
Requires detailed investigation.
DropboxServiceIntent, method downloadFile.
|
1.0
|
Investigate automatic Dropbox sync, possible cause of exceptions - The automatic Dropbox sync could be causing the torrent of Illegal State exceptions.
Requires detailed investigation.
DropboxServiceIntent, method downloadFile.
|
non_automation
|
investigate automatic dropbox sync possible cause of exceptions the automatic dropbox sync could be causing the torrent of illegal state exceptions requires detailed investigation dropboxserviceintent method downloadfile
| 0
|
9,778
| 4,641,460,267
|
IssuesEvent
|
2016-09-30 04:59:10
|
debugworkbench/hydragon
|
https://api.github.com/repos/debugworkbench/hydragon
|
closed
|
Consider replacing DefinitelyTyped typings
|
build Status: Pending Type: Cleanup
|
Seems like https://github.com/typings/typings claims to work with proper external module based typings instead of just ambient external module typings. I'm not entirely sure how it manages to work with TypeScript's node-like module resolution, but that should be easy enough to test with the typings at https://github.com/typings/typed-source-map
If it works as claimed it would be nice to convert the Electron typings over to the proper external module d.ts format.
|
1.0
|
Consider replacing DefinitelyTyped typings - Seems like https://github.com/typings/typings claims to work with proper external module based typings instead of just ambient external module typings. I'm not entirely sure how it manages to work with TypeScript's node-like module resolution, but that should be easy enough to test with the typings at https://github.com/typings/typed-source-map
If it works as claimed it would be nice to convert the Electron typings over to the proper external module d.ts format.
|
non_automation
|
consider replacing definitelytyped typings seems like claims to work with proper external module based typings instead of just ambient external module typings i m not entirely sure how it manages to work with typescript s node like module resolution but that should be easy enough to test with the typings at if it works as claimed it would be nice to convert the electron typings over to the proper external module d ts format
| 0
|
367,024
| 25,715,205,500
|
IssuesEvent
|
2022-12-07 09:50:50
|
zcash/secant-android-wallet
|
https://api.github.com/repos/zcash/secant-android-wallet
|
opened
|
Testing documentation update
|
documentation enhancement
|
## Is your feature request related to a problem? Please describe.
We'd like to have our approach to testing better documented.
## Describe the solution you'd like
Ideally, it'd be one `docs/testing/Testing.md` file, which outlines possibly all corners of how we test the app:
- automated tests (unit x instrumented)
- integration tests
- manual tests
- tests run on CI
- benchmark tests
- screenshot tests
- etc.
|
1.0
|
Testing documentation update - ## Is your feature request related to a problem? Please describe.
We'd like to have our approach to testing better documented.
## Describe the solution you'd like
Ideally, it'd be one `docs/testing/Testing.md` file, which outlines possibly all corners of how we test the app:
- automated tests (unit x instrumented)
- integration tests
- manual tests
- tests run on CI
- benchmark tests
- screenshot tests
- etc.
|
non_automation
|
testing documentation update is your feature request related to a problem please describe we d like to have our approach to testing better documented describe the solution you d like ideally it d be one docs testing testing md file which outlines possibly all corners of how we test the app automated tests unit x instrumented integration tests manual tests tests run on ci benchmark tests screenshot tests etc
| 0
|
750,809
| 26,218,549,951
|
IssuesEvent
|
2023-01-04 13:05:25
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.homedepot.ca - see bug description
|
browser-firefox priority-normal engine-gecko
|
<!-- @browser: Firefox 108.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:108.0) Gecko/20100101 Firefox/108.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/116295 -->
**URL**: https://www.homedepot.ca/checkout
**Browser / Version**: Firefox 108.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: Get "Unknown Error" when trying to checkout on Firefox
**Steps to Reproduce**:
When trying to checkout on Firefox I get a message that says "Unknown Error". This error doesn't show up on Chrome-based browsers.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/31c3787e-b1ee-4028-9481-715dc83ea342.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.homedepot.ca - see bug description - <!-- @browser: Firefox 108.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:108.0) Gecko/20100101 Firefox/108.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/116295 -->
**URL**: https://www.homedepot.ca/checkout
**Browser / Version**: Firefox 108.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: Get "Unknown Error" when trying to checkout on Firefox
**Steps to Reproduce**:
When trying to checkout on Firefox I get a message that says "Unknown Error". This error doesn't show up on Chrome-based browsers.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/31c3787e-b1ee-4028-9481-715dc83ea342.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_automation
|
see bug description url browser version firefox operating system windows tested another browser yes edge problem type something else description get unknown error when trying to checkout on firefox steps to reproduce when trying to checkout on firefox i get a message that says unknown error this error doesn t show up on chrome based browsers view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
2,005
| 11,256,337,208
|
IssuesEvent
|
2020-01-12 15:35:01
|
spacemeshos/go-spacemesh
|
https://api.github.com/repos/spacemeshos/go-spacemesh
|
closed
|
Monitoring system
|
Epic automation monitoring
|
# The Motivation
During testnet and mainnet we would want to collect information that will allow us to track different flows in the system and detect problems and trends without having to rely on data coming from the nodes
# The Requirement
In general, add functionality to gather this information from a tap on the network.
Few examples of info that we would like to collect -
1. Histogram of node versions (can be gathered from handshake messages)
2. Participating in Hare committees
3. PoET usages
|
1.0
|
Monitoring system - # The Motivation
During testnet and mainnet we would want to collect information that will allow us to track different flows in the system and detect problems and trends without having to rely on data coming from the nodes
# The Requirement
In general, add functionality to gather this information from a tap on the network.
Few examples of info that we would like to collect -
1. Histogram of node versions (can be gathered from handshake messages)
2. Participating in Hare committees
3. PoET usages
|
automation
|
monitoring system the motivation during testnet and mainnet we would want to collect information that will allow us to track different flows in the system and detect problems and trends without having to rely on data coming from the nodes the requirement in general add functionality to gather this information from a tap on the network few examples of info that we would like to collect histogram of node versions can be gathered from handshake messages participating in hare committees poet usages
| 1
|
3,941
| 15,014,667,312
|
IssuesEvent
|
2021-02-01 07:02:43
|
MISP/MISP
|
https://api.github.com/repos/MISP/MISP
|
closed
|
MISP Automation , not working properly. event wise data is not getting downloaded.
|
T: support automation
|
Hello,
I am trying to automate the process of suricata rules export . I am trying this API format :
https://[misp url]/events/nids/[format]/download/[eventid]/[frame]/[tags]/[from]/[to]/[last]
my final API would be, let say if I want to export just for event 6:
https://[misp url]/events/nids/suricata/download/6
the above event wise api is not working for any specific event id, it is exporting all the rules from all events.
even when I am trying to export all the suricata rules with the api:
https://[misp url]/events/nids/suricata/download
it is leaving my eventa 6, 4, 1207.. to download the suricata rule for. means it is not completed. though these evens contains IDS published attributes.
please let me have a solution here.
|
1.0
|
MISP Automation , not working properly. event wise data is not getting downloaded. - Hello,
I am trying to automate the process of suricata rules export . I am trying this API format :
https://[misp url]/events/nids/[format]/download/[eventid]/[frame]/[tags]/[from]/[to]/[last]
my final API would be, let say if I want to export just for event 6:
https://[misp url]/events/nids/suricata/download/6
the above event wise api is not working for any specific event id, it is exporting all the rules from all events.
even when I am trying to export all the suricata rules with the api:
https://[misp url]/events/nids/suricata/download
it is leaving my eventa 6, 4, 1207.. to download the suricata rule for. means it is not completed. though these evens contains IDS published attributes.
please let me have a solution here.
|
automation
|
misp automation not working properly event wise data is not getting downloaded hello i am trying to automate the process of suricata rules export i am trying this api format https events nids download my final api would be let say if i want to export just for event https events nids suricata download the above event wise api is not working for any specific event id it is exporting all the rules from all events even when i am trying to export all the suricata rules with the api https events nids suricata download it is leaving my eventa to download the suricata rule for means it is not completed though these evens contains ids published attributes please let me have a solution here
| 1
|
8,405
| 26,916,862,458
|
IssuesEvent
|
2023-02-07 07:23:09
|
red-hat-storage/ocs-ci
|
https://api.github.com/repos/red-hat-storage/ocs-ci
|
opened
|
UI Deployment - Timed Out: Local Storage Installation status is not Succeeded after 300 seconds
|
bug ui_automation
|
UI deployment of vSphere UPI Encryption with LSO is failing on following timeout:
```
2023-02-06 10:04:32 ocs_ci/ocs/ui/deployment_ui.py:436: in install_ocs_ui
2023-02-06 10:04:32 self.install_local_storage_operator()
2023-02-06 10:04:32 ocs_ci/ocs/ui/deployment_ui.py:117: in install_local_storage_operator
2023-02-06 10:04:32 self.verify_operator_succeeded(operator="Local Storage")
2023-02-06 10:04:32 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2023-02-06 10:04:32
2023-02-06 10:04:32 self = <ocs_ci.ocs.ui.deployment_ui.DeploymentUI object at 0x7fb4a0cc5e20>
2023-02-06 10:04:32 operator = 'Local Storage', timeout_install = 300, sleep = 20
2023-02-06 10:04:32
2023-02-06 10:04:32 def verify_operator_succeeded(
2023-02-06 10:04:32 self, operator=OCS_OPERATOR, timeout_install=300, sleep=20
2023-02-06 10:04:32 ):
2023-02-06 10:04:32 """
2023-02-06 10:04:32 Verify Operator Installation
2023-02-06 10:04:32
2023-02-06 10:04:32 Args:
2023-02-06 10:04:32 operator (str): type of operator
2023-02-06 10:04:32 timeout_install (int): Time in seconds to wait
2023-02-06 10:04:32 sleep (int): Sampling time in seconds
2023-02-06 10:04:32
2023-02-06 10:04:32 """
2023-02-06 10:04:32 self.search_operator_installed_operators_page(operator=operator)
2023-02-06 10:04:32 time.sleep(5)
2023-02-06 10:04:32 sample = TimeoutSampler(
2023-02-06 10:04:32 timeout=timeout_install,
2023-02-06 10:04:32 sleep=sleep,
2023-02-06 10:04:32 func=self.check_element_text,
2023-02-06 10:04:32 expected_text="Succeeded",
2023-02-06 10:04:32 )
2023-02-06 10:04:32 if not sample.wait_for_func_status(result=True):
2023-02-06 10:04:32 logger.error(
2023-02-06 10:04:32 f"{operator} Installation status is not Succeeded after {timeout_install} seconds"
2023-02-06 10:04:32 )
2023-02-06 10:04:32 self.take_screenshot()
2023-02-06 10:04:32 > raise TimeoutExpiredError(
2023-02-06 10:04:32 f"{operator} Installation status is not Succeeded after {timeout_install} seconds"
2023-02-06 10:04:32 )
2023-02-06 10:04:32 E ocs_ci.ocs.exceptions.TimeoutExpiredError: Timed Out: Local Storage Installation status is not Succeeded after 300 seconds
2023-02-06 10:04:32
2023-02-06 10:04:32 ocs_ci/ocs/ui/deployment_ui.py:387: TimeoutExpiredError
```
Failed jobs:
* https://url.corp.redhat.com/134c72f
* https://url.corp.redhat.com/562d5f1
Last screenshot taken in the during the failure looks like this:

But when I've connected later to debug this issue, the _Local Storage_ operator was correctly installed:

|
1.0
|
UI Deployment - Timed Out: Local Storage Installation status is not Succeeded after 300 seconds - UI deployment of vSphere UPI Encryption with LSO is failing on following timeout:
```
2023-02-06 10:04:32 ocs_ci/ocs/ui/deployment_ui.py:436: in install_ocs_ui
2023-02-06 10:04:32 self.install_local_storage_operator()
2023-02-06 10:04:32 ocs_ci/ocs/ui/deployment_ui.py:117: in install_local_storage_operator
2023-02-06 10:04:32 self.verify_operator_succeeded(operator="Local Storage")
2023-02-06 10:04:32 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2023-02-06 10:04:32
2023-02-06 10:04:32 self = <ocs_ci.ocs.ui.deployment_ui.DeploymentUI object at 0x7fb4a0cc5e20>
2023-02-06 10:04:32 operator = 'Local Storage', timeout_install = 300, sleep = 20
2023-02-06 10:04:32
2023-02-06 10:04:32 def verify_operator_succeeded(
2023-02-06 10:04:32 self, operator=OCS_OPERATOR, timeout_install=300, sleep=20
2023-02-06 10:04:32 ):
2023-02-06 10:04:32 """
2023-02-06 10:04:32 Verify Operator Installation
2023-02-06 10:04:32
2023-02-06 10:04:32 Args:
2023-02-06 10:04:32 operator (str): type of operator
2023-02-06 10:04:32 timeout_install (int): Time in seconds to wait
2023-02-06 10:04:32 sleep (int): Sampling time in seconds
2023-02-06 10:04:32
2023-02-06 10:04:32 """
2023-02-06 10:04:32 self.search_operator_installed_operators_page(operator=operator)
2023-02-06 10:04:32 time.sleep(5)
2023-02-06 10:04:32 sample = TimeoutSampler(
2023-02-06 10:04:32 timeout=timeout_install,
2023-02-06 10:04:32 sleep=sleep,
2023-02-06 10:04:32 func=self.check_element_text,
2023-02-06 10:04:32 expected_text="Succeeded",
2023-02-06 10:04:32 )
2023-02-06 10:04:32 if not sample.wait_for_func_status(result=True):
2023-02-06 10:04:32 logger.error(
2023-02-06 10:04:32 f"{operator} Installation status is not Succeeded after {timeout_install} seconds"
2023-02-06 10:04:32 )
2023-02-06 10:04:32 self.take_screenshot()
2023-02-06 10:04:32 > raise TimeoutExpiredError(
2023-02-06 10:04:32 f"{operator} Installation status is not Succeeded after {timeout_install} seconds"
2023-02-06 10:04:32 )
2023-02-06 10:04:32 E ocs_ci.ocs.exceptions.TimeoutExpiredError: Timed Out: Local Storage Installation status is not Succeeded after 300 seconds
2023-02-06 10:04:32
2023-02-06 10:04:32 ocs_ci/ocs/ui/deployment_ui.py:387: TimeoutExpiredError
```
Failed jobs:
* https://url.corp.redhat.com/134c72f
* https://url.corp.redhat.com/562d5f1
Last screenshot taken in the during the failure looks like this:

But when I've connected later to debug this issue, the _Local Storage_ operator was correctly installed:

|
automation
|
ui deployment timed out local storage installation status is not succeeded after seconds ui deployment of vsphere upi encryption with lso is failing on following timeout ocs ci ocs ui deployment ui py in install ocs ui self install local storage operator ocs ci ocs ui deployment ui py in install local storage operator self verify operator succeeded operator local storage self operator local storage timeout install sleep def verify operator succeeded self operator ocs operator timeout install sleep verify operator installation args operator str type of operator timeout install int time in seconds to wait sleep int sampling time in seconds self search operator installed operators page operator operator time sleep sample timeoutsampler timeout timeout install sleep sleep func self check element text expected text succeeded if not sample wait for func status result true logger error f operator installation status is not succeeded after timeout install seconds self take screenshot raise timeoutexpirederror f operator installation status is not succeeded after timeout install seconds e ocs ci ocs exceptions timeoutexpirederror timed out local storage installation status is not succeeded after seconds ocs ci ocs ui deployment ui py timeoutexpirederror failed jobs last screenshot taken in the during the failure looks like this but when i ve connected later to debug this issue the local storage operator was correctly installed
| 1
|
252,633
| 27,253,492,002
|
IssuesEvent
|
2023-02-22 09:50:51
|
ManideepJ11/WebGoat
|
https://api.github.com/repos/ManideepJ11/WebGoat
|
opened
|
spring-boot-starter-undertow-2.7.1.jar: 2 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-undertow-2.7.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ManideepJ11/WebGoat/commit/35a8d45d303047855bb1510970bb5f9397272cd6">35a8d45d303047855bb1510970bb5f9397272cd6</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-boot-starter-undertow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-0084](https://www.mend.io/vulnerability-database/CVE-2022-0084) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | xnio-api-3.8.7.Final.jar | Transitive | N/A* | ❌ |
| [CVE-2022-2053](https://www.mend.io/vulnerability-database/CVE-2022-2053) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | undertow-core-2.2.18.Final.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-0084</summary>
### Vulnerable Library - <b>xnio-api-3.8.7.Final.jar</b></p>
<p>The API JAR of the XNIO project</p>
<p>Library home page: <a href="http://www.jboss.org/xnio">http://www.jboss.org/xnio</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- undertow-core-2.2.18.Final.jar
- :x: **xnio-api-3.8.7.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ManideepJ11/WebGoat/commit/35a8d45d303047855bb1510970bb5f9397272cd6">35a8d45d303047855bb1510970bb5f9397272cd6</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A flaw was found in XNIO, specifically in the notifyReadClosed method. The issue revealed this method was logging a message to another expected end. This flaw allows an attacker to send flawed requests to a server, possibly causing log contention-related performance concerns or an unwanted disk fill-up.
<p>Publish Date: 2022-08-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0084>CVE-2022-0084</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-08-26</p>
<p>Fix Resolution: org.jboss.xnio:xnio-api:3.8.8.Final</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-2053</summary>
### Vulnerable Library - <b>undertow-core-2.2.18.Final.jar</b></p>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/undertow/undertow-core/2.2.18.Final/undertow-core-2.2.18.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- :x: **undertow-core-2.2.18.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ManideepJ11/WebGoat/commit/35a8d45d303047855bb1510970bb5f9397272cd6">35a8d45d303047855bb1510970bb5f9397272cd6</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When a POST request comes through AJP and the request exceeds the max-post-size limit (maxEntitySize), Undertow's AjpServerRequestConduit implementation closes a connection without sending any response to the client/proxy. This behavior results in that a front-end proxy marking the backend worker (application server) as an error state and not forward requests to the worker for a while. In mod_cluster, this continues until the next STATUS request (10 seconds intervals) from the application server updates the server state. So, in the worst case, it can result in "All workers are in error state" and mod_cluster responds "503 Service Unavailable" for a while (up to 10 seconds). In mod_proxy_balancer, it does not forward requests to the worker until the "retry" timeout passes. However, luckily, mod_proxy_balancer has "forcerecovery" setting (On by default; this parameter can force the immediate recovery of all workers without considering the retry parameter of the workers if all workers of a balancer are in error state.). So, unlike mod_cluster, mod_proxy_balancer does not result in responding "503 Service Unavailable". An attacker could use this behavior to send a malicious request and trigger server errors, resulting in DoS (denial of service). This flaw was fixed in Undertow 2.2.19.Final, Undertow 2.3.0.Alpha2.
<p>Publish Date: 2022-08-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2053>CVE-2022-2053</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-95rf-557x-44g5">https://github.com/advisories/GHSA-95rf-557x-44g5</a></p>
<p>Release Date: 2022-08-05</p>
<p>Fix Resolution: io.undertow:undertow-core:2.2.19.Final</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
spring-boot-starter-undertow-2.7.1.jar: 2 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-undertow-2.7.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ManideepJ11/WebGoat/commit/35a8d45d303047855bb1510970bb5f9397272cd6">35a8d45d303047855bb1510970bb5f9397272cd6</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-boot-starter-undertow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-0084](https://www.mend.io/vulnerability-database/CVE-2022-0084) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | xnio-api-3.8.7.Final.jar | Transitive | N/A* | ❌ |
| [CVE-2022-2053](https://www.mend.io/vulnerability-database/CVE-2022-2053) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | undertow-core-2.2.18.Final.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-0084</summary>
### Vulnerable Library - <b>xnio-api-3.8.7.Final.jar</b></p>
<p>The API JAR of the XNIO project</p>
<p>Library home page: <a href="http://www.jboss.org/xnio">http://www.jboss.org/xnio</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- undertow-core-2.2.18.Final.jar
- :x: **xnio-api-3.8.7.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ManideepJ11/WebGoat/commit/35a8d45d303047855bb1510970bb5f9397272cd6">35a8d45d303047855bb1510970bb5f9397272cd6</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A flaw was found in XNIO, specifically in the notifyReadClosed method. The issue revealed this method was logging a message to another expected end. This flaw allows an attacker to send flawed requests to a server, possibly causing log contention-related performance concerns or an unwanted disk fill-up.
<p>Publish Date: 2022-08-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0084>CVE-2022-0084</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-08-26</p>
<p>Fix Resolution: org.jboss.xnio:xnio-api:3.8.8.Final</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-2053</summary>
### Vulnerable Library - <b>undertow-core-2.2.18.Final.jar</b></p>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/undertow/undertow-core/2.2.18.Final/undertow-core-2.2.18.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- :x: **undertow-core-2.2.18.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ManideepJ11/WebGoat/commit/35a8d45d303047855bb1510970bb5f9397272cd6">35a8d45d303047855bb1510970bb5f9397272cd6</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When a POST request comes through AJP and the request exceeds the max-post-size limit (maxEntitySize), Undertow's AjpServerRequestConduit implementation closes a connection without sending any response to the client/proxy. This behavior results in that a front-end proxy marking the backend worker (application server) as an error state and not forward requests to the worker for a while. In mod_cluster, this continues until the next STATUS request (10 seconds intervals) from the application server updates the server state. So, in the worst case, it can result in "All workers are in error state" and mod_cluster responds "503 Service Unavailable" for a while (up to 10 seconds). In mod_proxy_balancer, it does not forward requests to the worker until the "retry" timeout passes. However, luckily, mod_proxy_balancer has "forcerecovery" setting (On by default; this parameter can force the immediate recovery of all workers without considering the retry parameter of the workers if all workers of a balancer are in error state.). So, unlike mod_cluster, mod_proxy_balancer does not result in responding "503 Service Unavailable". An attacker could use this behavior to send a malicious request and trigger server errors, resulting in DoS (denial of service). This flaw was fixed in Undertow 2.2.19.Final, Undertow 2.3.0.Alpha2.
<p>Publish Date: 2022-08-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2053>CVE-2022-2053</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-95rf-557x-44g5">https://github.com/advisories/GHSA-95rf-557x-44g5</a></p>
<p>Release Date: 2022-08-05</p>
<p>Fix Resolution: io.undertow:undertow-core:2.2.19.Final</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_automation
|
spring boot starter undertow jar vulnerabilities highest severity is vulnerable library spring boot starter undertow jar path to dependency file pom xml path to vulnerable library home wss scanner repository org jboss xnio xnio api final xnio api final jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in spring boot starter undertow version remediation available high xnio api final jar transitive n a high undertow core final jar transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library xnio api final jar the api jar of the xnio project library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org jboss xnio xnio api final xnio api final jar dependency hierarchy spring boot starter undertow jar root library undertow core final jar x xnio api final jar vulnerable library found in head commit a href found in base branch main vulnerability details a flaw was found in xnio specifically in the notifyreadclosed method the issue revealed this method was logging a message to another expected end this flaw allows an attacker to send flawed requests to a server possibly causing log contention related performance concerns or an unwanted disk fill up publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution org jboss xnio xnio api final step up your open source security game with mend cve vulnerable library undertow core final jar path to dependency file pom xml path to vulnerable library home wss scanner repository io undertow undertow core final undertow core final jar dependency hierarchy spring boot starter undertow jar root library x undertow core final jar vulnerable library found in head commit a href found in base branch main vulnerability details when a post request comes through ajp and the request exceeds the max post size limit maxentitysize undertow s ajpserverrequestconduit implementation closes a connection without sending any response to the client proxy this behavior results in that a front end proxy marking the backend worker application server as an error state and not forward requests to the worker for a while in mod cluster this continues until the next status request seconds intervals from the application server updates the server state so in the worst case it can result in all workers are in error state and mod cluster responds service unavailable for a while up to seconds in mod proxy balancer it does not forward requests to the worker until the retry timeout passes however luckily mod proxy balancer has forcerecovery setting on by default this parameter can force the immediate recovery of all workers without considering the retry parameter of the workers if all workers of a balancer are in error state so unlike mod cluster mod proxy balancer does not result in responding service unavailable an attacker could use this behavior to send a malicious request and trigger server errors resulting in dos denial of service this flaw was fixed in undertow final undertow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io undertow undertow core final step up your open source security game with mend
| 0
|
632,375
| 20,193,945,022
|
IssuesEvent
|
2022-02-11 08:55:02
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
support.mozilla.org - see bug description
|
priority-important browser-fenix engine-gecko
|
<!-- @browser: Firefox Mobile 99.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:99.0) Gecko/99.0 Firefox/99.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/99415 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/en-US/kb/how-send-crash-report-firefox-android
**Browser / Version**: Firefox Mobile 99.0
**Operating System**: Android 12
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: firefox android (night amd day) simy crashes (switches off) on particular website. https://m.locanto.co.uk/belfast/Escorts/20905/
**Steps to Reproduce**:
I simply open google searched result. In locanto website i see the page loading (a little load bar on top) then browser asked if website can store presistant data or something and i said no. Then it chrashed. I relauched but this time said yes. Still crashed. Then installed night version of ffox in and hapoens the same. On chrome this website works. Crash happens when load bar loads nearly to end like 85% or so.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/2/c616436d-dd76-48b6-8b58-db7f1f3e518d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220209095640</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/2/975499cf-dd65-49d6-a285-7e4087144c80)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
support.mozilla.org - see bug description - <!-- @browser: Firefox Mobile 99.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:99.0) Gecko/99.0 Firefox/99.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/99415 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/en-US/kb/how-send-crash-report-firefox-android
**Browser / Version**: Firefox Mobile 99.0
**Operating System**: Android 12
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: firefox android (night amd day) simy crashes (switches off) on particular website. https://m.locanto.co.uk/belfast/Escorts/20905/
**Steps to Reproduce**:
I simply open google searched result. In locanto website i see the page loading (a little load bar on top) then browser asked if website can store presistant data or something and i said no. Then it chrashed. I relauched but this time said yes. Still crashed. Then installed night version of ffox in and hapoens the same. On chrome this website works. Crash happens when load bar loads nearly to end like 85% or so.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/2/c616436d-dd76-48b6-8b58-db7f1f3e518d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220209095640</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/2/975499cf-dd65-49d6-a285-7e4087144c80)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_automation
|
support mozilla org see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description firefox android night amd day simy crashes switches off on particular website steps to reproduce i simply open google searched result in locanto website i see the page loading a little load bar on top then browser asked if website can store presistant data or something and i said no then it chrashed i relauched but this time said yes still crashed then installed night version of ffox in and hapoens the same on chrome this website works crash happens when load bar loads nearly to end like or so view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
2,077
| 11,355,873,528
|
IssuesEvent
|
2020-01-24 21:08:08
|
kedacore/keda-scaler-durable-functions
|
https://api.github.com/repos/kedacore/keda-scaler-durable-functions
|
opened
|
Release Pipeline for the helm chart
|
automation priority-low
|
As a developer, I want to have a release pipeline for the helm chart so that developer can deploy new version very quickly and safety.
### Success Criteria
- [ ] Deploy new helm chart with a new version tag.
- [ ] Upload new docker version to DockerHub
- [ ] (Optional) Automate to create Release Note
|
1.0
|
Release Pipeline for the helm chart - As a developer, I want to have a release pipeline for the helm chart so that developer can deploy new version very quickly and safety.
### Success Criteria
- [ ] Deploy new helm chart with a new version tag.
- [ ] Upload new docker version to DockerHub
- [ ] (Optional) Automate to create Release Note
|
automation
|
release pipeline for the helm chart as a developer i want to have a release pipeline for the helm chart so that developer can deploy new version very quickly and safety success criteria deploy new helm chart with a new version tag upload new docker version to dockerhub optional automate to create release note
| 1
|
156,594
| 19,901,497,497
|
IssuesEvent
|
2022-01-25 08:27:16
|
kedacore/keda
|
https://api.github.com/repos/kedacore/keda
|
opened
|
CVE-2021-37713 (High) detected in tar-6.1.0.tgz
|
security vulnerability
|
## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /tests/package.json</p>
<p>Path to vulnerable library: /tests/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- client-node-0.15.0.tgz (Root Library)
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kedacore/keda/commit/4213ed86dc859b83c4f126853835fab3dc987b5d">4213ed86dc859b83c4f126853835fab3dc987b5d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-37713 (High) detected in tar-6.1.0.tgz - ## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /tests/package.json</p>
<p>Path to vulnerable library: /tests/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- client-node-0.15.0.tgz (Root Library)
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kedacore/keda/commit/4213ed86dc859b83c4f126853835fab3dc987b5d">4213ed86dc859b83c4f126853835fab3dc987b5d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_automation
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file tests package json path to vulnerable library tests node modules tar package json dependency hierarchy client node tgz root library x tar tgz vulnerable library found in head commit a href found in base branch main vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted this is in part accomplished by sanitizing absolute paths of entries within the archive skipping archive entries that contain path portions and resolving the sanitized paths against the extraction target directory this logic was insufficient on windows systems when extracting tar files that contained a path that was not an absolute path but specified a drive letter different from the extraction target such as c some path if the drive letter does not match the extraction target for example d extraction dir then the result of path resolve extractiondirectory entrypath would resolve against the current working directory on the c drive rather than the extraction target directory additionally a portion of the path could occur immediately after the drive letter such as c foo and was not properly sanitized by the logic that checked for within the normalized and split portions of the path this only affects users of node tar on windows systems these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar there is no reasonable way to work around this issue without performing the same path normalization procedures that node tar now does users are encouraged to upgrade to the latest patched versions of node tar rather than attempt to sanitize paths themselves publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource
| 0
|
84,471
| 24,319,310,577
|
IssuesEvent
|
2022-09-30 09:17:58
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
opened
|
[Task]: POC for Draw to add widget on Canvas
|
UI Builders Pod Task
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
- [ ] Select widget from explorer to enter drawing mode
- [ ] Draw on Main canvas to add widget
- [ ] Make sure Drawing sticks to the grid instead of in between the grids
- [ ] Identify the canvas Id being drawn to enable drawing on other canvas type widgets
- [ ] Handle collisions, ( either by stop drawing or by reflowing other widgets )
|
1.0
|
[Task]: POC for Draw to add widget on Canvas - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
- [ ] Select widget from explorer to enter drawing mode
- [ ] Draw on Main canvas to add widget
- [ ] Make sure Drawing sticks to the grid instead of in between the grids
- [ ] Identify the canvas Id being drawn to enable drawing on other canvas type widgets
- [ ] Handle collisions, ( either by stop drawing or by reflowing other widgets )
|
non_automation
|
poc for draw to add widget on canvas is there an existing issue for this i have searched the existing issues subtasks select widget from explorer to enter drawing mode draw on main canvas to add widget make sure drawing sticks to the grid instead of in between the grids identify the canvas id being drawn to enable drawing on other canvas type widgets handle collisions either by stop drawing or by reflowing other widgets
| 0
|
809,140
| 30,176,474,803
|
IssuesEvent
|
2023-07-04 05:20:56
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.google.com - see bug description
|
priority-critical browser-focus-geckoview engine-gecko
|
<!-- @browser: Firefox Mobile 115.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:109.0) Gecko/115.0 Firefox/115.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/124370 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.google.com/search?q=adna+Maresa+Villanueva+Cedillo&client=firefox-b-m&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiWw73V4fP_AhUSH0QIHViTBmIQ_AUIBygC&biw=486&bih=955&biw=486&bih=955&biw=486&bih=955#imgrc=0fx922ikGoTfBM
**Browser / Version**: Firefox Mobile 115.0
**Operating System**: Android 12
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: the photo is personal
**Steps to Reproduce**:
The photo that appears is mine
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/7/0ed6ceef-ea2e-4458-bd3d-8dd1c96f4d10.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230629134642</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/7/f5931510-82af-417a-a6b5-09e7b399e6ab)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.google.com - see bug description - <!-- @browser: Firefox Mobile 115.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:109.0) Gecko/115.0 Firefox/115.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/124370 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.google.com/search?q=adna+Maresa+Villanueva+Cedillo&client=firefox-b-m&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiWw73V4fP_AhUSH0QIHViTBmIQ_AUIBygC&biw=486&bih=955&biw=486&bih=955&biw=486&bih=955#imgrc=0fx922ikGoTfBM
**Browser / Version**: Firefox Mobile 115.0
**Operating System**: Android 12
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: the photo is personal
**Steps to Reproduce**:
The photo that appears is mine
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/7/0ed6ceef-ea2e-4458-bd3d-8dd1c96f4d10.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230629134642</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/7/f5931510-82af-417a-a6b5-09e7b399e6ab)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_automation
|
see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description the photo is personal steps to reproduce the photo that appears is mine view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
2,433
| 11,949,900,258
|
IssuesEvent
|
2020-04-03 14:22:51
|
elastic/opbeans-loadgen
|
https://api.github.com/repos/elastic/opbeans-loadgen
|
opened
|
Add post request call for the opbeans-ruby project
|
automation
|
The opbeans-ruby app has a post endpoint `api/orders#create`
Currently, there is a secario for sending a request to a post endpoint for python and go, but not for Ruby. ([ref here](https://github.com/elastic/opbeans-loadgen/blob/8435d472ac515fa13505fd117397a16fa1436027/molotov_scenarios.py#L113-L136))
I'd like to suggest that the Ruby post endpoint be requested as well. I was interested in this when testing the `capture_body` config option with `apm-integration-testing`.
|
1.0
|
Add post request call for the opbeans-ruby project - The opbeans-ruby app has a post endpoint `api/orders#create`
Currently, there is a secario for sending a request to a post endpoint for python and go, but not for Ruby. ([ref here](https://github.com/elastic/opbeans-loadgen/blob/8435d472ac515fa13505fd117397a16fa1436027/molotov_scenarios.py#L113-L136))
I'd like to suggest that the Ruby post endpoint be requested as well. I was interested in this when testing the `capture_body` config option with `apm-integration-testing`.
|
automation
|
add post request call for the opbeans ruby project the opbeans ruby app has a post endpoint api orders create currently there is a secario for sending a request to a post endpoint for python and go but not for ruby i d like to suggest that the ruby post endpoint be requested as well i was interested in this when testing the capture body config option with apm integration testing
| 1
|
7,730
| 25,490,482,778
|
IssuesEvent
|
2022-11-27 01:26:44
|
ccodwg/Covid19CanadaBot
|
https://api.github.com/repos/ccodwg/Covid19CanadaBot
|
closed
|
Improvements to error logging
|
enhancement data-validation automation
|
- [x] Ensure failed downloads are explicitly logged in an easy-to-understand way
- [x] ~~When an error for a particular dataset/value is logged, should have a script that checks if a) there is already a value defined for today (and whether than value is automated or manual) and b) if there is a value, if that value is different from the previous day's value (e.g., maybe the previously logged value was pre-update). This will help make the error log more actionable.~~
Note that these improvements may require more than a simple sink function as is currently implemented.
|
1.0
|
Improvements to error logging - - [x] Ensure failed downloads are explicitly logged in an easy-to-understand way
- [x] ~~When an error for a particular dataset/value is logged, should have a script that checks if a) there is already a value defined for today (and whether than value is automated or manual) and b) if there is a value, if that value is different from the previous day's value (e.g., maybe the previously logged value was pre-update). This will help make the error log more actionable.~~
Note that these improvements may require more than a simple sink function as is currently implemented.
|
automation
|
improvements to error logging ensure failed downloads are explicitly logged in an easy to understand way when an error for a particular dataset value is logged should have a script that checks if a there is already a value defined for today and whether than value is automated or manual and b if there is a value if that value is different from the previous day s value e g maybe the previously logged value was pre update this will help make the error log more actionable note that these improvements may require more than a simple sink function as is currently implemented
| 1
|
12,038
| 14,194,137,431
|
IssuesEvent
|
2020-11-15 01:46:56
|
oshi/oshi
|
https://api.github.com/repos/oshi/oshi
|
closed
|
/proc/cpuinfo on Orange Pi
|
compatibility confirmed bug good first issue
|
On my Orange Pi One, running Armbian, the contents of `/proc/cpuinfo` is:
```
Processor : ARMv7 Processor rev 5 (v7l)
processor : 0
...
```
(https://pastebin.com/FNJqgy29)
The CPU info is not listed after the column `model name` but after the column `Processor` (with a capital P). As far as I can see, only one of either will be present, but not both.
Extra information:
```
$ uname -a
Linux orangepione 3.4.113-sun8i #2 SMP PREEMPT Wed May 8 15:09:43 CEST 2019 armv7l armv7l armv7l GNU/Linux
```
|
True
|
/proc/cpuinfo on Orange Pi - On my Orange Pi One, running Armbian, the contents of `/proc/cpuinfo` is:
```
Processor : ARMv7 Processor rev 5 (v7l)
processor : 0
...
```
(https://pastebin.com/FNJqgy29)
The CPU info is not listed after the column `model name` but after the column `Processor` (with a capital P). As far as I can see, only one of either will be present, but not both.
Extra information:
```
$ uname -a
Linux orangepione 3.4.113-sun8i #2 SMP PREEMPT Wed May 8 15:09:43 CEST 2019 armv7l armv7l armv7l GNU/Linux
```
|
non_automation
|
proc cpuinfo on orange pi on my orange pi one running armbian the contents of proc cpuinfo is processor processor rev processor the cpu info is not listed after the column model name but after the column processor with a capital p as far as i can see only one of either will be present but not both extra information uname a linux orangepione smp preempt wed may cest gnu linux
| 0
|
48,227
| 12,177,930,718
|
IssuesEvent
|
2020-04-28 08:09:57
|
tsunamayo/Starship-EVO
|
https://api.github.com/repos/tsunamayo/Starship-EVO
|
opened
|
[New build - DEFAULT] 20w18a: Combat balance, Hotfixes
|
Build Release Note
|
Combat re-balancing following tester feedback. More info in #1866.
=> I have implemented a new Patch note feature with a link to this post. Which means that from there-on you will have to wait a few minutes for me to upload the build on Steam!
Change and Features:
- Shield capacity and recharge rate decreased
- Laser range and velocity increased
- Laser spread decreased
- Gatling barrel now fires continuously. Values tweaked
- Recoil barrel now fires bigger shot less frequently.
- Side addons are now consuming heat only
- Volley mechanism removed.
- Charge side addon now fires bigger shot less frequently.
- Patch Notes on Title Menu
Bug Fixes:
#1881 Client cant use starter block.
#1887 Esc key bugged in Options Menu
#1873 Dropdown menus turn white in certain resolutions
#1870 Game Volume setting not applied on Codex sound
#1865 Beam weapon config screen misses some info
#1833 Small laser fire non-stop
#1860 Client cant use starter block.
#1783 Large wedge children entity issue.
#1878 Not Hull kill for ship not piloted
#1862 Mouse sensitivity to 0 on new player
#1882 Texture alignment on Shield and Reactor
#1825 Turret shoot at target not selected
|
1.0
|
[New build - DEFAULT] 20w18a: Combat balance, Hotfixes - Combat re-balancing following tester feedback. More info in #1866.
=> I have implemented a new Patch note feature with a link to this post. Which means that from there-on you will have to wait a few minutes for me to upload the build on Steam!
Change and Features:
- Shield capacity and recharge rate decreased
- Laser range and velocity increased
- Laser spread decreased
- Gatling barrel now fires continuously. Values tweaked
- Recoil barrel now fires bigger shot less frequently.
- Side addons are now consuming heat only
- Volley mechanism removed.
- Charge side addon now fires bigger shot less frequently.
- Patch Notes on Title Menu
Bug Fixes:
#1881 Client cant use starter block.
#1887 Esc key bugged in Options Menu
#1873 Dropdown menus turn white in certain resolutions
#1870 Game Volume setting not applied on Codex sound
#1865 Beam weapon config screen misses some info
#1833 Small laser fire non-stop
#1860 Client cant use starter block.
#1783 Large wedge children entity issue.
#1878 Not Hull kill for ship not piloted
#1862 Mouse sensitivity to 0 on new player
#1882 Texture alignment on Shield and Reactor
#1825 Turret shoot at target not selected
|
non_automation
|
combat balance hotfixes combat re balancing following tester feedback more info in i have implemented a new patch note feature with a link to this post which means that from there on you will have to wait a few minutes for me to upload the build on steam change and features shield capacity and recharge rate decreased laser range and velocity increased laser spread decreased gatling barrel now fires continuously values tweaked recoil barrel now fires bigger shot less frequently side addons are now consuming heat only volley mechanism removed charge side addon now fires bigger shot less frequently patch notes on title menu bug fixes client cant use starter block esc key bugged in options menu dropdown menus turn white in certain resolutions game volume setting not applied on codex sound beam weapon config screen misses some info small laser fire non stop client cant use starter block large wedge children entity issue not hull kill for ship not piloted mouse sensitivity to on new player texture alignment on shield and reactor turret shoot at target not selected
| 0
|
3,989
| 6,917,720,900
|
IssuesEvent
|
2017-11-29 09:36:04
|
nerdalize/nerd
|
https://api.github.com/repos/nerdalize/nerd
|
opened
|
Allow testing of the CLI against configurable Kubernetes versions
|
Dev Process
|
We want to make the minikube Kubernetes version we test against configurable through a environment variable
## Expected Behavior
`KUBE_VERSION=1.8.0; ./make.sh test` should switch to testing against a 1.8.0 minikube vm
## Actual Behavior
It tests against a hardcoded Kubernetes version (1.7.5)
## Steps to Reproduce the Problem
1. `./make.sh test`
## Specifications
- Version 0.6.0 dev branch
- Platform: MacOS
- Subsystem: High Sierra
## Anything else we need to know?
We are probably not able to provide full support but we should be able to test and document what kube versions are supported
|
1.0
|
Allow testing of the CLI against configurable Kubernetes versions - We want to make the minikube Kubernetes version we test against configurable through a environment variable
## Expected Behavior
`KUBE_VERSION=1.8.0; ./make.sh test` should switch to testing against a 1.8.0 minikube vm
## Actual Behavior
It tests against a hardcoded Kubernetes version (1.7.5)
## Steps to Reproduce the Problem
1. `./make.sh test`
## Specifications
- Version 0.6.0 dev branch
- Platform: MacOS
- Subsystem: High Sierra
## Anything else we need to know?
We are probably not able to provide full support but we should be able to test and document what kube versions are supported
|
non_automation
|
allow testing of the cli against configurable kubernetes versions we want to make the minikube kubernetes version we test against configurable through a environment variable expected behavior kube version make sh test should switch to testing against a minikube vm actual behavior it tests against a hardcoded kubernetes version steps to reproduce the problem make sh test specifications version dev branch platform macos subsystem high sierra anything else we need to know we are probably not able to provide full support but we should be able to test and document what kube versions are supported
| 0
|
68,404
| 21,664,135,192
|
IssuesEvent
|
2022-05-07 00:26:00
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Custom user status messages form reloads page on submit
|
T-Defect S-Minor A-Custom-Status O-Occasional Z-Labs
|
### Steps to reproduce
1. Enable the "Custom user status" feature in "Labs"
2. Click on your profile picture in the top left corner.
3. Type out a message in the "Set a new status" field.
4. Press the Return / Enter button on the keyboard.

### Outcome
#### What did you expect?
I expected the same action to fire as would have happened if I press the "Set status" button.
#### What happened instead?
Page reloaded and my status was not saved
### Operating system
Microsoft Windows 10 LTSC
### Browser information
Firefox 97
### URL for webapp
_No response_
### Application version
Element version: 8e480c72d386-react-8e480c72d386-js-8e480c72d386
### Homeserver
Synapse 1.52.0
### Will you send logs?
No
|
1.0
|
Custom user status messages form reloads page on submit - ### Steps to reproduce
1. Enable the "Custom user status" feature in "Labs"
2. Click on your profile picture in the top left corner.
3. Type out a message in the "Set a new status" field.
4. Press the Return / Enter button on the keyboard.

### Outcome
#### What did you expect?
I expected the same action to fire as would have happened if I press the "Set status" button.
#### What happened instead?
Page reloaded and my status was not saved
### Operating system
Microsoft Windows 10 LTSC
### Browser information
Firefox 97
### URL for webapp
_No response_
### Application version
Element version: 8e480c72d386-react-8e480c72d386-js-8e480c72d386
### Homeserver
Synapse 1.52.0
### Will you send logs?
No
|
non_automation
|
custom user status messages form reloads page on submit steps to reproduce enable the custom user status feature in labs click on your profile picture in the top left corner type out a message in the set a new status field press the return enter button on the keyboard outcome what did you expect i expected the same action to fire as would have happened if i press the set status button what happened instead page reloaded and my status was not saved operating system microsoft windows ltsc browser information firefox url for webapp no response application version element version react js homeserver synapse will you send logs no
| 0
|
142,654
| 11,488,651,444
|
IssuesEvent
|
2020-02-11 14:16:20
|
unfoldingWord/translationCore
|
https://api.github.com/repos/unfoldingWord/translationCore
|
closed
|
Version Number missing from the Win-64 Build
|
QA/ElsyTested QA/KozTested QA/Pass
|
Version Number missing from Windows-64 build for the latest v 2.1.0(c42350b).


|
2.0
|
Version Number missing from the Win-64 Build - Version Number missing from Windows-64 build for the latest v 2.1.0(c42350b).


|
non_automation
|
version number missing from the win build version number missing from windows build for the latest v
| 0
|
436,468
| 30,553,684,401
|
IssuesEvent
|
2023-07-20 10:11:42
|
Perl/perl5
|
https://api.github.com/repos/Perl/perl5
|
opened
|
[doc] use v5.36, only partially enables warnings
|
Needs Triage documentation
|
perl5360delta says:
Furthermore, use v5.36 will also enable warnings as if you'd written use warnings.
but the 'once' warning is an exception:
perl -e'use v5.36; no strict; print $i'
Use of uninitialized value $i in print at -e line 1.
vs:
perl -e'use v5.36; no strict; use warnings; print $i'
Name "main::i" used only once: possible typo at -e line 1.
Use of uninitialized value $i in print at -e line 1.
|
1.0
|
[doc] use v5.36, only partially enables warnings - perl5360delta says:
Furthermore, use v5.36 will also enable warnings as if you'd written use warnings.
but the 'once' warning is an exception:
perl -e'use v5.36; no strict; print $i'
Use of uninitialized value $i in print at -e line 1.
vs:
perl -e'use v5.36; no strict; use warnings; print $i'
Name "main::i" used only once: possible typo at -e line 1.
Use of uninitialized value $i in print at -e line 1.
|
non_automation
|
use only partially enables warnings says furthermore use will also enable warnings as if you d written use warnings but the once warning is an exception perl e use no strict print i use of uninitialized value i in print at e line vs perl e use no strict use warnings print i name main i used only once possible typo at e line use of uninitialized value i in print at e line
| 0
|
3,019
| 12,991,044,208
|
IssuesEvent
|
2020-07-23 02:11:40
|
chavarera/python-mini-projects
|
https://api.github.com/repos/chavarera/python-mini-projects
|
opened
|
Create application to save Google keep note to CSV file and vice versa
|
Automation
|
**problem statement**
Create application to save Google keep note to CSV file and vice versa.
|
1.0
|
Create application to save Google keep note to CSV file and vice versa - **problem statement**
Create application to save Google keep note to CSV file and vice versa.
|
automation
|
create application to save google keep note to csv file and vice versa problem statement create application to save google keep note to csv file and vice versa
| 1
|
7,971
| 25,950,714,418
|
IssuesEvent
|
2022-12-17 15:06:28
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
link to inventory documentation from azure vm -> inventory blade is broken
|
automation/svc triaged cxp doc-bug change-inventory-management/subsvc Pri2
|
In Azure VM, under the inventory blade, there is a link to documentation that is broken. The link URL is: https://learn.microsoft.com/azure/automation/change-tracking/overview-monitoring-agent"
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 38d9c239-5c29-7164-f604-0e2aa8cbffa1
* Version Independent ID: af36f9b1-5a21-56ee-4651-a4d61af0883c
* Content: [Azure Automation Change Tracking and Inventory overview](https://learn.microsoft.com/en-us/azure/automation/change-tracking/overview?tabs=python-2)
* Content Source: [articles/automation/change-tracking/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/change-tracking/overview.md)
* Service: **automation**
* Sub-service: **change-inventory-management**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
link to inventory documentation from azure vm -> inventory blade is broken - In Azure VM, under the inventory blade, there is a link to documentation that is broken. The link URL is: https://learn.microsoft.com/azure/automation/change-tracking/overview-monitoring-agent"
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 38d9c239-5c29-7164-f604-0e2aa8cbffa1
* Version Independent ID: af36f9b1-5a21-56ee-4651-a4d61af0883c
* Content: [Azure Automation Change Tracking and Inventory overview](https://learn.microsoft.com/en-us/azure/automation/change-tracking/overview?tabs=python-2)
* Content Source: [articles/automation/change-tracking/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/change-tracking/overview.md)
* Service: **automation**
* Sub-service: **change-inventory-management**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
automation
|
link to inventory documentation from azure vm inventory blade is broken in azure vm under the inventory blade there is a link to documentation that is broken the link url is document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service change inventory management github login snehasudhirg microsoft alias sudhirsneha
| 1
|
4,777
| 17,456,050,197
|
IssuesEvent
|
2021-08-06 01:27:43
|
OpenZeppelin/openzeppelin-contracts
|
https://api.github.com/repos/OpenZeppelin/openzeppelin-contracts
|
closed
|
Even more fine-grained tests split for parallelism in CircleCI
|
automation
|
- Instruction: https://tech.sumone.com.br/parallel-tests-on-circleci-5236b8336031.
As a follow up of #1841
|
1.0
|
Even more fine-grained tests split for parallelism in CircleCI - - Instruction: https://tech.sumone.com.br/parallel-tests-on-circleci-5236b8336031.
As a follow up of #1841
|
automation
|
even more fine grained tests split for parallelism in circleci instruction as a follow up of
| 1
|
499,598
| 14,451,029,499
|
IssuesEvent
|
2020-12-08 10:23:41
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
meet.google.com - see bug description
|
browser-firefox engine-gecko priority-critical status-needsinfo-oana
|
<!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63054 -->
**URL**: https://meet.google.com
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: The site is pretty slow in Firefox but in chromium ther is no issue
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
meet.google.com - see bug description - <!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63054 -->
**URL**: https://meet.google.com
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: The site is pretty slow in Firefox but in chromium ther is no issue
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_automation
|
meet google com see bug description url browser version firefox operating system windows tested another browser yes edge problem type something else description the site is pretty slow in firefox but in chromium ther is no issue steps to reproduce browser configuration none from with ❤️
| 0
|
46,662
| 11,866,052,983
|
IssuesEvent
|
2020-03-26 02:26:26
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
opened
|
autoconf and other packages on ppc64le
|
build-error
|
### Spack version
<!-- Add the output to the command below -->
```console
[kai@longhorn ~]$ spack --version
```
### Steps to reproduce the issue
```console
[kai@longhorn ~]$ spack spec autoconf
Input spec
--------------------------------
autoconf
Concretized
--------------------------------
autoconf@2.69%gcc@7.3.0 arch=linux-rhel7-power9le
^m4@1.4.18%gcc@7.3.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-rhel7-power9le
^libsigsegv@2.12%gcc@7.3.0 arch=linux-rhel7-power9le
^perl@5.30.1%gcc@7.3.0+cpanm+shared+threads arch=linux-rhel7-power9le
^gdbm@1.18.1%gcc@7.3.0 arch=linux-rhel7-power9le
^readline@8.0%gcc@7.3.0 arch=linux-rhel7-power9le
^ncurses@6.2%gcc@7.3.0~symlinks+termlib arch=linux-rhel7-power9le
^pkgconf@1.6.3%gcc@7.3.0 arch=linux-rhel7-power9le
[kai@longhorn ~]$ spack install autoconf
[...]
See build log for details:
/tmp/kai/spack-stage/spack-stage-autoconf-2.69-ftyunbfd663jlfj24legpgewbdsjygse/spack-build-out.txt
Traceback (most recent call last):
File "/home/01537/kai/build/spack/lib/spack/spack/build_environment.py", line 801, in child_process
return_value = function()
File "/home/01537/kai/build/spack/lib/spack/spack/installer.py", line 1113, in build_process
phase(pkg.spec, pkg.prefix)
File "/home/01537/kai/build/spack/lib/spack/spack/package.py", line 112, in phase_wrapper
callback(instance)
File "/home/01537/kai/build/spack/lib/spack/spack/build_systems/autotools.py", line 160, in _do_patch_config_guess
raise RuntimeError('Failed to find suitable config.guess')
RuntimeError: Failed to find suitable config.guess
```
### Platform and user environment
```console
[kai@longhorn ~]$ uname -a
Linux login1.longhorn.tacc.utexas.edu 4.14.0-115.10.1.el7a.ppc64le #1 SMP Wed Jun 26 09:32:17 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux
[kai@longhorn ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
```
This machine (longhorn) is similar to Summit, except that it has very little software installed.
I've tracked the issue down already: When trying to install `autoconf`, spack checks whether the included `config.guess` works. It does not, because it does not recognize what should be `powerpc64le-unknown-linux-gnu`. So spack is looking for a newer `config.guess` to replace the included one. On Summit, it'll find one in `/usr/share/automake-x.yy`, and things work. On this machine, automake is not installed (at least not in the standard location). If the package depended on `automake`, it'd look at the spack-installed automake to find a newer `config.guess`. That doesn't apply here.
I don't have access to install anything in `/usr/share`, so the question is, how can I provide an updated `config.guess`? I've hacked around it for now by modifying spack's source so that it searches in my home directory, but clearly that's not a sustainable solution.
[This problem isn't really limited to just the autoconf package, either, but any package that ships with an outdated `config.guess`, which doesn't depend on `automake`: `libsodium` was another manifestation of the same issue.]
One way to make it possible to work around this problem would be for spack to search for `config.guess` in a user-specified location, though I still don't like it, since the user would still have to go find an appropriate `config.guess`, and point spack to it. I'd much rather have something that works out of the box, but I don't have any good idea on how to get it done.
|
1.0
|
autoconf and other packages on ppc64le -
### Spack version
<!-- Add the output to the command below -->
```console
[kai@longhorn ~]$ spack --version
```
### Steps to reproduce the issue
```console
[kai@longhorn ~]$ spack spec autoconf
Input spec
--------------------------------
autoconf
Concretized
--------------------------------
autoconf@2.69%gcc@7.3.0 arch=linux-rhel7-power9le
^m4@1.4.18%gcc@7.3.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-rhel7-power9le
^libsigsegv@2.12%gcc@7.3.0 arch=linux-rhel7-power9le
^perl@5.30.1%gcc@7.3.0+cpanm+shared+threads arch=linux-rhel7-power9le
^gdbm@1.18.1%gcc@7.3.0 arch=linux-rhel7-power9le
^readline@8.0%gcc@7.3.0 arch=linux-rhel7-power9le
^ncurses@6.2%gcc@7.3.0~symlinks+termlib arch=linux-rhel7-power9le
^pkgconf@1.6.3%gcc@7.3.0 arch=linux-rhel7-power9le
[kai@longhorn ~]$ spack install autoconf
[...]
See build log for details:
/tmp/kai/spack-stage/spack-stage-autoconf-2.69-ftyunbfd663jlfj24legpgewbdsjygse/spack-build-out.txt
Traceback (most recent call last):
File "/home/01537/kai/build/spack/lib/spack/spack/build_environment.py", line 801, in child_process
return_value = function()
File "/home/01537/kai/build/spack/lib/spack/spack/installer.py", line 1113, in build_process
phase(pkg.spec, pkg.prefix)
File "/home/01537/kai/build/spack/lib/spack/spack/package.py", line 112, in phase_wrapper
callback(instance)
File "/home/01537/kai/build/spack/lib/spack/spack/build_systems/autotools.py", line 160, in _do_patch_config_guess
raise RuntimeError('Failed to find suitable config.guess')
RuntimeError: Failed to find suitable config.guess
```
### Platform and user environment
```console
[kai@longhorn ~]$ uname -a
Linux login1.longhorn.tacc.utexas.edu 4.14.0-115.10.1.el7a.ppc64le #1 SMP Wed Jun 26 09:32:17 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux
[kai@longhorn ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
```
This machine (longhorn) is similar to Summit, except that it has very little software installed.
I've tracked the issue down already: When trying to install `autoconf`, spack checks whether the included `config.guess` works. It does not, because it does not recognize what should be `powerpc64le-unknown-linux-gnu`. So spack is looking for a newer `config.guess` to replace the included one. On Summit, it'll find one in `/usr/share/automake-x.yy`, and things work. On this machine, automake is not installed (at least not in the standard location). If the package depended on `automake`, it'd look at the spack-installed automake to find a newer `config.guess`. That doesn't apply here.
I don't have access to install anything in `/usr/share`, so the question is, how can I provide an updated `config.guess`? I've hacked around it for now by modifying spack's source so that it searches in my home directory, but clearly that's not a sustainable solution.
[This problem isn't really limited to just the autoconf package, either, but any package that ships with an outdated `config.guess`, which doesn't depend on `automake`: `libsodium` was another manifestation of the same issue.]
One way to make it possible to work around this problem would be for spack to search for `config.guess` in a user-specified location, though I still don't like it, since the user would still have to go find an appropriate `config.guess`, and point spack to it. I'd much rather have something that works out of the box, but I don't have any good idea on how to get it done.
|
non_automation
|
autoconf and other packages on spack version console spack version steps to reproduce the issue console spack spec autoconf input spec autoconf concretized autoconf gcc arch linux gcc patches sigsegv arch linux libsigsegv gcc arch linux perl gcc cpanm shared threads arch linux gdbm gcc arch linux readline gcc arch linux ncurses gcc symlinks termlib arch linux pkgconf gcc arch linux spack install autoconf see build log for details tmp kai spack stage spack stage autoconf spack build out txt traceback most recent call last file home kai build spack lib spack spack build environment py line in child process return value function file home kai build spack lib spack spack installer py line in build process phase pkg spec pkg prefix file home kai build spack lib spack spack package py line in phase wrapper callback instance file home kai build spack lib spack spack build systems autotools py line in do patch config guess raise runtimeerror failed to find suitable config guess runtimeerror failed to find suitable config guess platform and user environment console uname a linux longhorn tacc utexas edu smp wed jun utc gnu linux cat etc redhat release red hat enterprise linux server release maipo this machine longhorn is similar to summit except that it has very little software installed i ve tracked the issue down already when trying to install autoconf spack checks whether the included config guess works it does not because it does not recognize what should be unknown linux gnu so spack is looking for a newer config guess to replace the included one on summit it ll find one in usr share automake x yy and things work on this machine automake is not installed at least not in the standard location if the package depended on automake it d look at the spack installed automake to find a newer config guess that doesn t apply here i don t have access to install anything in usr share so the question is how can i provide an updated config guess i ve hacked around it for now by modifying spack s source so that it searches in my home directory but clearly that s not a sustainable solution one way to make it possible to work around this problem would be for spack to search for config guess in a user specified location though i still don t like it since the user would still have to go find an appropriate config guess and point spack to it i d much rather have something that works out of the box but i don t have any good idea on how to get it done
| 0
|
60,106
| 7,319,319,807
|
IssuesEvent
|
2018-03-02 00:11:26
|
MetaMask/metamask-extension
|
https://api.github.com/repos/MetaMask/metamask-extension
|
closed
|
Feature request: undo send
|
L3-ui T1-enhancement T3-discussion T4-needsdesign
|
Wallets in a way are much like an email client. It's a way for people to communicate, only with money instead of text.
People make mistakes. And these mistakes can be very costly. While an email gone wrong can have bad consequences, sending a huge amount of ETH because of a momentary distraction or a "fat finger" can have a devastating effect for an individual.
Gmail has introduced [undo send](https://support.google.com/mail/answer/2819488?co=GENIE.Platform%3DDesktop&hl=en) a few years ago. Would Metamask consider adding something similar?
|
1.0
|
Feature request: undo send - Wallets in a way are much like an email client. It's a way for people to communicate, only with money instead of text.
People make mistakes. And these mistakes can be very costly. While an email gone wrong can have bad consequences, sending a huge amount of ETH because of a momentary distraction or a "fat finger" can have a devastating effect for an individual.
Gmail has introduced [undo send](https://support.google.com/mail/answer/2819488?co=GENIE.Platform%3DDesktop&hl=en) a few years ago. Would Metamask consider adding something similar?
|
non_automation
|
feature request undo send wallets in a way are much like an email client it s a way for people to communicate only with money instead of text people make mistakes and these mistakes can be very costly while an email gone wrong can have bad consequences sending a huge amount of eth because of a momentary distraction or a fat finger can have a devastating effect for an individual gmail has introduced a few years ago would metamask consider adding something similar
| 0
|
191,331
| 6,828,149,501
|
IssuesEvent
|
2017-11-08 19:25:00
|
GoogleCloudPlatform/google-cloud-ruby
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-ruby
|
closed
|
Add max_staleness/bounded_staleness to Client#snapshot method
|
api: spanner priority: p2 type: feature request
|
I'm not currently able to define a max_staleness/bounded_staleness when creating a snapshot as a parameter, but [Python](https://googlecloudplatform.github.io/google-cloud-python/latest/spanner/snapshot-api.html) for example does include this parameter in their snapshot method. This parameter is surfaced in the `Client#read` method, and I think it's better understood if surfaced as part of `Client#snapshot` method as well.
The request is to add a parameter for max_staleness/bounded_staleness to the `Client#snapshot` method.
I'm open to interpretation if there's reasons for not adding this parameter to `Client#snapshot`.
|
1.0
|
Add max_staleness/bounded_staleness to Client#snapshot method - I'm not currently able to define a max_staleness/bounded_staleness when creating a snapshot as a parameter, but [Python](https://googlecloudplatform.github.io/google-cloud-python/latest/spanner/snapshot-api.html) for example does include this parameter in their snapshot method. This parameter is surfaced in the `Client#read` method, and I think it's better understood if surfaced as part of `Client#snapshot` method as well.
The request is to add a parameter for max_staleness/bounded_staleness to the `Client#snapshot` method.
I'm open to interpretation if there's reasons for not adding this parameter to `Client#snapshot`.
|
non_automation
|
add max staleness bounded staleness to client snapshot method i m not currently able to define a max staleness bounded staleness when creating a snapshot as a parameter but for example does include this parameter in their snapshot method this parameter is surfaced in the client read method and i think it s better understood if surfaced as part of client snapshot method as well the request is to add a parameter for max staleness bounded staleness to the client snapshot method i m open to interpretation if there s reasons for not adding this parameter to client snapshot
| 0
|
86
| 3,519,422,318
|
IssuesEvent
|
2016-01-12 16:47:10
|
blackbaud/skyux
|
https://api.github.com/repos/blackbaud/skyux
|
closed
|
Clean up Selenium screenshots after visual test run
|
automation
|
There are some leftover screenshots of what appears to be the entire page after each test run. These don't serve any purpose and should be deleted after the test run.
|
1.0
|
Clean up Selenium screenshots after visual test run - There are some leftover screenshots of what appears to be the entire page after each test run. These don't serve any purpose and should be deleted after the test run.
|
automation
|
clean up selenium screenshots after visual test run there are some leftover screenshots of what appears to be the entire page after each test run these don t serve any purpose and should be deleted after the test run
| 1
|
160,790
| 6,102,586,658
|
IssuesEvent
|
2017-06-20 16:48:53
|
crowdAI/crowdai
|
https://api.github.com/repos/crowdAI/crowdai
|
closed
|
CrowdAI logo on mobile different
|
high priority v2
|
For some reason, the crowdAI samurAI loses his eyes on mobile...
<img width="402" alt="screen shot 2017-06-17 at 1 56 20 pm" src="https://user-images.githubusercontent.com/215057/27252691-e14b90e2-5364-11e7-94b8-61f0a3bd6339.png">
|
1.0
|
CrowdAI logo on mobile different - For some reason, the crowdAI samurAI loses his eyes on mobile...
<img width="402" alt="screen shot 2017-06-17 at 1 56 20 pm" src="https://user-images.githubusercontent.com/215057/27252691-e14b90e2-5364-11e7-94b8-61f0a3bd6339.png">
|
non_automation
|
crowdai logo on mobile different for some reason the crowdai samurai loses his eyes on mobile img width alt screen shot at pm src
| 0
|
232
| 4,839,540,951
|
IssuesEvent
|
2016-11-09 09:51:45
|
cf-tm-bot/openstack_cpi
|
https://api.github.com/repos/cf-tm-bot/openstack_cpi
|
closed
|
lifecycle terraform is always deleting/creating floating IP - Story Id: 133592885
|
accepted bug env-creation-automation pipeline
|
This seems to happen in every run: http://172.18.104.32:8080/teams/main/pipelines/bosh-openstack-cpi/jobs/lifecycle/builds/2485
---
Mirrors: [story 133592885](https://www.pivotaltracker.com/story/show/133592885) submitted on Nov 2, 2016 UTC
- **Requester**: Marco Voelz
- **Owners**: Tom Kiemes, Cornelius Schumacher
- **Estimate**: 0.0
|
1.0
|
lifecycle terraform is always deleting/creating floating IP - Story Id: 133592885 - This seems to happen in every run: http://172.18.104.32:8080/teams/main/pipelines/bosh-openstack-cpi/jobs/lifecycle/builds/2485
---
Mirrors: [story 133592885](https://www.pivotaltracker.com/story/show/133592885) submitted on Nov 2, 2016 UTC
- **Requester**: Marco Voelz
- **Owners**: Tom Kiemes, Cornelius Schumacher
- **Estimate**: 0.0
|
automation
|
lifecycle terraform is always deleting creating floating ip story id this seems to happen in every run mirrors submitted on nov utc requester marco voelz owners tom kiemes cornelius schumacher estimate
| 1
|
1,772
| 10,708,382,914
|
IssuesEvent
|
2019-10-24 19:34:15
|
openhab/openhab-core
|
https://api.github.com/repos/openhab/openhab-core
|
closed
|
[Automation] disable and enable rules
|
automation awaiting feedback
|
when i create a rule which disables another rule then the disabled rule first switches to uninitialized and then to disabled. after it gets enabled back again it is stated as active but does not execute any command within that rule. i have to save the rule manually and after that it starts working again.
that bug appeared with 2.4.0 and is still present within 2.5.0.
does anyone have a workaround for that? i am back on 2.3.0 because i was not able to get the rule engine to work properly.
|
1.0
|
[Automation] disable and enable rules - when i create a rule which disables another rule then the disabled rule first switches to uninitialized and then to disabled. after it gets enabled back again it is stated as active but does not execute any command within that rule. i have to save the rule manually and after that it starts working again.
that bug appeared with 2.4.0 and is still present within 2.5.0.
does anyone have a workaround for that? i am back on 2.3.0 because i was not able to get the rule engine to work properly.
|
automation
|
disable and enable rules when i create a rule which disables another rule then the disabled rule first switches to uninitialized and then to disabled after it gets enabled back again it is stated as active but does not execute any command within that rule i have to save the rule manually and after that it starts working again that bug appeared with and is still present within does anyone have a workaround for that i am back on because i was not able to get the rule engine to work properly
| 1
|
9,518
| 29,177,777,441
|
IssuesEvent
|
2023-05-19 09:20:14
|
rancher/dashboard
|
https://api.github.com/repos/rancher/dashboard
|
reopened
|
toleration `key` has header: `label key` for cluster / fleet agent customization fields. Expected only `key` or possibly `taint key`
|
kind/bug area/cluster [zube]: Done kind/bug-qa QA/dev-automation
|
<!--------- For bugs and general issues --------->
**Setup**
- Rancher version: v2.7-head(42f3b50)
- Rancher UI Extensions: n/a
- Browser type & version: chrome
**Describe the bug**
<!--A clear and concise description of what the bug is.-->
the text for a toleration key is currently set to `label key` which may be confusing, since tolerations are for taints, and affinity are for labels.
**To Reproduce**
<!--Steps to reproduce the behavior-->
* deploy an rke1 or rke2 cluster
* go to cluster or fleet agent customization and add a toleration
**Result**
key reads `label key`
**Expected Result**
<!--A clear and concise description of what you expected to happen.-->
`key` or possibly `taint key`
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem and/or errors in the browser's dev console -->
<img width="891" alt="Screen Shot 2023-05-16 at 10 27 58 PM" src="https://github.com/rancher/dashboard/assets/16691014/59746bdb-2190-4676-9a45-9ae0a21e30d6">
**Additional context**
<!--Add any other context about the problem here. -->
when adding a taint, the field reads as just `key`
|
1.0
|
toleration `key` has header: `label key` for cluster / fleet agent customization fields. Expected only `key` or possibly `taint key` - <!--------- For bugs and general issues --------->
**Setup**
- Rancher version: v2.7-head(42f3b50)
- Rancher UI Extensions: n/a
- Browser type & version: chrome
**Describe the bug**
<!--A clear and concise description of what the bug is.-->
the text for a toleration key is currently set to `label key` which may be confusing, since tolerations are for taints, and affinity are for labels.
**To Reproduce**
<!--Steps to reproduce the behavior-->
* deploy an rke1 or rke2 cluster
* go to cluster or fleet agent customization and add a toleration
**Result**
key reads `label key`
**Expected Result**
<!--A clear and concise description of what you expected to happen.-->
`key` or possibly `taint key`
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem and/or errors in the browser's dev console -->
<img width="891" alt="Screen Shot 2023-05-16 at 10 27 58 PM" src="https://github.com/rancher/dashboard/assets/16691014/59746bdb-2190-4676-9a45-9ae0a21e30d6">
**Additional context**
<!--Add any other context about the problem here. -->
when adding a taint, the field reads as just `key`
|
automation
|
toleration key has header label key for cluster fleet agent customization fields expected only key or possibly taint key setup rancher version head rancher ui extensions n a browser type version chrome describe the bug the text for a toleration key is currently set to label key which may be confusing since tolerations are for taints and affinity are for labels to reproduce deploy an or cluster go to cluster or fleet agent customization and add a toleration result key reads label key expected result key or possibly taint key screenshots img width alt screen shot at pm src additional context when adding a taint the field reads as just key
| 1
|
10,169
| 31,844,851,690
|
IssuesEvent
|
2023-09-14 19:05:05
|
Chunnyluny/Chunnyluny
|
https://api.github.com/repos/Chunnyluny/Chunnyluny
|
closed
|
fixing github action for automatically updating my README.md by using ./template/README.md.tpl
|
github_actions automation
|
fixing github action for automatically updating my README.md by using ./template/README.md.tpl by making a pull request instead of pushing directly onto the master branch, since the workflow is triggered on push.
so I will do it again, since I'm not sure if it is done correctly, I will do it on local master branch and do a pull request to the origin/master
|
1.0
|
fixing github action for automatically updating my README.md by using ./template/README.md.tpl - fixing github action for automatically updating my README.md by using ./template/README.md.tpl by making a pull request instead of pushing directly onto the master branch, since the workflow is triggered on push.
so I will do it again, since I'm not sure if it is done correctly, I will do it on local master branch and do a pull request to the origin/master
|
automation
|
fixing github action for automatically updating my readme md by using template readme md tpl fixing github action for automatically updating my readme md by using template readme md tpl by making a pull request instead of pushing directly onto the master branch since the workflow is triggered on push so i will do it again since i m not sure if it is done correctly i will do it on local master branch and do a pull request to the origin master
| 1
|
24,957
| 4,154,607,460
|
IssuesEvent
|
2016-06-16 12:19:52
|
xavierdidelot/ClonalOrigin
|
https://api.github.com/repos/xavierdidelot/ClonalOrigin
|
closed
|
can't understand the output file
|
auto-migrated Priority-Medium Type-Defect
|
```
Hi,
I am a bit confused checking the output file.
what's the meaning of iterations of each <outputFile> in the results file(XML)?
And some of the recombination events has an overlap sequence in different
iteration, how can we do a filter to make a reliable recombination events ?
Sorry about my English.
thx.
li
```
Original issue reported on code.google.com by `yisong...@gmail.com` on 23 Mar 2015 at 7:33
|
1.0
|
can't understand the output file - ```
Hi,
I am a bit confused checking the output file.
what's the meaning of iterations of each <outputFile> in the results file(XML)?
And some of the recombination events has an overlap sequence in different
iteration, how can we do a filter to make a reliable recombination events ?
Sorry about my English.
thx.
li
```
Original issue reported on code.google.com by `yisong...@gmail.com` on 23 Mar 2015 at 7:33
|
non_automation
|
can t understand the output file hi i am a bit confused checking the output file what s the meaning of iterations of each in the results file xml and some of the recombination events has an overlap sequence in different iteration how can we do a filter to make a reliable recombination events sorry about my english thx li original issue reported on code google com by yisong gmail com on mar at
| 0
|
2,315
| 11,739,359,997
|
IssuesEvent
|
2020-03-11 17:34:29
|
submariner-io/submariner
|
https://api.github.com/repos/submariner-io/submariner
|
closed
|
Remove bash-driving-kind deploys after Armada is shown solid
|
armada automation
|
Once we have https://github.com/submariner-io/submariner/pull/317 merged, per feedback on the PR, we want to remove the Bash deployment logic that is being deprecated in favor of Armada. Want to wait and remove the flag after the Armada path has been proven solid by general use in CI for a while, and after the main Armada PoC PR has been merged and an official release made. Removing the Bash logic in a commit distinct from https://github.com/submariner-io/submariner/pull/317 is also designed to allow for easier reverts, so we can get back to having both options without removing the Armada path.
|
1.0
|
Remove bash-driving-kind deploys after Armada is shown solid - Once we have https://github.com/submariner-io/submariner/pull/317 merged, per feedback on the PR, we want to remove the Bash deployment logic that is being deprecated in favor of Armada. Want to wait and remove the flag after the Armada path has been proven solid by general use in CI for a while, and after the main Armada PoC PR has been merged and an official release made. Removing the Bash logic in a commit distinct from https://github.com/submariner-io/submariner/pull/317 is also designed to allow for easier reverts, so we can get back to having both options without removing the Armada path.
|
automation
|
remove bash driving kind deploys after armada is shown solid once we have merged per feedback on the pr we want to remove the bash deployment logic that is being deprecated in favor of armada want to wait and remove the flag after the armada path has been proven solid by general use in ci for a while and after the main armada poc pr has been merged and an official release made removing the bash logic in a commit distinct from is also designed to allow for easier reverts so we can get back to having both options without removing the armada path
| 1
|
112,742
| 4,536,564,579
|
IssuesEvent
|
2016-09-08 20:49:06
|
semperfiwebdesign/all-in-one-seo-pack
|
https://api.github.com/repos/semperfiwebdesign/all-in-one-seo-pack
|
opened
|
Incorrect link in welcome panel
|
Bug PRIORITY - Medium UX
|
Reported here: https://www.facebook.com/photo.php?fbid=10154444790582071&set=o.118018874899420&type=3&theater
_"Hi. I possibly found a mistake in All in One SEO Pack 2.3.9.2.
When I click on "Submit an XML Sitemap to Google" it actually takes me to this url:
https://semperplugins.com/documentation/quality-guidelines-for-seo-titles-and-descriptions/
But I guess it should take me to something like this:
https://semperplugins.com/documentation/submitting-an-xml-sitemap-to-google/"_

|
1.0
|
Incorrect link in welcome panel - Reported here: https://www.facebook.com/photo.php?fbid=10154444790582071&set=o.118018874899420&type=3&theater
_"Hi. I possibly found a mistake in All in One SEO Pack 2.3.9.2.
When I click on "Submit an XML Sitemap to Google" it actually takes me to this url:
https://semperplugins.com/documentation/quality-guidelines-for-seo-titles-and-descriptions/
But I guess it should take me to something like this:
https://semperplugins.com/documentation/submitting-an-xml-sitemap-to-google/"_

|
non_automation
|
incorrect link in welcome panel reported here hi i possibly found a mistake in all in one seo pack when i click on submit an xml sitemap to google it actually takes me to this url but i guess it should take me to something like this
| 0
|
1,140
| 9,561,434,306
|
IssuesEvent
|
2019-05-03 23:18:20
|
askmench/mench-web-app
|
https://api.github.com/repos/askmench/mench-web-app
|
opened
|
Adjust Messenger "stop" command
|
Bot/Chat-Automation
|
TODO
- [ ] Remove ability to stop specific Action Plan intents as that function is in the Action Plan Webview
- [ ] Ask users if they want to stop all or stop a specific
- [ ] If choose all, remove all and unsubscribe
- [ ] If they say specific, guide them to open webview and do it.
- [ ] Change terminology from "choose one of the following options" to "choose one"
|
1.0
|
Adjust Messenger "stop" command - TODO
- [ ] Remove ability to stop specific Action Plan intents as that function is in the Action Plan Webview
- [ ] Ask users if they want to stop all or stop a specific
- [ ] If choose all, remove all and unsubscribe
- [ ] If they say specific, guide them to open webview and do it.
- [ ] Change terminology from "choose one of the following options" to "choose one"
|
automation
|
adjust messenger stop command todo remove ability to stop specific action plan intents as that function is in the action plan webview ask users if they want to stop all or stop a specific if choose all remove all and unsubscribe if they say specific guide them to open webview and do it change terminology from choose one of the following options to choose one
| 1
|
8,876
| 27,172,357,184
|
IssuesEvent
|
2023-02-17 20:42:38
|
OneDrive/onedrive-api-docs
|
https://api.github.com/repos/OneDrive/onedrive-api-docs
|
closed
|
Clear example missing
|
area:Picker Needs: Investigation automation:Closed
|
[Hello MS Team,
Could you please provide us a dedicated repo with all the requirements set in order for us to try it ?
It is kind of difficult to make it works locally.
A simple hello word page with a button that shows the React Component...
Especially for developers like me who do not use TypeScript.
]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: efd8a4b3-eb11-087a-2406-5979bd3931b3
* Version Independent ID: 39e9d7ef-e826-0096-2758-f6259741cd91
* Content: [Microsoft File Browser SDK (Preview) - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/controls/file-pickers/react/?view=odsp-graph-online)
* Content Source: [docs/controls/file-pickers/react/index.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/controls/file-pickers/react/index.md)
* Product: **onedrive**
* GitHub Login: @KevinTCoughlin
* Microsoft Alias: **keco**
|
1.0
|
Clear example missing -
[Hello MS Team,
Could you please provide us a dedicated repo with all the requirements set in order for us to try it ?
It is kind of difficult to make it works locally.
A simple hello word page with a button that shows the React Component...
Especially for developers like me who do not use TypeScript.
]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: efd8a4b3-eb11-087a-2406-5979bd3931b3
* Version Independent ID: 39e9d7ef-e826-0096-2758-f6259741cd91
* Content: [Microsoft File Browser SDK (Preview) - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/controls/file-pickers/react/?view=odsp-graph-online)
* Content Source: [docs/controls/file-pickers/react/index.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/controls/file-pickers/react/index.md)
* Product: **onedrive**
* GitHub Login: @KevinTCoughlin
* Microsoft Alias: **keco**
|
automation
|
clear example missing hello ms team could you please provide us a dedicated repo with all the requirements set in order for us to try it it is kind of difficult to make it works locally a simple hello word page with a button that shows the react component especially for developers like me who do not use typescript document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product onedrive github login kevintcoughlin microsoft alias keco
| 1
|
11,007
| 4,128,041,026
|
IssuesEvent
|
2016-06-10 02:54:30
|
TEAMMATES/teammates
|
https://api.github.com/repos/TEAMMATES/teammates
|
closed
|
Re-organize FileHelper classes
|
a-CodeQuality m.Aspect
|
There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.
|
1.0
|
Re-organize FileHelper classes - There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.
|
non_automation
|
re organize filehelper classes there are two filehelper s one for production reading input stream etc and one for non production reading files etc but they re not very well organized right now also there are some self defined functions that can actually fit in either one of these classes
| 0
|
1,360
| 9,977,978,089
|
IssuesEvent
|
2019-07-09 18:40:19
|
elastic/apm-integration-testing
|
https://api.github.com/repos/elastic/apm-integration-testing
|
closed
|
report a proper error when docker compose is not available
|
[zube]: In Review automation
|
The following error is shown when you run compose.py without docker-compose installed, we should report a proper error and add docker-compose as a requirement on the README
```
$ ./scripts/compose.py start master --with-metricbeat --with-filebeat
Starting stack services..
Traceback (most recent call last):
File "./scripts/compose.py", line 2769, in <module>
main()
File "./scripts/compose.py", line 2765, in main
setup()
File "./scripts/compose.py", line 2174, in __call__
self.args.func()
File "./scripts/compose.py", line 2540, in start_handler
subprocess.call(docker_compose_cmd + pull_params + image_services)
File "/usr/lib/python2.7/subprocess.py", line 172, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 13] Permission denied
```
|
1.0
|
report a proper error when docker compose is not available - The following error is shown when you run compose.py without docker-compose installed, we should report a proper error and add docker-compose as a requirement on the README
```
$ ./scripts/compose.py start master --with-metricbeat --with-filebeat
Starting stack services..
Traceback (most recent call last):
File "./scripts/compose.py", line 2769, in <module>
main()
File "./scripts/compose.py", line 2765, in main
setup()
File "./scripts/compose.py", line 2174, in __call__
self.args.func()
File "./scripts/compose.py", line 2540, in start_handler
subprocess.call(docker_compose_cmd + pull_params + image_services)
File "/usr/lib/python2.7/subprocess.py", line 172, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 13] Permission denied
```
|
automation
|
report a proper error when docker compose is not available the following error is shown when you run compose py without docker compose installed we should report a proper error and add docker compose as a requirement on the readme scripts compose py start master with metricbeat with filebeat starting stack services traceback most recent call last file scripts compose py line in main file scripts compose py line in main setup file scripts compose py line in call self args func file scripts compose py line in start handler subprocess call docker compose cmd pull params image services file usr lib subprocess py line in call return popen popenargs kwargs wait file usr lib subprocess py line in init errread errwrite file usr lib subprocess py line in execute child raise child exception oserror permission denied
| 1
|
8,845
| 27,172,322,978
|
IssuesEvent
|
2023-02-17 20:40:29
|
OneDrive/onedrive-api-docs
|
https://api.github.com/repos/OneDrive/onedrive-api-docs
|
closed
|
Can not open uploaded ONENOTE file
|
Needs: Investigation area:File Storage automation:Closed
|
### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
#### Expected or Desired Behavior
Uploaded ONENOTE file *.one can open in OneDrive.
#### Observed Behavior
Recently we found a weird problem, ONENOTE file can not be open after we uploaded it **for days.**
It shows following message:
**This section was originally created in an older version of OneNote.**

#### Steps to Reproduce
Step 1. use upload item API to upload OneNote file
`PUT /_api/v2.0/drives/d_id/items/root:/test_123_2.one:/content?%40name.conflictBehavior=replace`
```
Date: Thu, 22 Oct 2020 07:21:48 GMT
SPRequestGuid: c315869f-30f4-b000-7ffd-f6a0d1348000
request-id: c315869f-30f4-b000-7ffd-f6a0d1348000
```
Step 2. after a day, the OneNote file fails to open, but works fine at the beginning.
#### Question
1.Should I do any process or API request after I uploaded ONENOTE file?
2.For ONENOTE file which failed to open, I can download it and then upload it via browser operations to fix it.
Thank you.
|
1.0
|
Can not open uploaded ONENOTE file - ### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
#### Expected or Desired Behavior
Uploaded ONENOTE file *.one can open in OneDrive.
#### Observed Behavior
Recently we found a weird problem, ONENOTE file can not be open after we uploaded it **for days.**
It shows following message:
**This section was originally created in an older version of OneNote.**

#### Steps to Reproduce
Step 1. use upload item API to upload OneNote file
`PUT /_api/v2.0/drives/d_id/items/root:/test_123_2.one:/content?%40name.conflictBehavior=replace`
```
Date: Thu, 22 Oct 2020 07:21:48 GMT
SPRequestGuid: c315869f-30f4-b000-7ffd-f6a0d1348000
request-id: c315869f-30f4-b000-7ffd-f6a0d1348000
```
Step 2. after a day, the OneNote file fails to open, but works fine at the beginning.
#### Question
1.Should I do any process or API request after I uploaded ONENOTE file?
2.For ONENOTE file which failed to open, I can download it and then upload it via browser operations to fix it.
Thank you.
|
automation
|
can not open uploaded onenote file category question documentation issue bug expected or desired behavior uploaded onenote file one can open in onedrive observed behavior recently we found a weird problem onenote file can not be open after we uploaded it for days it shows following message this section was originally created in an older version of onenote steps to reproduce step use upload item api to upload onenote file put api drives d id items root test one content conflictbehavior replace date thu oct gmt sprequestguid request id step after a day the onenote file fails to open but works fine at the beginning question should i do any process or api request after i uploaded onenote file for onenote file which failed to open i can download it and then upload it via browser operations to fix it thank you
| 1
|
3,753
| 14,501,405,254
|
IssuesEvent
|
2020-12-11 19:27:47
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
closed
|
Sysdig-Agent deployment to Silver
|
Sysdig automation monitoring ops
|
**Describe the issue**
DXC team to deploy the Sysdig Agent to the silver cluster using CCM.
**Additional context**
Sysdig-agent CCM configuration has been developed and deployed successfully to KLAB. This is the followup handoff to DXC for a production deployment.
**Definition of done**
- [x] Sysdig-Agent running successfully in Silver cluster (deployed by DXC)
|
1.0
|
Sysdig-Agent deployment to Silver - **Describe the issue**
DXC team to deploy the Sysdig Agent to the silver cluster using CCM.
**Additional context**
Sysdig-agent CCM configuration has been developed and deployed successfully to KLAB. This is the followup handoff to DXC for a production deployment.
**Definition of done**
- [x] Sysdig-Agent running successfully in Silver cluster (deployed by DXC)
|
automation
|
sysdig agent deployment to silver describe the issue dxc team to deploy the sysdig agent to the silver cluster using ccm additional context sysdig agent ccm configuration has been developed and deployed successfully to klab this is the followup handoff to dxc for a production deployment definition of done sysdig agent running successfully in silver cluster deployed by dxc
| 1
|
797,972
| 28,210,913,615
|
IssuesEvent
|
2023-04-05 04:13:23
|
AY2223S2-CS2103T-F12-3/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-F12-3/tp
|
closed
|
Improve `edituser` command
|
priority.High type.Bug severity.Medium
|
* Documentation in help window has `INDEX` as a parameter, causing confusion
* #245
* Fixed in #203
* Command documentation is missing in UG and not immediately apparent
* #211
* #243
|
1.0
|
Improve `edituser` command - * Documentation in help window has `INDEX` as a parameter, causing confusion
* #245
* Fixed in #203
* Command documentation is missing in UG and not immediately apparent
* #211
* #243
|
non_automation
|
improve edituser command documentation in help window has index as a parameter causing confusion fixed in command documentation is missing in ug and not immediately apparent
| 0
|
57,479
| 11,756,544,094
|
IssuesEvent
|
2020-03-13 11:48:56
|
fac19/week2-hklo
|
https://api.github.com/repos/fac19/week2-hklo
|
closed
|
Chrome gives a cross site cookie warning in the console.
|
bug code review
|
It's not urgent but there will be a breaking change in a future release...
A cookie associated with a cross-site resource at http://giphy.com/ was set without the `SameSite` attribute. A future release of Chrome will only deliver cookies with cross-site requests if they are set with `SameSite=None` and `Secure`. You can review cookies in developer tools under Application>Storage>Cookies and see more details at https://www.chromestatus.com/feature/5088147346030592 and https://www.chromestatus.com/feature/5633521622188032.
|
1.0
|
Chrome gives a cross site cookie warning in the console. - It's not urgent but there will be a breaking change in a future release...
A cookie associated with a cross-site resource at http://giphy.com/ was set without the `SameSite` attribute. A future release of Chrome will only deliver cookies with cross-site requests if they are set with `SameSite=None` and `Secure`. You can review cookies in developer tools under Application>Storage>Cookies and see more details at https://www.chromestatus.com/feature/5088147346030592 and https://www.chromestatus.com/feature/5633521622188032.
|
non_automation
|
chrome gives a cross site cookie warning in the console it s not urgent but there will be a breaking change in a future release a cookie associated with a cross site resource at was set without the samesite attribute a future release of chrome will only deliver cookies with cross site requests if they are set with samesite none and secure you can review cookies in developer tools under application storage cookies and see more details at and
| 0
|
3,062
| 13,046,089,956
|
IssuesEvent
|
2020-07-29 08:25:24
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Support PRs to use PR ID in the artifact name
|
Team:Automation
|
When packaging a PR, the generated artifact is uploaded to a Google Cloud bucket with the name of the snapshot. We'd like to generate the artifact including the PR ID:
```diff
- gs://beats-ci-artifacts/snapshots/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.tar.gz
+ gs://beats-ci-artifacts/pull-requests/elastic-agent-8.0.0-SNAPSHOT-${PR_ID}-linux-x86_64.tar.gz
```
cc/ @elastic/observablt-robots @ph @EricDavisX
|
1.0
|
Support PRs to use PR ID in the artifact name - When packaging a PR, the generated artifact is uploaded to a Google Cloud bucket with the name of the snapshot. We'd like to generate the artifact including the PR ID:
```diff
- gs://beats-ci-artifacts/snapshots/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.tar.gz
+ gs://beats-ci-artifacts/pull-requests/elastic-agent-8.0.0-SNAPSHOT-${PR_ID}-linux-x86_64.tar.gz
```
cc/ @elastic/observablt-robots @ph @EricDavisX
|
automation
|
support prs to use pr id in the artifact name when packaging a pr the generated artifact is uploaded to a google cloud bucket with the name of the snapshot we d like to generate the artifact including the pr id diff gs beats ci artifacts snapshots elastic agent snapshot linux tar gz gs beats ci artifacts pull requests elastic agent snapshot pr id linux tar gz cc elastic observablt robots ph ericdavisx
| 1
|
173,480
| 14,427,446,080
|
IssuesEvent
|
2020-12-06 04:05:41
|
zparnold/terraform-cost-estimator
|
https://api.github.com/repos/zparnold/terraform-cost-estimator
|
closed
|
Introduce a documentation website
|
documentation
|
Like all the kewl kidz I think we should have a documentation website. We can stick to GH Pages and Jekyll for now but I'd like to cover more or less what the Readme.md has in a slightly more visually appealing format.
|
1.0
|
Introduce a documentation website - Like all the kewl kidz I think we should have a documentation website. We can stick to GH Pages and Jekyll for now but I'd like to cover more or less what the Readme.md has in a slightly more visually appealing format.
|
non_automation
|
introduce a documentation website like all the kewl kidz i think we should have a documentation website we can stick to gh pages and jekyll for now but i d like to cover more or less what the readme md has in a slightly more visually appealing format
| 0
|
858
| 8,419,360,869
|
IssuesEvent
|
2018-10-15 06:28:55
|
eclipse/smarthome
|
https://api.github.com/repos/eclipse/smarthome
|
closed
|
[Automation] ItemStateChangeTrigger misfires
|
Automation bug
|
I rely on nested groups with functions for much of my automation, and had essentially changed all my triggers over to GenericEventTriggers due to this issue, but I was wanting to use some item state change events. I've been chasing my tail on this for months, but I think I've gotten close to finding the cause.
With this item/group structure, the ItemStateChangeTrigger will fire 3 times when the item state changes... once for the item and once for each group. ItemStateUpdateTrigger does not have this issue. The group function seems to be related to the issue, and this is the simplest way I've found to reproduce the problem. I'm still looking into this, but thought I'd post it in case this is a known issue, or if one of the devs can spot something quicker than I can (very likely).
https://github.com/eclipse/smarthome/blob/master/bundles/automation/org.eclipse.smarthome.automation.module.core/src/main/java/org/eclipse/smarthome/automation/module/core/handler/ItemStateTriggerHandler.java
```
Group:Switch:OR(ON,OFF) gTest_Parent "Test Parent Group [%s]" <none> (gTest)
Group:Switch:OR(ON,OFF) gChild_1 "Test Child Group 1 [%s]" <none> (gTest_Parent)
Switch Test_Switch_1 "Test Switch 1 [%s]" <switch> (gChild_1,gTest_Parent)
```
```
'use strict';
var log = Java.type("org.slf4j.LoggerFactory").getLogger("org.eclipse.smarthome.model.script.Rules");
scriptExtension.importPreset("RuleSupport");
scriptExtension.importPreset("RuleSimple");
var testRule1 = new SimpleRule() {
execute: function( module, input) {
log.debug("JSR223: JS: Test Rule 1: [" + input['event'].itemName + "]: [" + input['event'].itemState + "]");
}
};
testRule1.setTriggers([
TriggerBuilder.create()
.withId("TestTrigger1")
.withTypeUID("core.ItemStateChangeTrigger")
.withConfiguration(
new Configuration({
"itemName": "Test_Switch_1"
})).build()
]);
automationManager.addRule(testRule1);
```
**Log result:**
```
2018-10-05 06:45:46.755 [DEBUG] [org.eclipse.smarthome.model.script.Rules] - JSR223: JS: Test Rule 1: [Test_Switch_1]: [OFF]
2018-10-05 06:45:46.760 [DEBUG] [org.eclipse.smarthome.model.script.Rules] - JSR223: JS: Test Rule 1: [gChild_1]: [OFF]
2018-10-05 06:45:46.780 [DEBUG] [org.eclipse.smarthome.model.script.Rules] - JSR223: JS: Test Rule 1: [gTest_Parent]: [OFF]
```
**Event.log:**
```
2018-10-05 06:45:46.740 [INFO ] [smarthome.event.ItemCommandEvent] - Item 'Test_Switch_1' received command OFF
2018-10-05 06:45:46.744 [INFO ] [smarthome.event.ItemStateEvent] - Test_Switch_1 updated to OFF
2018-10-05 06:45:46.746 [INFO ] [smarthome.event.ItemStateChangedEvent] - Test_Switch_1 changed from ON to OFF
2018-10-05 06:45:46.757 [INFO ] [smarthome.event.GroupItemStateChangedEvent] - gChild_1 changed from ON to OFF through Test_Switch_1
2018-10-05 06:45:46.771 [INFO ] [smarthome.event.GroupItemStateChangedEvent] - gTest_Parent changed from ON to OFF through Test_Switch_1
```
**Relevant lines from org.eclipse.smarthome.automation:**
```
2018-10-05 06:45:46.752 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - ->FILTER: smarthome/items/Test_Switch_1/statechanged:Test_Switch_1
2018-10-05 06:45:46.752 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - Received Event: Source: null Topic: smarthome/items/Test_Switch_1/statechanged Type: ItemStateChangedEvent Payload: {"type":"OnOff","value":"OFF","oldType":"OnOff","oldValue":"ON"}
2018-10-05 06:45:46.753 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The trigger 'TestTrigger1' of rule '56a33ba0-0304-4232-b420-36f64302be3a' is triggered.
2018-10-05 06:45:46.755 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The rule '56a33ba0-0304-4232-b420-36f64302be3a' is executed.
2018-10-05 06:45:46.760 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - ->FILTER: smarthome/items/gChild_1/Test_Switch_1/statechanged:Test_Switch_1
2018-10-05 06:45:46.760 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - Received Event: Source: null Topic: smarthome/items/gChild_1/Test_Switch_1/statechanged Type: GroupItemStateChangedEvent Payload: {"type":"OnOff","value":"OFF","oldType":"OnOff","oldValue":"ON"}
2018-10-05 06:45:46.760 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The trigger 'TestTrigger1' of rule '56a33ba0-0304-4232-b420-36f64302be3a' is triggered.
2018-10-05 06:45:46.761 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The rule '56a33ba0-0304-4232-b420-36f64302be3a' is executed.
2018-10-05 06:45:46.779 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - ->FILTER: smarthome/items/gTest_Parent/Test_Switch_1/statechanged:Test_Switch_1
2018-10-05 06:45:46.779 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - Received Event: Source: null Topic: smarthome/items/gTest_Parent/Test_Switch_1/statechanged Type: GroupItemStateChangedEvent Payload: {"type":"OnOff","value":"OFF","oldType":"OnOff","oldValue":"ON"}
2018-10-05 06:45:46.779 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The trigger 'TestTrigger1' of rule '56a33ba0-0304-4232-b420-36f64302be3a' is triggered.
2018-10-05 06:45:46.780 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The rule '56a33ba0-0304-4232-b420-36f64302be3a' is executed.
```
|
1.0
|
[Automation] ItemStateChangeTrigger misfires - I rely on nested groups with functions for much of my automation, and had essentially changed all my triggers over to GenericEventTriggers due to this issue, but I was wanting to use some item state change events. I've been chasing my tail on this for months, but I think I've gotten close to finding the cause.
With this item/group structure, the ItemStateChangeTrigger will fire 3 times when the item state changes... once for the item and once for each group. ItemStateUpdateTrigger does not have this issue. The group function seems to be related to the issue, and this is the simplest way I've found to reproduce the problem. I'm still looking into this, but thought I'd post it in case this is a known issue, or if one of the devs can spot something quicker than I can (very likely).
https://github.com/eclipse/smarthome/blob/master/bundles/automation/org.eclipse.smarthome.automation.module.core/src/main/java/org/eclipse/smarthome/automation/module/core/handler/ItemStateTriggerHandler.java
```
Group:Switch:OR(ON,OFF) gTest_Parent "Test Parent Group [%s]" <none> (gTest)
Group:Switch:OR(ON,OFF) gChild_1 "Test Child Group 1 [%s]" <none> (gTest_Parent)
Switch Test_Switch_1 "Test Switch 1 [%s]" <switch> (gChild_1,gTest_Parent)
```
```
'use strict';
var log = Java.type("org.slf4j.LoggerFactory").getLogger("org.eclipse.smarthome.model.script.Rules");
scriptExtension.importPreset("RuleSupport");
scriptExtension.importPreset("RuleSimple");
var testRule1 = new SimpleRule() {
execute: function( module, input) {
log.debug("JSR223: JS: Test Rule 1: [" + input['event'].itemName + "]: [" + input['event'].itemState + "]");
}
};
testRule1.setTriggers([
TriggerBuilder.create()
.withId("TestTrigger1")
.withTypeUID("core.ItemStateChangeTrigger")
.withConfiguration(
new Configuration({
"itemName": "Test_Switch_1"
})).build()
]);
automationManager.addRule(testRule1);
```
**Log result:**
```
2018-10-05 06:45:46.755 [DEBUG] [org.eclipse.smarthome.model.script.Rules] - JSR223: JS: Test Rule 1: [Test_Switch_1]: [OFF]
2018-10-05 06:45:46.760 [DEBUG] [org.eclipse.smarthome.model.script.Rules] - JSR223: JS: Test Rule 1: [gChild_1]: [OFF]
2018-10-05 06:45:46.780 [DEBUG] [org.eclipse.smarthome.model.script.Rules] - JSR223: JS: Test Rule 1: [gTest_Parent]: [OFF]
```
**Event.log:**
```
2018-10-05 06:45:46.740 [INFO ] [smarthome.event.ItemCommandEvent] - Item 'Test_Switch_1' received command OFF
2018-10-05 06:45:46.744 [INFO ] [smarthome.event.ItemStateEvent] - Test_Switch_1 updated to OFF
2018-10-05 06:45:46.746 [INFO ] [smarthome.event.ItemStateChangedEvent] - Test_Switch_1 changed from ON to OFF
2018-10-05 06:45:46.757 [INFO ] [smarthome.event.GroupItemStateChangedEvent] - gChild_1 changed from ON to OFF through Test_Switch_1
2018-10-05 06:45:46.771 [INFO ] [smarthome.event.GroupItemStateChangedEvent] - gTest_Parent changed from ON to OFF through Test_Switch_1
```
**Relevant lines from org.eclipse.smarthome.automation:**
```
2018-10-05 06:45:46.752 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - ->FILTER: smarthome/items/Test_Switch_1/statechanged:Test_Switch_1
2018-10-05 06:45:46.752 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - Received Event: Source: null Topic: smarthome/items/Test_Switch_1/statechanged Type: ItemStateChangedEvent Payload: {"type":"OnOff","value":"OFF","oldType":"OnOff","oldValue":"ON"}
2018-10-05 06:45:46.753 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The trigger 'TestTrigger1' of rule '56a33ba0-0304-4232-b420-36f64302be3a' is triggered.
2018-10-05 06:45:46.755 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The rule '56a33ba0-0304-4232-b420-36f64302be3a' is executed.
2018-10-05 06:45:46.760 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - ->FILTER: smarthome/items/gChild_1/Test_Switch_1/statechanged:Test_Switch_1
2018-10-05 06:45:46.760 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - Received Event: Source: null Topic: smarthome/items/gChild_1/Test_Switch_1/statechanged Type: GroupItemStateChangedEvent Payload: {"type":"OnOff","value":"OFF","oldType":"OnOff","oldValue":"ON"}
2018-10-05 06:45:46.760 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The trigger 'TestTrigger1' of rule '56a33ba0-0304-4232-b420-36f64302be3a' is triggered.
2018-10-05 06:45:46.761 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The rule '56a33ba0-0304-4232-b420-36f64302be3a' is executed.
2018-10-05 06:45:46.779 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - ->FILTER: smarthome/items/gTest_Parent/Test_Switch_1/statechanged:Test_Switch_1
2018-10-05 06:45:46.779 [TRACE] [org.eclipse.smarthome.automation.module.core.handler.ItemStateTriggerHandler] - Received Event: Source: null Topic: smarthome/items/gTest_Parent/Test_Switch_1/statechanged Type: GroupItemStateChangedEvent Payload: {"type":"OnOff","value":"OFF","oldType":"OnOff","oldValue":"ON"}
2018-10-05 06:45:46.779 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The trigger 'TestTrigger1' of rule '56a33ba0-0304-4232-b420-36f64302be3a' is triggered.
2018-10-05 06:45:46.780 [DEBUG] [org.eclipse.smarthome.automation.core.internal.RuleEngineImpl] - The rule '56a33ba0-0304-4232-b420-36f64302be3a' is executed.
```
|
automation
|
itemstatechangetrigger misfires i rely on nested groups with functions for much of my automation and had essentially changed all my triggers over to genericeventtriggers due to this issue but i was wanting to use some item state change events i ve been chasing my tail on this for months but i think i ve gotten close to finding the cause with this item group structure the itemstatechangetrigger will fire times when the item state changes once for the item and once for each group itemstateupdatetrigger does not have this issue the group function seems to be related to the issue and this is the simplest way i ve found to reproduce the problem i m still looking into this but thought i d post it in case this is a known issue or if one of the devs can spot something quicker than i can very likely group switch or on off gtest parent test parent group gtest group switch or on off gchild test child group gtest parent switch test switch test switch gchild gtest parent use strict var log java type org loggerfactory getlogger org eclipse smarthome model script rules scriptextension importpreset rulesupport scriptextension importpreset rulesimple var new simplerule execute function module input log debug js test rule itemname itemstate settriggers triggerbuilder create withid withtypeuid core itemstatechangetrigger withconfiguration new configuration itemname test switch build automationmanager addrule log result js test rule js test rule js test rule event log item test switch received command off test switch updated to off test switch changed from on to off gchild changed from on to off through test switch gtest parent changed from on to off through test switch relevant lines from org eclipse smarthome automation filter smarthome items test switch statechanged test switch received event source null topic smarthome items test switch statechanged type itemstatechangedevent payload type onoff value off oldtype onoff oldvalue on the trigger of rule is triggered the rule is executed filter smarthome items gchild test switch statechanged test switch received event source null topic smarthome items gchild test switch statechanged type groupitemstatechangedevent payload type onoff value off oldtype onoff oldvalue on the trigger of rule is triggered the rule is executed filter smarthome items gtest parent test switch statechanged test switch received event source null topic smarthome items gtest parent test switch statechanged type groupitemstatechangedevent payload type onoff value off oldtype onoff oldvalue on the trigger of rule is triggered the rule is executed
| 1
|
38,139
| 12,528,269,401
|
IssuesEvent
|
2020-06-04 09:18:12
|
ckauhaus/nixpkgs
|
https://api.github.com/repos/ckauhaus/nixpkgs
|
opened
|
Vulnerability roundup 4: brackets-1.9: 1 advisory
|
1.severity: security
|
[search](https://search.nix.gsc.io/?q=brackets&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=brackets+in%3Apath&type=Code)
* [ ] [CVE-2019-8255](https://nvd.nist.gov/vuln/detail/CVE-2019-8255) CVSSv3=9.8 (nixos-19.03)
Scanned versions: nixos-19.03: 34c7eb7545d. May contain false positives.
|
True
|
Vulnerability roundup 4: brackets-1.9: 1 advisory - [search](https://search.nix.gsc.io/?q=brackets&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=brackets+in%3Apath&type=Code)
* [ ] [CVE-2019-8255](https://nvd.nist.gov/vuln/detail/CVE-2019-8255) CVSSv3=9.8 (nixos-19.03)
Scanned versions: nixos-19.03: 34c7eb7545d. May contain false positives.
|
non_automation
|
vulnerability roundup brackets advisory nixos scanned versions nixos may contain false positives
| 0
|
53,261
| 6,306,486,066
|
IssuesEvent
|
2017-07-21 21:08:32
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Can't change Server/TLS info for Active Directory
|
area/access-control kind/bug status/resolved status/to-test
|
**Rancher versions:** v1.6.6-rc1
**Steps to Reproduce:**
1. Log into AD using tad.rancher.io info with TLS on
2. Disable access
3. Change the server to the IP and turn off TLS
**Results:**
The info comes back as hostname with TLS on
|
1.0
|
Can't change Server/TLS info for Active Directory - **Rancher versions:** v1.6.6-rc1
**Steps to Reproduce:**
1. Log into AD using tad.rancher.io info with TLS on
2. Disable access
3. Change the server to the IP and turn off TLS
**Results:**
The info comes back as hostname with TLS on
|
non_automation
|
can t change server tls info for active directory rancher versions steps to reproduce log into ad using tad rancher io info with tls on disable access change the server to the ip and turn off tls results the info comes back as hostname with tls on
| 0
|
3,567
| 13,994,880,236
|
IssuesEvent
|
2020-10-28 01:52:22
|
elastic/e2e-testing
|
https://api.github.com/repos/elastic/e2e-testing
|
closed
|
Support tuning the wait times out of the feature file
|
automation metricbeat
|
As per https://github.com/elastic/metricbeat-tests-poc/pull/76#discussion_r346216594, we'd like to avoid exposing a setting in a when step which, although configurable, makes it more difficult to a user to change it in a dynamic manner.
|
1.0
|
Support tuning the wait times out of the feature file - As per https://github.com/elastic/metricbeat-tests-poc/pull/76#discussion_r346216594, we'd like to avoid exposing a setting in a when step which, although configurable, makes it more difficult to a user to change it in a dynamic manner.
|
automation
|
support tuning the wait times out of the feature file as per we d like to avoid exposing a setting in a when step which although configurable makes it more difficult to a user to change it in a dynamic manner
| 1
|
640
| 7,668,879,794
|
IssuesEvent
|
2018-05-14 07:48:00
|
DevExpress/testcafe
|
https://api.github.com/repos/DevExpress/testcafe
|
closed
|
Wrong handling of key pressing in input
|
AREA: client SYSTEM: automations TYPE: bug
|
### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
Input value changed after click and press "Up" button
### What is the expected behavior?
Input value should not be changed
### How would you reproduce the current behavior (if this is a bug)?
```js
import { Selector, ClientFunction } from 'testcafe';
fixture `fixture`
.page `http://dolzhikov-w8/172/RegressionTestsSite/ASPxEditors/ASPxDateEdit/T187651.aspx`;
test('test', async t => {
const input = Selector("#ASPxDateEdit1_I");
await t.click(input);
var oldValue = await input.value;
await t
.pressKey("up")
.expect(input.value).eql(oldValue);
});
```
### Specify your
* testcafe version: 0.18.6-dev20171222
|
1.0
|
Wrong handling of key pressing in input - ### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
Input value changed after click and press "Up" button
### What is the expected behavior?
Input value should not be changed
### How would you reproduce the current behavior (if this is a bug)?
```js
import { Selector, ClientFunction } from 'testcafe';
fixture `fixture`
.page `http://dolzhikov-w8/172/RegressionTestsSite/ASPxEditors/ASPxDateEdit/T187651.aspx`;
test('test', async t => {
const input = Selector("#ASPxDateEdit1_I");
await t.click(input);
var oldValue = await input.value;
await t
.pressKey("up")
.expect(input.value).eql(oldValue);
});
```
### Specify your
* testcafe version: 0.18.6-dev20171222
|
automation
|
wrong handling of key pressing in input are you requesting a feature or reporting a bug bug what is the current behavior input value changed after click and press up button what is the expected behavior input value should not be changed how would you reproduce the current behavior if this is a bug js import selector clientfunction from testcafe fixture fixture page test test async t const input selector i await t click input var oldvalue await input value await t presskey up expect input value eql oldvalue specify your testcafe version
| 1
|
361,396
| 10,708,077,990
|
IssuesEvent
|
2019-10-24 18:52:37
|
microsoft/terminal
|
https://api.github.com/repos/microsoft/terminal
|
closed
|
ConPTY: Extended Attributes only ever get turned on once, and are forever after turned off
|
Area-Rendering In-PR Issue-Bug Priority-3 Product-Conpty
|
When you `printf "\e[3;5;9mWhatever\e[m", repeatedly, what you get is this.
```
\e[3m\e[5m\e[9mwhatever\e[23m\e[25m\e[29m
\e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m
\e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m
```
|
1.0
|
ConPTY: Extended Attributes only ever get turned on once, and are forever after turned off - When you `printf "\e[3;5;9mWhatever\e[m", repeatedly, what you get is this.
```
\e[3m\e[5m\e[9mwhatever\e[23m\e[25m\e[29m
\e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m
\e[23m\e[25m\e[29mwhatever\e[23m\e[25m\e[29m
```
|
non_automation
|
conpty extended attributes only ever get turned on once and are forever after turned off when you printf e e m repeatedly what you get is this e e e e e e e e e e e e e e e e e e
| 0
|
6,965
| 24,064,864,675
|
IssuesEvent
|
2022-09-17 10:33:09
|
smcnab1/op-question-mark
|
https://api.github.com/repos/smcnab1/op-question-mark
|
closed
|
[BUG] Sleep Sensors with Mrs
|
✔️Status: Confirmed 🐛Type: Bug 🏔Priority: High 🚗For: Automations
|
Mrs very light. Look at other way of triggering sleeping Boolean like time and phone state
|
1.0
|
[BUG] Sleep Sensors with Mrs - Mrs very light. Look at other way of triggering sleeping Boolean like time and phone state
|
automation
|
sleep sensors with mrs mrs very light look at other way of triggering sleeping boolean like time and phone state
| 1
|
6,416
| 23,116,948,638
|
IssuesEvent
|
2022-07-27 17:36:42
|
keycloak/keycloak-benchmark
|
https://api.github.com/repos/keycloak/keycloak-benchmark
|
closed
|
Deploy Cockroach DB in Minikube
|
enhancement provision automation
|
### Description
For now, go with a single instance.
### Discussion
_No response_
### Motivation
_No response_
### Details
_No response_
|
1.0
|
Deploy Cockroach DB in Minikube - ### Description
For now, go with a single instance.
### Discussion
_No response_
### Motivation
_No response_
### Details
_No response_
|
automation
|
deploy cockroach db in minikube description for now go with a single instance discussion no response motivation no response details no response
| 1
|