Schema of the dataset (one row per GitHub `IssuesEvent`):

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |
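The schema above pairs a string `label` column with its integer encoding `binary_label`. A minimal pandas sketch of that relationship, built from two rows taken from the table below (a column subset only; the variable names such as `process_issues` are illustrative, and the real dataset has far more rows and much longer bodies):

```python
import pandas as pd

# Two sample rows copied from the dataset below (subset of columns).
rows = [
    {"id": 11908256757, "type": "IssuesEvent", "created_at": "2020-03-31 00:25:35",
     "repo": "qgis/QGIS", "action": "closed",
     "title": "Define how many features to read in a processing model",
     "label": "process", "binary_label": 1},
    {"id": 2517618217, "type": "IssuesEvent", "created_at": "2015-01-16 16:04:07",
     "repo": "pydio/pydio-sync", "action": "closed",
     "title": "Console does not close on Ctrl-C",
     "label": "non_process", "binary_label": 0},
]
df = pd.DataFrame(rows)

# binary_label is the integer encoding of label ("process" -> 1, "non_process" -> 0).
assert (df["binary_label"] == (df["label"] == "process").astype(int)).all()

# Boolean indexing selects only the process-related issues.
process_issues = df[df["binary_label"] == 1]
print(len(process_issues))  # -> 1
```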
8,800
| 11,908,256,757
|
IssuesEvent
|
2020-03-31 00:25:35
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Define how many features to read in a processing model
|
Feature Request Processing
|
Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [20952](https://issues.qgis.org/issues/20952)
Redmine category:processing/modeller
---
In some cases, I have large data sets (millions of features or more) and when creating a processing model, I would like to be able to set a (temporary) limit of the number of features to read. That way, polishing the model will go quicker, when I don't have to wait for QGIS to read all of the features. When the work is finished, I can remove the limit and let all of the features be processed.
|
1.0
|
Define how many features to read in a processing model - Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [20952](https://issues.qgis.org/issues/20952)
Redmine category:processing/modeller
---
In some cases, I have large data sets (millions of features or more) and when creating a processing model, I would like to be able to set a (temporary) limit of the number of features to read. That way, polishing the model will go quicker, when I don't have to wait for QGIS to read all of the features. When the work is finished, I can remove the limit and let all of the features be processed.
|
process
|
define how many features to read in a processing model author name magnus nilsson magnus nilsson original redmine issue redmine category processing modeller in some cases i have large data sets millions of features or more and when creating a processing model i would like to be able to set a temporary limit of the number of features to read that way polishing the model will go quicker when i don t have to wait for qgis to read all of the features when the work is finished i can remove the limit and let all of the features be processed
| 1
|
4,532
| 7,372,788,466
|
IssuesEvent
|
2018-03-13 15:35:26
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Resizing root partition
|
cxp in-process product-question triaged virtual-machines-linux
|
How do I resize /dev/sda1 mounted on /? I can't easily umount it, since it is always being used when the VM is started.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 79e96564-edb3-8f68-63a4-960954f5d25b
* Version Independent ID: bc037676-9288-ac7d-3751-c2f30cfa608f
* Content: [Expand virtual hard disks on a Linux VM in Azure | Microsoft Docs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks)
* Content Source: [articles/virtual-machines/linux/expand-disks.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/linux/expand-disks.md)
* Service: **virtual-machines-linux**
* GitHub Login: @iainfoulds
* Microsoft Alias: **iainfou**
|
1.0
|
Resizing root partition - How do I resize /dev/sda1 mounted on /? I can't easily umount it, since it is always being used when the VM is started.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 79e96564-edb3-8f68-63a4-960954f5d25b
* Version Independent ID: bc037676-9288-ac7d-3751-c2f30cfa608f
* Content: [Expand virtual hard disks on a Linux VM in Azure | Microsoft Docs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks)
* Content Source: [articles/virtual-machines/linux/expand-disks.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/linux/expand-disks.md)
* Service: **virtual-machines-linux**
* GitHub Login: @iainfoulds
* Microsoft Alias: **iainfou**
|
process
|
resizing root partition how do i resize the size of dev mounted on i can t easily umount it since it is always being used when the vm is started document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service virtual machines linux github login iainfoulds microsoft alias iainfou
| 1
|
1,690
| 2,517,618,217
|
IssuesEvent
|
2015-01-16 16:04:07
|
pydio/pydio-sync
|
https://api.github.com/repos/pydio/pydio-sync
|
closed
|
Console does not close on Ctrl-C
|
priority:high
|
Running:
python -m pydio.main -s http://foo.bar -d c:\foo\bar -w workspace -u user -p pass
After a while (when the sync threads start) the process cannot be closed from the console and has to be killed via the process manager.
|
1.0
|
Console does not close on Ctrl-C - Running:
python -m pydio.main -s http://foo.bar -d c:\foo\bar -w workspace -u user -p pass
After a while (when the sync threads start) the process cannot be closed from the console and has to be killed via the process manager.
|
non_process
|
console does not close on ctrl c running python m pydio main s d c foo bar w workspace u user p pass after a while when sync threads start process does not allow to close it self from console have to be killed via process manager
| 0
|
595,529
| 18,068,140,054
|
IssuesEvent
|
2021-09-20 21:48:42
|
cormas/cormas
|
https://api.github.com/repos/cormas/cormas
|
closed
|
Apply scenario settings button should provide feedback
|
enhancement UI priority 2
|
Once the Apply button was hit, it would be nice to have some kind of visual confirmation that the settings were applied. The information dialog should not be modal or require user interaction.
|
1.0
|
Apply scenario settings button should provide feedback - Once the Apply button was hit, it would be nice to have some kind of visual confirmation that the settings were applied. The information dialog should not be modal or require user interaction.
|
non_process
|
apply scenario settings button should provide feedback once the apply button was hit it would be nice to have some kind of visual confirmation that the settings were applied the information dialog should not be modal or require user interaction
| 0
|
126,593
| 17,947,249,657
|
IssuesEvent
|
2021-09-12 02:52:27
|
corbantjoyce/website
|
https://api.github.com/repos/corbantjoyce/website
|
closed
|
CVE-2020-28498 (Medium) detected in elliptic-6.5.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-28498 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p>
<p>Path to dependency file: website/package.json</p>
<p>Path to vulnerable library: website/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- webpack-4.42.0.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.1.0.tgz
- :x: **elliptic-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/corbantjoyce/website/commit/2d41f06ec8faa6317e843654af85f7dacef9b46e">2d41f06ec8faa6317e843654af85f7dacef9b46e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package elliptic before 6.5.4 is vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498>CVE-2020-28498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: v6.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28498 (Medium) detected in elliptic-6.5.2.tgz - autoclosed - ## CVE-2020-28498 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p>
<p>Path to dependency file: website/package.json</p>
<p>Path to vulnerable library: website/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- webpack-4.42.0.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.1.0.tgz
- :x: **elliptic-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/corbantjoyce/website/commit/2d41f06ec8faa6317e843654af85f7dacef9b46e">2d41f06ec8faa6317e843654af85f7dacef9b46e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package elliptic before 6.5.4 is vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498>CVE-2020-28498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: v6.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in elliptic tgz autoclosed cve medium severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file website package json path to vulnerable library website node modules elliptic package json dependency hierarchy react scripts tgz root library webpack tgz node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package elliptic before are vulnerable to cryptographic issues via the implementation in elliptic ec key js there is no check to confirm that the public key point passed into the derive function actually exists on the curve this results in the potential for the private key used in this implementation to be revealed after a number of ecdh operations are performed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
7,428
| 10,546,438,245
|
IssuesEvent
|
2019-10-02 21:29:01
|
openPMD/openPMD-projects
|
https://api.github.com/repos/openPMD/openPMD-projects
|
opened
|
Add GAPD
|
data source post-processing
|
Hi @franzpoeschel & @ejcjason,
do you want to provide a PR to add GAPD to the list of openPMD-enabled projects in this repo? :sparkles: :rocket:
All the best,
Axel
|
1.0
|
Add GAPD - Hi @franzpoeschel & @ejcjason,
do you want to provide a PR to add GAPD to the list of openPMD-enabled projects in this repo? :sparkles: :rocket:
All the best,
Axel
|
process
|
add gapd hi franzpoeschel ejcjason do you want to provide a pr to add gapd to the list of openpmd enabled projects in this repo sparkles rocket all the best axel
| 1
|
12,446
| 9,661,140,817
|
IssuesEvent
|
2019-05-20 17:12:12
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Is this supported with the REST version of Speech-to-text ?
|
cognitive-services/svc cxp product-question speech-service/subsvc triaged
|
This page only lists examples for the SDK version and there are no mentions of this feature on the REST API page. I really need this feature with the REST version, is it supported ?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6c3e4d66-e6fd-3ed1-f505-875064467f00
* Version Independent ID: 345501ca-4487-ff5a-b681-449508433cff
* Content: [Phrase Lists - Speech Services](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-phrase-lists#feedback)
* Content Source: [articles/cognitive-services/Speech-Service/how-to-phrase-lists.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/Speech-Service/how-to-phrase-lists.md)
* Service: **cognitive-services**
* Sub-service: **speech-service**
* GitHub Login: @rhurey
* Microsoft Alias: **rhurey**
|
2.0
|
Is this supported with the REST version of Speech-to-text ? - This page only lists examples for the SDK version and there are no mentions of this feature on the REST API page. I really need this feature with the REST version, is it supported ?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6c3e4d66-e6fd-3ed1-f505-875064467f00
* Version Independent ID: 345501ca-4487-ff5a-b681-449508433cff
* Content: [Phrase Lists - Speech Services](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-phrase-lists#feedback)
* Content Source: [articles/cognitive-services/Speech-Service/how-to-phrase-lists.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/Speech-Service/how-to-phrase-lists.md)
* Service: **cognitive-services**
* Sub-service: **speech-service**
* GitHub Login: @rhurey
* Microsoft Alias: **rhurey**
|
non_process
|
is this supported with the rest version of speech to text this page only lists examples for the sdk version and there are no mentions of this feature on the rest api page i really need this feature with the rest version is it supported document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services sub service speech service github login rhurey microsoft alias rhurey
| 0
|
88,922
| 3,787,539,497
|
IssuesEvent
|
2016-03-21 11:06:27
|
kubernetes/dashboard
|
https://api.github.com/repos/kubernetes/dashboard
|
closed
|
Inconsistent behavior on logs page for different k8s master versions
|
area/usability kind/bug priority/P2
|
#### Issue details
I'm using `local-up-cluster` with the newest kubernetes HEAD. After deploying some card with a wrong `containerImage`, the pod stays in pending state with errors. The logs page shows `Internal error (500)`. On an older cluster we can see a normal log entry saying that the pod is not started and is in state pending.
##### Environment
<!-- Describe how do you run Kubernetes and Dashboard.
Versions of Node.js, Go etc. are needed only from developers. To get them use console:
$ node --version
$ go version
-->
```
Dashboard version: HEAD
Kubernetes version: v.1.2.0-alpha-8 (Newest HEAD)
```
##### Steps to reproduce
<!-- Describe all steps needed to reproduce the issue. It is a good place to use numbered list. -->
1. Deploy app with not existing `containerImage`.
2. Wait for card to show error status.
3. Go to logs page.
4. Check same page on older kubernetes version.
##### Observed result
<!-- Describe observed result as precisely as possible. -->
Inconsistent behavior for logs page between different kubernetes versions.
##### Comments
<!-- If you have any comments or more details, put them here. -->
It's hard to say what is the expected behavior here. Pod is actually in pending state so the older version is correct to show this entry but there are some errors related to this pod and newer version actually displays internal error with event error message. For me newest behavior seems more convenient.
Here are some screens.
Newest k8s:

Older k8s:

What do you think? @bryk @maciaszczykm @cheld
|
1.0
|
Inconsistent behavior on logs page for different k8s master versions - #### Issue details
I'm using `local-up-cluster` with the newest kubernetes HEAD. After deploying some card with a wrong `containerImage`, the pod stays in pending state with errors. The logs page shows `Internal error (500)`. On an older cluster we can see a normal log entry saying that the pod is not started and is in state pending.
##### Environment
<!-- Describe how do you run Kubernetes and Dashboard.
Versions of Node.js, Go etc. are needed only from developers. To get them use console:
$ node --version
$ go version
-->
```
Dashboard version: HEAD
Kubernetes version: v.1.2.0-alpha-8 (Newest HEAD)
```
##### Steps to reproduce
<!-- Describe all steps needed to reproduce the issue. It is a good place to use numbered list. -->
1. Deploy app with not existing `containerImage`.
2. Wait for card to show error status.
3. Go to logs page.
4. Check same page on older kubernetes version.
##### Observed result
<!-- Describe observed result as precisely as possible. -->
Inconsistent behavior for logs page between different kubernetes versions.
##### Comments
<!-- If you have any comments or more details, put them here. -->
It's hard to say what is the expected behavior here. Pod is actually in pending state so the older version is correct to show this entry but there are some errors related to this pod and newer version actually displays internal error with event error message. For me newest behavior seems more convenient.
Here are some screens.
Newest k8s:

Older k8s:

What do you think? @bryk @maciaszczykm @cheld
|
non_process
|
inconsistent behavior on logs page for different master versions issue details i m using local up cluster with newest kubernetes head after deploying some card with wrong containerimage pod stays in pending state with errors logs page show internal error on older cluster we can see normal log entry saying that pod is not started and is in state pending environment describe how do you run kubernetes and dashboard versions of node js go etc are needed only from developers to get them use console node version go version dashboard version head kubernetes version v alpha newest head steps to reproduce deploy app with not existing containerimage wait for card to show error status go to logs page check same page on older kubernetes version observed result inconsistent behavior for logs page between different kubernetes versions comments it s hard to say what is the expected behavior here pod is actually in pending state so the older version is correct to show this entry but there are some errors related to this pod and newer version actually displays internal error with event error message for me newest behavior seems more convenient here are some screens newest older what do you think bryk maciaszczykm cheld
| 0
|
18,133
| 24,174,691,436
|
IssuesEvent
|
2022-09-22 23:22:47
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
Outputs of processing algorithms should honor default setting
|
Processing Feedback Feature Request Stale
|
### Feature description
It seems that in all processing algorithms the output is stated as a default 'temporary layer', for instance check the buffer part
https://docs.qgis.org/testing/en/docs/user_manual/processing_algs/qgis/vectorgeometry.html#buffer
Now in theory when an optional parameter is not provided the default value is to be used, or at least that is how it works for the other defaults. So if the output is not defined then temporary layers should be created automatically without the need of specifying that a temporary output is to be used.
### Additional context
Error raised

I added the default as per the documentation now no error shown and works ok

|
1.0
|
Outputs of processing algorithms should honor default setting - ### Feature description
It seems that in all processing algorithms the output is stated as a default 'temporary layer', for instance check the buffer part
https://docs.qgis.org/testing/en/docs/user_manual/processing_algs/qgis/vectorgeometry.html#buffer
Now in theory when an optional parameter is not provided the default value is to be used, or at least that is how it works for the other defaults. So if the output is not defined then temporary layers should be created automatically without the need of specifying that a temporary output is to be used.
### Additional context
Error raised

I added the default as per the documentation now no error shown and works ok

|
process
|
outputs of processing algorithms should honor default setting feature description it seems that in all processing algorithms the output is stated as a default temporary layer for instance check the buffer part now in theory when an optional parameter is not provided the default value is to be used or at least that is how it works for the other defaults so if the output is not defined then temporary layers should be created automatically without the need of specifying that a temporary output is to be used additional context error raised i added the default as per the documentation now no error shown and works ok
| 1
|
21,261
| 28,434,726,608
|
IssuesEvent
|
2023-04-15 06:48:43
|
GoogleCloudPlatform/pgadapter
|
https://api.github.com/repos/GoogleCloudPlatform/pgadapter
|
closed
|
Test failure for random date tests
|
type: process priority: p3
|
```
AbortedMockServerTest.testRandomResults:876->assertEqual:931 expected:<1582-10-12> but was:<1582-10-22>
```
The random results tests can fail if a random date during the julian/gregorian cut-off is selected.
|
1.0
|
Test failure for random date tests - ```
AbortedMockServerTest.testRandomResults:876->assertEqual:931 expected:<1582-10-12> but was:<1582-10-22>
```
The random results tests can fail if a random date during the julian/gregorian cut-off is selected.
|
process
|
test failure for random date tests abortedmockservertest testrandomresults assertequal expected but was the random results tests can fail if a random date during the julian gregorian cut off is selected
| 1
|
14,451
| 2,812,163,966
|
IssuesEvent
|
2015-05-18 06:27:27
|
minux/go-tour
|
https://api.github.com/repos/minux/go-tour
|
closed
|
ui: block page change on tab change in Firefox
|
auto-migrated Priority-Medium Type-Defect
|
```
>What steps will reproduce the problem?
1. Start go-tour in a tabbed browser with at least one other tab open
2. Press Ctrl+page up to switch to a different tab
3. Press Ctrl+page down to switch back
>What is the expected output? What do you see instead?
Expect to stay on same slide in go-tour; instead, go-tour progresses to next
slide when I return to the tab.
>What version of the product are you using? On what operating system?
Unknown; gotour does not respond to --version. I installed it earlier today
with `go get code.google.com/p/go-tour/gotour`
>Please provide any additional information below.
Replicated in Firefox 12.0 on Ubuntu 12.04. Curiously, does not happen on
Chromium 18.0.1025.151 Ubuntu 12.04.
I believe the patch below may address it, but I don't see build or test
instructions anywhere obvious, and as a go novice, I can't invest the time to
figure this out right now.
Come to think of it, it's somewhat dubious of Firefox to send these events to
the page at all, but working around the issue would make for a better go user
experience.
```
Original issue reported on code.google.com by `m.sakre...@gmail.com` on 10 Jun 2012 at 2:16
Attachments:
* [no-page-change-on-tab-change.patch](https://storage.googleapis.com/google-code-attachments/go-tour/issue-37/comment-0/no-page-change-on-tab-change.patch)
|
1.0
|
ui: block page change on tab change in Firefox - ```
>What steps will reproduce the problem?
1. Start go-tour in a tabbed browser with at least one other tab open
2. Press Ctrl+page up to switch to a different tab
3. Press Ctrl+page down to switch back
>What is the expected output? What do you see instead?
Expect to stay on same slide in go-tour; instead, go-tour progresses to next
slide when I return to the tab.
>What version of the product are you using? On what operating system?
Unknown; gotour does not respond to --version. I installed it earlier today
with `go get code.google.com/p/go-tour/gotour`
>Please provide any additional information below.
Replicated in Firefox 12.0 on Ubuntu 12.04. Curiously, does not happen on
Chromium 18.0.1025.151 Ubuntu 12.04.
I believe the patch below may address it, but I don't see build or test
instructions anywhere obvious, and as a go novice, I can't invest the time to
figure this out right now.
Come to think of it, it's somewhat dubious of Firefox to send these events to
the page at all, but working around the issue would make for a better go user
experience.
```
Original issue reported on code.google.com by `m.sakre...@gmail.com` on 10 Jun 2012 at 2:16
Attachments:
* [no-page-change-on-tab-change.patch](https://storage.googleapis.com/google-code-attachments/go-tour/issue-37/comment-0/no-page-change-on-tab-change.patch)
|
non_process
|
ui block page change on tab change in firefox what steps will reproduce the problem start go tour in a tabbed browser with at least one other tab open press ctrl page up to switch to a different tab press ctrl page down to switch back what is the expected output what do you see instead expect to stay on same slide in go tour instead go tour progresses to next slide when i return to the tab what version of the product are you using on what operating system unknown gotour does not respond to version i installed it earlier today with go get code google com p go tour gotour please provide any additional information below replicated in firefox on ubuntu curiously does not happen on chromium ubuntu i believe the patch below may address it but i don t see build or test instructions anywhere obvious and as a go novice i can t invest the time to figure this out right now come to think of it it s somewhat dubious of firefox to send these events to the page at all but working around the issue would make for a better go user experience original issue reported on code google com by m sakre gmail com on jun at attachments
| 0
|
252,835
| 19,073,632,805
|
IssuesEvent
|
2021-11-27 11:02:34
|
arcanaktepe/swe5732021fall
|
https://api.github.com/repos/arcanaktepe/swe5732021fall
|
closed
|
Requirements validation questions
|
documentation enhancement
|
Requirements validation questions should be updated with new ones
|
1.0
|
Requirements validation questions - Requirements validation questions should be updated with new ones
|
non_process
|
requirements validation questions requirements validation questions should be updated with new ones
| 0
|
25,149
| 12,501,513,604
|
IssuesEvent
|
2020-06-02 01:35:38
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
opened
|
Model fit incredibly slow
|
type:performance
|
Hi, I've been trying to fit the following model for the last 3 hours and the only output displayed by the model is 'Epoch 1/5'. I noticed that when others fitted their models, the output would also display 'Train on X samples, Validate on X samples', and thought maybe the lack of that display and the model hanging are related
```
#Creating the first layer
model = Sequential()
model.add(Conv1D(2,0, activation = 'relu', input_shape = X_train[0].shape ))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.4))
#Adding second layer
model.add(Conv1D(4, 0, activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.4))
#Adding third layer
model.add(Conv1D(8, 0, activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.4))
#Adding fourth layer
model.add(Conv1D(16, 0, activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(16, activation = 'relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(16, activation = 'relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(4, activation = 'softmax'))
model.summary()
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
history = model.fit(X_train, y_train, epochs = 5, batch_size = 5, validation_data = (X_test, y_test), verbose = 1)
```
Below is the result of model.summary():
```
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 2, 2) 2
_________________________________________________________________
batch_normalization (BatchNo (None, 2, 2) 8
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 1, 2) 0
_________________________________________________________________
dropout (Dropout) (None, 1, 2) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 2, 4) 4
_________________________________________________________________
batch_normalization_1 (Batch (None, 2, 4) 16
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 1, 4) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 1, 4) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 2, 8) 8
_________________________________________________________________
batch_normalization_2 (Batch (None, 2, 8) 32
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 1, 8) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 1, 8) 0
_________________________________________________________________
conv1d_3 (Conv1D) (None, 2, 16) 16
_________________________________________________________________
batch_normalization_3 (Batch (None, 2, 16) 64
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 1, 16) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 1, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
batch_normalization_4 (Batch (None, 16) 64
_________________________________________________________________
dropout_4 (Dropout) (None, 16) 0
_________________________________________________________________
dense_1 (Dense) (None, 16) 272
_________________________________________________________________
batch_normalization_5 (Batch (None, 16) 64
_________________________________________________________________
dropout_5 (Dropout) (None, 16) 0
_________________________________________________________________
dense_2 (Dense) (None, 4) 68
=================================================================
Total params: 890
Trainable params: 766
Non-trainable params: 124
_________________________________________________________________
```
Given the size of the model, I find it hard to believe that it would take over 3 hours for anything to happen. Any insight is appreciated and happy to provide any additional information.
|
True
|
Model fit incredibly slow - Hi, I've been trying to fit the following model for the last 3 hours and the the only output displayed by the model is 'Epoch 1/5'. I noticed when others fitted their models, the output would also display 'Train on X samples, Validate on X samples' and thought maybe the lack of seeing that display and the model hanging are related
```
#Creating the first layer
model = Sequential()
model.add(Conv1D(2,0, activation = 'relu', input_shape = X_train[0].shape ))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.4))
#Adding second layer
model.add(Conv1D(4, 0, activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.4))
#Adding third layer
model.add(Conv1D(8, 0, activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.4))
#Adding fourth layer
model.add(Conv1D(16, 0, activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool1D())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(16, activation = 'relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(16, activation = 'relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(4, activation = 'softmax'))
model.summary()
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
history = model.fit(X_train, y_train, epochs = 5, batch_size = 5, validation_data = (X_test, y_test), verbose = 1)
```
Below is the result of model.summary():
```
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 2, 2) 2
_________________________________________________________________
batch_normalization (BatchNo (None, 2, 2) 8
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 1, 2) 0
_________________________________________________________________
dropout (Dropout) (None, 1, 2) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 2, 4) 4
_________________________________________________________________
batch_normalization_1 (Batch (None, 2, 4) 16
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 1, 4) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 1, 4) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 2, 8) 8
_________________________________________________________________
batch_normalization_2 (Batch (None, 2, 8) 32
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 1, 8) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 1, 8) 0
_________________________________________________________________
conv1d_3 (Conv1D) (None, 2, 16) 16
_________________________________________________________________
batch_normalization_3 (Batch (None, 2, 16) 64
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 1, 16) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 1, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
batch_normalization_4 (Batch (None, 16) 64
_________________________________________________________________
dropout_4 (Dropout) (None, 16) 0
_________________________________________________________________
dense_1 (Dense) (None, 16) 272
_________________________________________________________________
batch_normalization_5 (Batch (None, 16) 64
_________________________________________________________________
dropout_5 (Dropout) (None, 16) 0
_________________________________________________________________
dense_2 (Dense) (None, 4) 68
=================================================================
Total params: 890
Trainable params: 766
Non-trainable params: 124
_________________________________________________________________
```
Given the size of the model, I find it hard to believe that it would take over 3 hours for anything to happen. Any insight is appreciated and happy to provide any additional information.
|
non_process
|
model fit incredibly slow hi i ve been trying to fit the following model for the last hours and the the only output displayed by the model is epoch i noticed when others fitted their models the output would also display train on x samples validate on x samples and thought maybe the lack of seeing that display and the model hanging are related creating the first layer model sequential model add activation relu input shape x train shape model add batchnormalization model add model add dropout adding second layer model add activation relu model add batchnormalization model add model add dropout adding third layer model add activation relu model add batchnormalization model add model add dropout adding fourth layer model add activation relu model add batchnormalization model add model add dropout model add flatten model add dense activation relu model add batchnormalization model add dropout model add dense activation relu model add batchnormalization model add dropout model add dense activation softmax model summary model compile optimizer adam loss binary crossentropy metrics history model fit x train y train epochs batch size validation data x test y test verbose below is the result of model summary layer type output shape param none batch normalization batchno none max none dropout dropout none none batch normalization batch none max none dropout dropout none none batch normalization batch none max none dropout dropout none none batch normalization batch none max none dropout dropout none flatten flatten none dense dense none batch normalization batch none dropout dropout none dense dense none batch normalization batch none dropout dropout none dense dense none total params trainable params non trainable params given the size of the model i find it hard to believe that it would take over hours for anything to happen any insight is appreciated and happy to provide any additional information
| 0
|
8,282
| 7,324,877,288
|
IssuesEvent
|
2018-03-03 01:41:44
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
[uap] CoreFx build is failing since 'RemotelyInvokable' does not contain a definition for 'LongWait'
|
area-Infrastructure
|
https://devdiv.visualstudio.com/DefaultCollection/DevDiv/_build?_a=summary&buildId=1436514
```text
2018-03-02T21:25:47.5255290Z Build FAILED.
2018-03-02T21:25:47.5268140Z
2018-03-02T21:25:47.5269301Z ProcessTestBase.Uap.cs(68,57): error CS0117: 'RemotelyInvokable' does not contain a definition for 'LongWait' [E:\A\_work\36\s\corefx\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
2018-03-02T21:25:47.5272048Z 0 Warning(s)
2018-03-02T21:25:47.5273015Z 1 Error(s)
```
cc @jkotas, @stephentoub
|
1.0
|
[uap] CoreFx build is failing since 'RemotelyInvokable' does not contain a definition for 'LongWait' - https://devdiv.visualstudio.com/DefaultCollection/DevDiv/_build?_a=summary&buildId=1436514
```text
2018-03-02T21:25:47.5255290Z Build FAILED.
2018-03-02T21:25:47.5268140Z
2018-03-02T21:25:47.5269301Z ProcessTestBase.Uap.cs(68,57): error CS0117: 'RemotelyInvokable' does not contain a definition for 'LongWait' [E:\A\_work\36\s\corefx\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
2018-03-02T21:25:47.5272048Z 0 Warning(s)
2018-03-02T21:25:47.5273015Z 1 Error(s)
```
cc @jkotas, @stephentoub
|
non_process
|
corefx build is failing since remotelyinvokable does not contain a definition for longwait text build failed processtestbase uap cs error remotelyinvokable does not contain a definition for longwait warning s error s cc jkotas stephentoub
| 0
|
136,929
| 5,290,518,337
|
IssuesEvent
|
2017-02-08 20:05:45
|
urbit/urbit
|
https://api.github.com/repos/urbit/urbit
|
closed
|
eyre crashes with no ;head;
|
%eyre difficulty low priority low
|
With
```
;html
;body
;pre:'''
hey
kids
'''
==
==
```
as a `hymn.hook` I produce a crash in eyre:
```
/~bud/home/~2015.6.3..20.24.40..d933/arvo/eyre/:<[190 3].[194 68]>
```
Which by the look of it is where we inject the charset. I guess this is right because a well formed html document should have a ;head;?
|
1.0
|
eyre crashes with no ;head; - With
```
;html
;body
;pre:'''
hey
kids
'''
==
==
```
as a `hymn.hook` I produce a crash in eyre:
```
/~bud/home/~2015.6.3..20.24.40..d933/arvo/eyre/:<[190 3].[194 68]>
```
Which by the look of it is where we inject the charset. I guess this is right because a well formed html document should have a ;head;?
|
non_process
|
eyre crashes with no head with html body pre hey kids as a hymn hook i produce a crash in eyre bud home arvo eyre which by the look of it is where we inject the charset i guess this is right because a well formed html document should have a head
| 0
|
275,321
| 8,575,571,830
|
IssuesEvent
|
2018-11-12 17:39:11
|
aowen87/TicketTester
|
https://api.github.com/repos/aowen87/TicketTester
|
closed
|
Exodus reader not serving up quadratic elements when present.
|
Bug Likelihood: 2 - Rare Priority: Normal Severity: 4 - Crash / Wrong Results
|
A customer has an exodus file with a 10-node tet element. The Exodus reader serves it up as a VTK_TETRA instead of VTK_QUADRATIC_TETRA.
Standard plots seem to render okay, but the Slice operator causes havoc.
I've attached the provided sample file.
Looking at the reader's source it should be a simple enough change to check the number of nodes when converting the Exodus element type to a vtk cell type.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1597
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Exodus reader not serving up quadratic elements when present.
Assigned to: Kathleen Biagas
Category:
Target version: 2.7
Author: Kathleen Biagas
Start: 09/09/2013
Due date:
% Done: 0
Estimated time: 2.0
Created: 09/09/2013 06:26 pm
Updated: 09/10/2013 09:22 pm
Likelihood: 2 - Rare
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.3
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
A customer has an exodus file with a 10-node tet element. The Exodus reader serves it up as a VTK_TETRA instead of VTK_QUADRATIC_TETRA.
Standard plots seem to render okay, but the Slice operator causes havoc.
I've attached the provided sample file.
Looking at the reader's source it should be a simple enough change to check the number of nodes when converting the Exodus element type to a vtk cell type.
Comments:
Updated reader to specify quadratic cells as such when encountered. Borrowed some knowledge from vtkExodusReader regarding exodus cell type specifications.M /src/databases/Exodus/avtExodusFileFormat.C
|
1.0
|
Exodus reader not serving up quadratic elements when present. - A customer has an exodus file with a 10-node tet element. The Exodus reader serves it up as a VTK_TETRA instead of VTK_QUADRATIC_TETRA.
Standard plots seem to render okay, but the Slice operator causes havoc.
I've attached the provided sample file.
Looking at the reader's source it should be a simple enough change to check the number of nodes when converting the Exodus element type to a vtk cell type.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1597
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Exodus reader not serving up quadratic elements when present.
Assigned to: Kathleen Biagas
Category:
Target version: 2.7
Author: Kathleen Biagas
Start: 09/09/2013
Due date:
% Done: 0
Estimated time: 2.0
Created: 09/09/2013 06:26 pm
Updated: 09/10/2013 09:22 pm
Likelihood: 2 - Rare
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.3
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
A customer has an exodus file with a 10-node tet element. The Exodus reader serves it up as a VTK_TETRA instead of VTK_QUADRATIC_TETRA.
Standard plots seem to render okay, but the Slice operator causes havoc.
I've attached the provided sample file.
Looking at the reader's source it should be a simple enough change to check the number of nodes when converting the Exodus element type to a vtk cell type.
Comments:
Updated reader to specify quadratic cells as such when encountered. Borrowed some knowledge from vtkExodusReader regarding exodus cell type specifications.M /src/databases/Exodus/avtExodusFileFormat.C
|
non_process
|
exodus reader not serving up quadratic elements when present a customer has an exodus file with a node tet element the exodus reader serves it up as a vtk tetra instead of vtk quadratic tetra standard plots seem to render okay but the slice operator causes havoc i ve attached the provided sample file looking at the reader s source it should be a simple enough change to check the number of nodes when converting the exodus element type to a vtk cell type redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject exodus reader not serving up quadratic elements when present assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood rare severity crash wrong results found in version impact expected use os all support group any description a customer has an exodus file with a node tet element the exodus reader serves it up as a vtk tetra instead of vtk quadratic tetra standard plots seem to render okay but the slice operator causes havoc i ve attached the provided sample file looking at the reader s source it should be a simple enough change to check the number of nodes when converting the exodus element type to a vtk cell type comments updated reader to specify quadratic cells as such when encountered borrowed some knowledge from vtkexodusreader regarding exodus cell type specifications m src databases exodus avtexodusfileformat c
| 0
|
19,379
| 10,416,896,137
|
IssuesEvent
|
2019-09-14 17:10:52
|
BattletechModders/ModTek
|
https://api.github.com/repos/BattletechModders/ModTek
|
closed
|
Request: Improve loading performance by not reading un-munged files
|
performance
|
InnerSphereMap loads more than 3K .json files. This adds significant time to the system load, due to the ExpandManifest() calls. When this line is invoked:
` var childModEntry = new ModEntry(modEntry, path, InferIDFromFile(filePath));`
The JSON files are parsed and loaded into a cache. This takes between 30-200 ms each, which is what adds significant time to the load.
In discussions, we talked about:
`InferIdFromFile can effectively be replaced by Path.FilenameWithoutExtension and then only try to read the file if needed
If that's the speed up`
Please consider this change for a future version.
|
True
|
Request: Improve loading performance by not reading un-munged files - InnerSphereMap loads more than 3K .json files. This adds significant time to the system load, due to the ExpandManifest() calls. When this line is invoked:
` var childModEntry = new ModEntry(modEntry, path, InferIDFromFile(filePath));`
The JSON files are parsed and loaded into a cache. This takes between 30-200 ms each, which is what adds significant time to the load.
In discussions, we talked about:
`InferIdFromFile can effectively be replaced by Path.FilenameWithoutExtension and then only try to read the file if needed
If that's the speed up`
Please consider this change for a future version.
|
non_process
|
request improve loading performance by not reading un munged files innerspheremap loads more than json files this adds significant time to the system load due to the expandmanifest calls when this line is invoked var childmodentry new modentry modentry path inferidfromfile filepath the json files are parsed and loaded into a cache this takes between ms each which is what adds significant time to the load in discussions we talked about inferidfromfile can effectively be replaced by path filenamewithoutextension and then only try to read the file if needed if that s the speed up please consider this change for a future version
| 0
|
15,461
| 19,678,442,243
|
IssuesEvent
|
2022-01-11 14:39:10
|
plazi/community
|
https://api.github.com/repos/plazi/community
|
closed
|
to be processed
|
process request
|
* please process this including holotype, and GBIF
* please remove first (researchgate) page.
*
[Jouault_Nel_2021_Tyrannomecia_Menat.pdf](https://github.com/plazi/community/files/7835665/Jouault_Nel_2021_Tyrannomecia_Menat.pdf)
e
|
1.0
|
to be processed - * please process this including holotype, and GBIF
* please remove first (researchgate) page.
*
[Jouault_Nel_2021_Tyrannomecia_Menat.pdf](https://github.com/plazi/community/files/7835665/Jouault_Nel_2021_Tyrannomecia_Menat.pdf)
e
|
process
|
to be processed please process this including holotype and gbif please remove first researchgate page e
| 1
|
1,007
| 3,474,612,828
|
IssuesEvent
|
2015-12-25 00:16:23
|
DoSomething/MessageBroker-PHP
|
https://api.github.com/repos/DoSomething/MessageBroker-PHP
|
closed
|
Process loggingQueue to create loggingGatewayQueue entries
|
mbc-logging-processor
|
Create new script `mbc-logging-processot_userTransactionals` to process messages in `loggingQueue` to create entries in `loggingGatewayQueue`.
- [ ] Add `log-type` = `transactional`
**Related**:
- https://github.com/DoSomething/MessageBroker-PHP/issues/13
**Note**:
This is a transition to ultimately point transactional messages to the `loggingGatewayQueue` from the `transactionalExchange` via the `loggingGatewayExchange`. See Exchange to Exchange binding via `exchangeBind($ticket = 0, $destination, $source, $routing_key = '', $nowait = false, $arguments = array())`
|
1.0
|
Process loggingQueue to create loggingGatewayQueue entries - Create new script `mbc-logging-processot_userTransactionals` to process messages in `loggingQueue` to create entries in `loggingGatewayQueue`.
- [ ] Add `log-type` = `transactional`
**Related**:
- https://github.com/DoSomething/MessageBroker-PHP/issues/13
**Note**:
This is a transition to ultimately point transactional messages to the `loggingGatewayQueue` from the `transactionalExchange` via the `loggingGatewayExchange`. See Exchange to Exchange binding via `exchangeBind($ticket = 0, $destination, $source, $routing_key = '', $nowait = false, $arguments = array())`
|
process
|
process loggingqueue to create logginggatewayqueue entries create new script mbc logging processot usertransactionals to process messages in loggingqueue to create entries in logginggatewayqueue add log type transactional related note this is a transition to ultimately point transactional messages to the logginggatewayqueue from the transactionalexchange via the logginggatewayexchange see exchange to exchange binding via exchangebind ticket destination source routing key nowait false arguments array
| 1
|
440,126
| 12,693,648,805
|
IssuesEvent
|
2020-06-22 04:07:33
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
opened
|
Improvement to customise the auth failure message headers description
|
Priority/Normal Type/Improvement
|
### Describe your problem(s)
Needs to have an improvement while we are getting auth failures by customizing the error response message.
### How will you implement it
This improvement is already available in the latest WUM updated APIM 2.6.0
|
1.0
|
Improvement to customise the auth failure message headers description - ### Describe your problem(s)
Needs to have an improvement while we are getting auth failures by customizing the error response message.
### How will you implement it
This improvement is already available in the latest WUM updated APIM 2.6.0
|
non_process
|
improvement to customise the auth failure message headers description describe your problem s needs to have an improvement while we are getting auth failures by customizing the error response message how will you implement it this improvement is already available in the latest wum updated apim
| 0
|
21,468
| 29,503,166,069
|
IssuesEvent
|
2023-06-03 02:03:28
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Unable to import `PreprocessorOptions` from types
|
type: typings npm: @cypress/webpack-preprocessor stale
|
Regression from https://github.com/cypress-io/cypress-webpack-preprocessor/pull/83/files
### Current behavior:
```js
const webpack = require('@cypress/webpack-preprocessor');
// Error:
// Namespace '_default' has no exported member 'PreprocessorOptions'
/** @type {webpack. PreprocessorOptions} */
const options = {
webpackOptions: {}
}
```
### Desired behavior:
No error
### How to reproduce:
See code above.
|
1.0
|
Unable to import `PreprocessorOptions` from types - Regression from https://github.com/cypress-io/cypress-webpack-preprocessor/pull/83/files
### Current behavior:
```js
const webpack = require('@cypress/webpack-preprocessor');
// Error:
// Namespace '_default' has no exported member 'PreprocessorOptions'
/** @type {webpack. PreprocessorOptions} */
const options = {
webpackOptions: {}
}
```
### Desired behavior:
No error
### How to reproduce:
See code above.
|
process
|
unable to import preprocessoroptions from types regression from current behavior js const webpack require cypress webpack preprocessor error namespace default has no exported member preprocessoroptions type webpack preprocessoroptions const options webpackoptions desired behavior no error how to reproduce see code above
| 1
|
2,482
| 5,255,760,923
|
IssuesEvent
|
2017-02-02 16:18:49
|
CGAL/cgal
|
https://api.github.com/repos/CGAL/cgal
|
closed
|
Polyhedron demo: bug in point selection tool
|
bug CGAL 3D demo Pkg::Point_set_processing_3
|
## Issue Details
When doing a rectangle selection of N points the N point are drawn in red.
But when I create a point set item from this selection it contains N-1 points.
## Environment
master
|
1.0
|
Polyhedron demo: bug in point selection tool - ## Issue Details
When doing a rectangle selection of N points the N point are drawn in red.
But when I create a point set item from this selection it contains N-1 points.
## Environment
master
|
process
|
polyhedron demo bug in point selection tool issue details when doing a rectangle selection of n points the n point are drawn in red but when i create a point set item from this selection it contains n points environment master
| 1
|
111,512
| 9,533,436,911
|
IssuesEvent
|
2019-04-29 21:17:25
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
sql: speed-of-light benchmark for lower-level projections
|
A-testing C-enhancement
|
- less copies between C++ memory and Go - how much does that help?
- less deserialization from bytes into `roachpb.KeyValue` - how much does that help?
- what's the quickest we can read just one column or column family out of many?
- compare speed between an MVCCScan that retrieves 2 needed and 8 unneeded columns vs 2 needed and 0 unneeded columns
|
1.0
|
sql: speed-of-light benchmark for lower-level projections - - less copies between C++ memory and Go - how much does that help?
- less deserialization from bytes into `roachpb.KeyValue` - how much does that help?
- what's the quickest we can read just one column or column family out of many?
- compare speed between an MVCCScan that retrieves 2 needed and 8 unneeded columns vs 2 needed and 0 unneeded columns
|
non_process
|
sql speed of light benchmark for lower level projections less copies between c memory and go how much does that help less deserialization from bytes into roachpb keyvalue how much does that help what s the quickest we can read just one column or column family out of many compare speed between an mvccscan that retrieves needed and unneeded columns vs needed and unneeded columns
| 0
|
184,414
| 31,885,045,208
|
IssuesEvent
|
2023-09-16 21:08:22
|
pulumi/ci-mgmt
|
https://api.github.com/repos/pulumi/ci-mgmt
|
closed
|
Workflow failure: Update GH workflows, native providers (auto-pr)
|
p1 kind/engineering resolution/by-design
|
## Workflow Failure
[Update GH workflows, native providers (auto-pr)](https://github.com/pulumi/ci-mgmt/blob/master/.github/workflows/update-native-provider-workflows.yml) has failed. See the list of failures below:
- [2023-09-16T05:02:03.000Z](https://github.com/pulumi/ci-mgmt/actions/runs/6205404124)
|
1.0
|
Workflow failure: Update GH workflows, native providers (auto-pr) - ## Workflow Failure
[Update GH workflows, native providers (auto-pr)](https://github.com/pulumi/ci-mgmt/blob/master/.github/workflows/update-native-provider-workflows.yml) has failed. See the list of failures below:
- [2023-09-16T05:02:03.000Z](https://github.com/pulumi/ci-mgmt/actions/runs/6205404124)
|
non_process
|
workflow failure update gh workflows native providers auto pr workflow failure has failed see the list of failures below
| 0
|
13,972
| 16,744,831,307
|
IssuesEvent
|
2021-06-11 14:19:46
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Push notifications are not triggered to iOS on disabling and re-enabling the 'Receive Push Notifications?' option
|
Bug P2 Process: Fixed Process: Tested QA Process: Tested dev iOS
|
**Steps:**
1. Login/Signup into mobile
2. Navigate to my account
3. Disable 'Receive Push Notifications?'
4. Send push notification from SB. Push notifications are not triggered to iOS
5. Enable the option 'Receive Push Notifications?' again
6. Send push notification from SB and Observe
**Actual:** Push notifications are not triggered to iOS on disabling and enabling the 'Receive Push Notifications?' option
**Expected:** Push notifications should be triggered.
1. Issue not observed in Android.
2. Issue not observed once user logs out and logs in again after step 5
|
3.0
|
[iOS] Push notifications are not triggered to iOS on disabling and re-enabling the 'Receive Push Notifications?' option - **Steps:**
1. Login/Signup into mobile
2. Navigate to my account
3. Disable 'Receive Push Notifications?'
4. Send push notification from SB. Push notifications are not triggered to iOS
5. Enable the option 'Receive Push Notifications?' again
6. Send push notification from SB and Observe
**Actual:** Push notifications are not triggered to iOS on disabling and enabling the 'Receive Push Notifications?' option
**Expected:** Push notifications should be triggered.
1. Issue not observed in Android.
2. Issue not observed once user logs out and logs in again after step 5
|
process
|
push notifications are not triggered to ios on disabling and re enabling the receive push notifications option steps login signup into mobile navigate to my account disable receive push notifications send push notification from sb push notifications are not triggered to ios enable the option receive push notifications again send push notification from sb and observe actual push notifications are not triggered to ios on disabling and enabling the receive push notifications option expected push notifications should be triggered issue not observed in android issue not observed once user logs out and logs in again after step
| 1
|
1,920
| 4,756,474,355
|
IssuesEvent
|
2016-10-24 14:06:42
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
doc: `process` formatting issues
|
doc good first contribution process
|
* **Version**: master, v6.x
* **Platform**: n/a
* **Subsystem**: doc
I noticed there are some markdown formatting issues with the `process` documentation:
* There is a missing backtick in the description for [`process.setuid()`](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#processsetuidid) after `process.setuid(id)`.
* There are some missing link references: [here](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#event-beforeexit) (`process.exitCode`), [here](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#event-exit) (`process.exitCode`), and [here](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#processrelease) (`LTS`).
* In the [`process.release` section](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#processrelease), there are some instances of markdown underscores for italicization showing up literally in file names/extensions: `_.tar.gz_` (2) and `_node.lib_` (1). I think if the underscores are moved right outside the backticks, the italics *should* render correctly.
|
1.0
|
doc: `process` formatting issues - * **Version**: master, v6.x
* **Platform**: n/a
* **Subsystem**: doc
I noticed there are some markdown formatting issues with the `process` documentation:
* There is a missing backtick in the description for [`process.setuid()`](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#processsetuidid) after `process.setuid(id)`.
* There are some missing link references: [here](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#event-beforeexit) (`process.exitCode`), [here](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#event-exit) (`process.exitCode`), and [here](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#processrelease) (`LTS`).
* In the [`process.release` section](https://github.com/nodejs/node/blob/50be885285c602c2aa1eb9c6010cb26fe7d186ff/doc/api/process.md#processrelease), there are some instances of markdown underscores for italicization showing up literally in file names/extensions: `_.tar.gz_` (2) and `_node.lib_` (1). I think if the underscores are moved right outside the backticks, the italics *should* render correctly.
|
process
|
doc process formatting issues version master x platform n a subsystem doc i noticed there are some markdown formatting issues with the process documentation there is a missing backtick in the description for after process setuid id there are some missing link references process exitcode process exitcode and lts in the there are some instances of markdown underscores for italicization showing up literally in file names extensions tar gz and node lib i think if the underscores are moved right outside the backticks the italics should render correctly
| 1
|
8,768
| 11,885,536,640
|
IssuesEvent
|
2020-03-27 19:49:32
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Consider: Which CI tests should run with -fno-exceptions
|
type: process
|
We [decided](https://github.com/googleapis/google-cloud-cpp/blob/master/doc/adr/2019-01-04-error-reporting-with-statusor.md) that our libraries will not require exceptions to work. That is, a user should be able to compile with `-fno-exceptions` and still be able to use our library just fine. All of our error reporting is done via `Status` or `StatusOr<T>` objects instead of throwing exceptions. This decision does not *prohibit* our users from using exceptions themselves, it simply gives them the choice about whether or not they use exceptions.
So since our library must work whether it's compiled with `-fexceptions` (the default) or with `-fno-exceptions`, we should think about what flags our tests should use. The options I see are:
1. (Status Quo) **Compile and run most tests with `-fexceptions`, and one (or a small few) with `-fno-exceptions`**. This is the default setting for compilers, it is the setting used by most C++ users ([50+%](https://isocpp.org/blog/2018/03/results-summary-cpp-foundation-developer-survey-lite-2018-02) of users compile with exceptions enabled), and it is the setting required for proper/standard C++ (i.e., C++ is only defined **with** exceptions enabled; disabling exceptions is a compiler-supported non-standard dialect of C++).
2. **Compile and run most tests with `-fno-exceptions` and one (or a small few) with `-fexceptions`**. This puts most of our testing in the no-exceptions world, which we claim our library supports. One could argue that if the library works with `-fno-exceptions`, then it will work with exceptions because that's just adding features (throw, try, catch, etc) that we're not using. To a rough approximation, that reasoning makes some sense, but there are cases that it doesn't cover.
3. **Run *all tests* both with and without exceptions**. This is the most thorough testing and likely the ideal situation. However, it will come close to doubling our testing requirements. That said, if we can "afford" to run all these tests, it may be worth it.
|
1.0
|
Consider: Which CI tests should run with -fno-exceptions - We [decided](https://github.com/googleapis/google-cloud-cpp/blob/master/doc/adr/2019-01-04-error-reporting-with-statusor.md) that our libraries will not require exceptions to work. That is, a user should be able to compile with `-fno-exceptions` and still be able to use our library just fine. All of our error reporting is done via `Status` or `StatusOr<T>` objects instead of throwing exceptions. This decision does not *prohibit* our users from using exceptions themselves, it simply gives them the choice about whether or not they use exceptions.
So since our library must work whether it's compiled with `-fexceptions` (the default) or with `-fno-exceptions`, we should think about what flags our tests should use. The options I see are:
1. (Status Quo) **Compile and run most tests with `-fexceptions`, and one (or a small few) with `-fno-exceptions`**. This is the default setting for compilers, it is the setting used by most C++ users ([50+%](https://isocpp.org/blog/2018/03/results-summary-cpp-foundation-developer-survey-lite-2018-02) of users compile with exceptions enabled), and it is the setting required for proper/standard C++ (i.e., C++ is only defined **with** exceptions enabled; disabling exceptions is a compiler-supported non-standard dialect of C++).
2. **Compile and run most tests with `-fno-exceptions` and one (or a small few) with `-fexceptions`**. This puts most of our testing in the no-exceptions world, which we claim our library supports. One could argue that if the library works with `-fno-exceptions`, then it will work with exceptions because that's just adding features (throw, try, catch, etc) that we're not using. To a rough approximation, that reasoning makes some sense, but there are cases that it doesn't cover.
3. **Run *all tests* both with and without exceptions**. This is the most thorough testing and likely the ideal situation. However, it will come close to doubling our testing requirements. That said, if we can "afford" to run all these tests, it may be worth it.
|
process
|
consider which ci tests should run with fno exceptions we that our libraries will not require exceptions to work that is a user should be able to compile with fno exceptions and still be able to use our library just fine all of our error reporting is done via status or statusor objects instead of throwing exceptions this decision does not prohibit our users from using exceptions themselves it simply gives them the choice about whether or not they use exceptions so since our library must work whether it s compiled with fexceptions the default or with fno exceptions we should think about what flags our tests should use the options i see are status quo compile and run most tests with fexceptions and one or a small few with fno exceptions this is the default setting for compilers it is the setting used by most c users of users compile with exceptions enabled and it is the setting required for proper standard c i e c is only defined with exceptions enabled disabling exceptions is a compiler supported non standard dialect of c compile and run most tests with fno exceptions and one or a small few with fexceptions this puts most of our testing in the no exceptions world which we claim our library supports one could argue that if the library works with fno exceptions then it will work with exceptions because that s just adding features throw try catch etc that we re not using to a rough approximation that reasoning makes some sense but there are cases that it doesn t cover run all tests both with and without exceptions this is the most thorough testing and likely the ideal situation however it will close to double our testing requirements however if we can afford to run all these tests it may be worth it
| 1
|
18,993
| 24,986,710,305
|
IssuesEvent
|
2022-11-02 15:33:51
|
nextflow-io/nextflow
|
https://api.github.com/repos/nextflow-io/nextflow
|
closed
|
List of length 1 is converted to path
|
lang/processes
|
## Bug report
When creating a list with a single element containing a path, this list gets automatically converted to a path object.
### Expected behavior and actual behavior
I would expect that the list will always be passed to processes as a list, regardless of the number of elements stored in it.
### Steps to reproduce the problem
``` groovy
process C {
input:
tuple val(id), path(reads)
script:
println reads.size()
}
workflow {
ch_read_pairs = channel.fromFilePairs( params.reads, checkIfExists: true, size: -1 )
ch_read_pairs.view()
C( ch_read_pairs )
}
```
### Program output
`nextflow run test.nf --reads *_\{R1,R2\}.fastq` returns the number of elements
```
[temp, [/data/temp_R1.fastq, /data/temp_R2.fastq]]
2
```
`nextflow run test.nf --reads *_\{R1,\}.fastq` returns the file size
```
[temp, [/data/temp_R1.fastq]]
62
```
### Environment
* Nextflow version: 22.04.5
* Java version: openjdk 17.0.3
* Operating system: linux
* Bash version: GNU bash, 4.4.23
### Additional context
Same happens when defining channel with ` Channel.of( [path('temp_R1.fastq')] )`
|
1.0
|
List of length 1 is converted to path - ## Bug report
When creating a list with a single element containing a path, this list gets automatically converted to a path object.
### Expected behavior and actual behavior
I would expect that the list will always be passed to processes as a list, regardless of the number of elements stored in it.
### Steps to reproduce the problem
``` groovy
process C {
input:
tuple val(id), path(reads)
script:
println reads.size()
}
workflow {
ch_read_pairs = channel.fromFilePairs( params.reads, checkIfExists: true, size: -1 )
ch_read_pairs.view()
C( ch_read_pairs )
}
```
### Program output
`nextflow run test.nf --reads *_\{R1,R2\}.fastq` returns the number of elements
```
[temp, [/data/temp_R1.fastq, /data/temp_R2.fastq]]
2
```
`nextflow run test.nf --reads *_\{R1,\}.fastq` returns the file size
```
[temp, [/data/temp_R1.fastq]]
62
```
### Environment
* Nextflow version: 22.04.5
* Java version: openjdk 17.0.3
* Operating system: linux
* Bash version: GNU bash, 4.4.23
### Additional context
Same happens when defining channel with ` Channel.of( [path('temp_R1.fastq')] )`
|
process
|
list of length is converted to path bug report when creating a list with a single element containing a path this list gets automatically converted to path object expected behavior and actual behavior i would expect that list will always be passed to processes as list regardless of number of elements stored in it steps to reproduce the problem groovy process c input tuple val id path reads script println reads size workflow ch read pairs channel fromfilepairs params reads checkifexists true size ch read pairs view c ch read pairs program output nextflow run reads fastq returns number of elements nextflow run reads fastq returns file size environment nextflow version java version openjdk operating system linux bash version gnu bash additional context same happens when defining channel with channel of
| 1
|
18,185
| 24,235,793,156
|
IssuesEvent
|
2022-09-26 23:04:51
|
ethereum/EIPs
|
https://api.github.com/repos/ethereum/EIPs
|
closed
|
Rollback EIP-712 from "final" to "draft" OR deprecate EIP-712 with a new one
|
w-stale question r-process
|
EIP-1 mandates a "final" status to be terminal. But the current snapshot of EIP-712 is flawed in various ways, as discussed in #5457
Moreover, it has two "Specification" sections.
We need to make the following decision:
Option 1: roll back the EIP-712 status from FINAL to DRAFT to allow a massive update, a one-time exception to the EIP-1 mandate
Option 2: encourage the original authors of EIP-712 to write a new EIP with a new number that "updates" EIP-712 (thus considering EIP-712 deprecated).
Also feel free to suggest other options.
@Pandapip1 @lightclient @SamWilsn @axic @MicahZoltu @gcolvin for thoughts
|
1.0
|
Rollback EIP-712 from "final" to "draft" OR deprecate EIP-712 with a new one - EIP-1 mandates a "final" status to be terminal. But the current snapshot of EIP-712 is flawed in various ways, as discussed in #5457
Moreover, it has two "Specification" sections.
We need to make the following decision:
Option 1: roll back the EIP-712 status from FINAL to DRAFT to allow a massive update, a one-time exception to the EIP-1 mandate
Option 2: encourage the original authors of EIP-712 to write a new EIP with a new number that "updates" EIP-712 (thus considering EIP-712 deprecated).
Also feel free to suggest other options.
@Pandapip1 @lightclient @SamWilsn @axic @MicahZoltu @gcolvin for thoughts
|
process
|
rollback eip from final to draft or deprecate eip with a new one eip mandates a final status to be terminal but the current snapshot of eip is flawed in various ways as discussed in more over it has two specification“ sections we need to make the following decision option rollback eip status from final back to draft to allow massive update a one time exception to eip mandate option encourage original authors of eip to write a new eip with new number that updates ei thus considered eip deprecated also feel free to suggest other options lightclient samwilsn axic micahzoltu gcolvin for thoughts
| 1
|
24,211
| 7,467,424,155
|
IssuesEvent
|
2018-04-02 15:15:01
|
zurb/foundation-sites
|
https://api.github.com/repos/zurb/foundation-sites
|
closed
|
build: use Husky for Git hooks
|
build
|
It makes sense to run some checks when working with the develop branch.
We should use [husky](https://github.com/typicode/husky) for this.
* [ ] eslint
* [ ] sasslint / stylelint
* [x] unit tests
* [ ] ...
Can we adapt airbnb coding guidelines for Foundation Sites? https://github.com/zurb/foundation-sites/pull/9333#issuecomment-259777119
|
1.0
|
build: use Husky for Git hooks - It makes sense to run some checks when working with the develop branch.
We should use [husky](https://github.com/typicode/husky) for this.
* [ ] eslint
* [ ] sasslint / stylelint
* [x] unit tests
* [ ] ...
Can we adapt airbnb coding guidelines for Foundation Sites? https://github.com/zurb/foundation-sites/pull/9333#issuecomment-259777119
|
non_process
|
build use husky for git hooks it makes sense to run some checks when working with the develop branch we should use for this eslint sasslint stylelint unit tests can we adapt airbnb coding guidelines for foundation sites
| 0
|
54,696
| 13,886,161,125
|
IssuesEvent
|
2020-10-18 23:22:37
|
ascott18/TellMeWhen
|
https://api.github.com/repos/ascott18/TellMeWhen
|
closed
|
[Shadowlands] [Bug] Crash when trying to edit text icons' displays
|
S: cantfix T: defect V: retail
|
**What version of TellMeWhen are you using?**
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
9.0 PTR
**What steps will reproduce the problem?**
1. When the mouse moves to the box under the text display scheme, the game will exit directly
2.
3.
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->
|
1.0
|
[Shadowlands] [Bug] Crash when trying to edit text icons' displays - **What version of TellMeWhen are you using?**
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
9.0 PTR
**What steps will reproduce the problem?**
1. When the mouse moves to the box under the text display scheme, the game will exit directly
2.
3.
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->
|
non_process
|
crash when trying to edit text icons displays what version of tellmewhen are you using ptr what steps will reproduce the problem when the mouse moves to the box under the text display scheme the game will exit directly what do you expect to happen what happens instead screenshots and export strings if your issue pertains to a specific icon or group please post the relevant export string s to get an export string open the icon editor and click the button labeled import export backup select the to string option for the appropriate export type icon group or profile and then press ctrl c to copy it to your clipboard additionally if applicable add screenshots to help explain your problem you can paste images directly into github issues or you can upload files as well additional info
| 0
|
4,042
| 6,973,256,188
|
IssuesEvent
|
2017-12-11 19:50:46
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Error filtering on time buckets on redshift.
|
Bug Database/Redshift Priority/P2 Query Processor
|
A count of rows by week limited to the past 30 days blows up on our redshift CI instance.

|
1.0
|
Error filtering on time buckets on redshift. - A count of rows by week limited to the past 30 days blows up on our redshift CI instance.

|
process
|
error filtering on time buckets on redshift a count of rows by week limited to the past days blows up on our redshift ci instance
| 1
|
544,862
| 15,930,393,674
|
IssuesEvent
|
2021-04-14 00:46:04
|
Xmetalfanx/website
|
https://api.github.com/repos/Xmetalfanx/website
|
closed
|
loading="lazy" change
|
Priority enhancement
|
start doing this on SELECTED images ... keep the lazysizes.js file in place but un-"link" it from the template
|
1.0
|
loading="lazy" change - start doing this on SELECTED images ... keep the lazysizes.js file in place but un-"link" it from the template
|
non_process
|
loading lazy change start doing this on selected images keep the lazysizes js file in place but un link it from the template
| 0
|
4,337
| 7,243,838,267
|
IssuesEvent
|
2018-02-14 13:15:58
|
bounswe/bounswe2018group1
|
https://api.github.com/repos/bounswe/bounswe2018group1
|
closed
|
Searching Good Repositories
|
Position: In-Process Priority: High Who: Personal
|
Due Date: 12.02.2018
The first week's homework is about git and some repositories that we like. After determining the repos, everyone can create a Google Docs file and update it. Then everyone should send the link to the file that he/she has created.
* [x] Cemal Aytekin
* [x] Ahmet Yasin Alp
* [x] Ece Ata
* [x] Hatice Melike Ecevit
* [ ] Akın İlerle
* [x] Öncel Keleş
* [x] Volkan Yılmaz
* [x] Halil Samed Çıldır
* [x] Deniz Etkar
|
1.0
|
Searching Good Repositories - Due Date: 12.02.2018
The first week's homework is about git and some repositories that we like. After determining the repos, everyone can create a Google Docs file and update it. Then everyone should send the link to the file that he/she has created.
* [x] Cemal Aytekin
* [x] Ahmet Yasin Alp
* [x] Ece Ata
* [x] Hatice Melike Ecevit
* [ ] Akın İlerle
* [x] Öncel Keleş
* [x] Volkan Yılmaz
* [x] Halil Samed Çıldır
* [x] Deniz Etkar
|
process
|
searching good repositories due date the first week homework is about git and some repositories that we like after determining repos everyone can create a google docs file and update it then everyone should send the link of the file that he she have created cemal aytekin ahmet yasin alp ece ata hatice melike ecevit akın i̇lerle öncel keleş volkan yılmaz halil samed çıldır deniz etkar
| 1
|
22,691
| 31,994,001,792
|
IssuesEvent
|
2023-09-21 08:03:31
|
dotnet/csharpstandard
|
https://api.github.com/repos/dotnet/csharpstandard
|
closed
|
TC49-TG2 dead link to HTML version of C# standard
|
type: process
|
**Describe the bug**
At <https://www.ecma-international.org/task-groups/tc49-tg2/?tab=activities>, there is a dead link to an HTML version of C#.
**Example**
```HTML
<p>Jon Jagger has made an <a href="http://www.jaggersoft.com/csharp_standard/index.html">html</a> version of C#.</p>
```
**Expected behavior**
Delete the <http://www.jaggersoft.com/csharp_standard/index.html> link. Perhaps link to this GitHub repository instead.
**Additional context**
The link redirects to <http://jonjagger.blogspot.com/>. That blog has five posts tagged `c#`, but I don't see the C# standard there.
|
1.0
|
TC49-TG2 dead link to HTML version of C# standard - **Describe the bug**
At <https://www.ecma-international.org/task-groups/tc49-tg2/?tab=activities>, there is a dead link to an HTML version of C#.
**Example**
```HTML
<p>Jon Jagger has made an <a href="http://www.jaggersoft.com/csharp_standard/index.html">html</a> version of C#.</p>
```
**Expected behavior**
Delete the <http://www.jaggersoft.com/csharp_standard/index.html> link. Perhaps link to this GitHub repository instead.
**Additional context**
The link redirects to <http://jonjagger.blogspot.com/>. That blog has five posts tagged `c#`, but I don't see the C# standard there.
|
process
|
dead link to html version of c standard describe the bug at there is a dead link to an html version of c example html jon jagger has made an expected behavior delete the link perhaps link to this github repository instead additional context the link redirects to that blog has five posts tagged c but i don t see the c standard there
| 1
|
202,558
| 23,077,509,360
|
IssuesEvent
|
2022-07-26 02:06:46
|
idmarinas/lotgd-modules
|
https://api.github.com/repos/idmarinas/lotgd-modules
|
opened
|
CVE-2021-35065 (High) detected in glob-parent-5.1.2.tgz, glob-parent-3.1.0.tgz
|
security vulnerability
|
## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-5.1.2.tgz</b>, <b>glob-parent-3.1.0.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-5.1.2.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- del-6.0.0.tgz (Root Library)
- globby-11.0.4.tgz
- fast-glob-3.2.7.tgz
- :x: **glob-parent-5.1.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/chokidar/node_modules/glob-parent/package.json,/node_modules/glob-stream/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- glob-watcher-5.0.5.tgz
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent before 6.0.1 is vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution (glob-parent): 6.0.1</p>
<p>Direct dependency fix Resolution (del): 6.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-35065 (High) detected in glob-parent-5.1.2.tgz, glob-parent-3.1.0.tgz - ## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-5.1.2.tgz</b>, <b>glob-parent-3.1.0.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-5.1.2.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- del-6.0.0.tgz (Root Library)
- globby-11.0.4.tgz
- fast-glob-3.2.7.tgz
- :x: **glob-parent-5.1.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/chokidar/node_modules/glob-parent/package.json,/node_modules/glob-stream/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- glob-watcher-5.0.5.tgz
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent before 6.0.1 is vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution (glob-parent): 6.0.1</p>
<p>Direct dependency fix Resolution (del): 6.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in glob parent tgz glob parent tgz cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file package json path to vulnerable library node modules glob parent package json dependency hierarchy del tgz root library globby tgz fast glob tgz x glob parent tgz vulnerable library glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file package json path to vulnerable library node modules chokidar node modules glob parent package json node modules glob stream node modules glob parent package json dependency hierarchy gulp tgz root library glob watcher tgz chokidar tgz x glob parent tgz vulnerable library found in base branch master vulnerability details the package glob parent before are vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent direct dependency fix resolution del step up your open source security game with mend
| 0
|
115,180
| 4,652,809,470
|
IssuesEvent
|
2016-10-03 15:02:53
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
opened
|
blacklist
|
Priority-Critical
|
Nobody seems to be watching blacklist objection emails when DLM is out. There are lots, including from traveling UAM Curators, someone with a unm.edu email, etc.
|
1.0
|
blacklist - Nobody seems to be watching blacklist objection emails when DLM is out. There are lots, including from traveling UAM Curators, someone with a unm.edu email, etc.
|
non_process
|
blacklist nobody seems to be watching blacklist objection emails when dlm is out there are lots including from traveling uam curators someone with a unm edu email etc
| 0
|
11,870
| 14,672,562,363
|
IssuesEvent
|
2020-12-30 10:55:31
|
heim-rs/heim
|
https://api.github.com/repos/heim-rs/heim
|
closed
|
Replace bundled Duration::as_secs_f64 implementation with a real one
|
A-cpu A-process C-enhancement C-good-first-issue P-low
|
This code block in the `heim-process` crate: https://github.com/heim-rs/heim/blob/58d7110f803f94fb186a95a55000edea49ace7bd/heim-process/src/process/cpu_usage.rs#L27-L31 and its copy introduced in #295 (see `heim-cpu/src/usage.rs`) should be replaced with `Duration::as_secs_f64`.
MSRV was bumped to 1.45 and it is safe to do it now.
|
1.0
|
Replace bundled Duration::as_secs_f64 implementation with a real one - This code block in the `heim-process` crate: https://github.com/heim-rs/heim/blob/58d7110f803f94fb186a95a55000edea49ace7bd/heim-process/src/process/cpu_usage.rs#L27-L31 and its copy introduced in #295 (see `heim-cpu/src/usage.rs`) should be replaced with `Duration::as_secs_f64`.
MSRV was bumped to 1.45 and it is safe to do it now.
|
process
|
replace bundled duration as secs implementation with a real one this one code block at heim process crate and its copy introduced in see heim cpu src usage rs should be replaced with duration as secs msrv was bumped to and it is safe to do it now
| 1
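For context on the heim record above, the replacement it asks for can be sketched in plain Rust. This is an illustrative snippet, not heim's actual code; the variable names are made up, but `Duration::as_secs_f64` is the standard-library method the issue refers to:

```rust
use std::time::Duration;

fn main() {
    let d = Duration::new(2, 500_000_000); // 2.5 seconds

    // The hand-rolled conversion the issue wants removed:
    // whole seconds plus subsecond nanoseconds scaled to seconds.
    let manual = d.as_secs() as f64 + f64::from(d.subsec_nanos()) * 1e-9;

    // The standard-library equivalent, usable once the crate's MSRV reached 1.45.
    let std_way = d.as_secs_f64();

    // Both conversions agree to floating-point precision.
    assert!((manual - std_way).abs() < 1e-12);
    println!("{}", std_way); // prints 2.5
}
```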
|
2,457
| 5,240,680,192
|
IssuesEvent
|
2017-01-31 13:51:28
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
closed
|
Normalize takeda and gsk identifiers where possible
|
bug Launch? Processors
|
# Overview
See the corresponding re:dash query. Also, for identifiers outside of the core set (`nct/euctr/jprn/isrctn/actrn/who`), the regexps in the `get_clean_identifiers` function should be improved.
|
1.0
|
Normalize takeda and gsk identifiers where possible - # Overview
See the corresponding re:dash query. Also, the regexps in the `get_clean_identifiers` function should be improved for identifiers outside the core set (`nct/euctr/jprn/isrctn/actrn/who`).
|
process
|
normalize takeda and gsk identifiers where possible overview see corresponding re dash query also for identifiers outside of the core set nct euctr jprn isrctn actrn who should be improved regexps in get clean identifiers function
| 1
|
14,112
| 17,012,004,104
|
IssuesEvent
|
2021-07-02 06:39:13
|
beyondhb1079/s4us
|
https://api.github.com/repos/beyondhb1079/s4us
|
opened
|
PSA Email: Site is in beta
|
process
|
The site is polished up and we've added a bunch of scholarships we're aware of. Please click around, file bugs, and add any scholarships we missed!
|
1.0
|
PSA Email: Site is in beta - The site is polished up and we've added a bunch of scholarships we're aware of. Please click around, file bugs, and add any scholarships we missed!
|
process
|
psa email site is in beta the site is polished up and we ve added a bunch of scholarships we re aware please click around file bugs and add any scholarships we missed
| 1
|
330,949
| 28,497,385,692
|
IssuesEvent
|
2023-04-18 15:02:59
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix gradients.test_random_uniform
|
Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4320387980/jobs/7540583857" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|None
|numpy|None
|jax|None
<details>
<summary>Not found</summary>
Not found
</details>
|
1.0
|
Fix gradients.test_random_uniform - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4320387980/jobs/7540583857" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|None
|numpy|None
|jax|None
<details>
<summary>Not found</summary>
Not found
</details>
|
non_process
|
fix gradients test random uniform tensorflow img src torch none numpy none jax none not found not found
| 0
|
100,358
| 16,489,865,849
|
IssuesEvent
|
2021-05-25 01:02:55
|
billmcchesney1/rcloud
|
https://api.github.com/repos/billmcchesney1/rcloud
|
opened
|
CVE-2020-28500 (Medium) detected in lodash-4.17.14.tgz
|
security vulnerability
|
## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.14.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.14.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.14.tgz</a></p>
<p>Path to dependency file: rcloud/package.json</p>
<p>Path to vulnerable library: rcloud/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- eslint-5.16.0.tgz (Root Library)
- :x: **lodash-4.17.14.tgz** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.14","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:5.16.0;lodash:4.17.14","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.21"}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-28500 (Medium) detected in lodash-4.17.14.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.14.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.14.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.14.tgz</a></p>
<p>Path to dependency file: rcloud/package.json</p>
<p>Path to vulnerable library: rcloud/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- eslint-5.16.0.tgz (Root Library)
- :x: **lodash-4.17.14.tgz** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.14","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:5.16.0;lodash:4.17.14","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.21"}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file rcloud package json path to vulnerable library rcloud node modules lodash package json dependency hierarchy eslint tgz root library x lodash tgz vulnerable library found in base branch develop vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree eslint lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions vulnerabilityurl
| 0
|
100,958
| 21,560,087,782
|
IssuesEvent
|
2022-05-01 03:01:01
|
DS-13-Dev-Team/DS13
|
https://api.github.com/repos/DS-13-Dev-Team/DS13
|
closed
|
Suggestions and Ideas for Outbreak Escalation IV
|
Suggestion Type: Code
|
Suggestion:
Make necros softer in health and damage but give them more planning tools like vents, hiding spots and/or some sort of "dead disguise".
What do you think it'd add:
The necros will actually be playing swarm tactics in the game; they will need to play more cleverly, which will add more intensity to the game. Also, the escalation will go slower and be more balanced.
|
1.0
|
Suggestions and Ideas for Outbreak Escalation IV - Suggestion:
Make necros softer in health and damage but give them more planning tools like vents, hiding spots and/or some sort of "dead disguise".
What do you think it'd add:
The necros will actually be playing swarm tactics in the game; they will need to play more cleverly, which will add more intensity to the game. Also, the escalation will go slower and be more balanced.
|
non_process
|
suggestions and ideas for outbreak escalation iv suggestion make necros softer in health and damage but give them more planning tools like vents hidding spots and or some sort of dead disguise what do you think it d add the necros actually are playing swarm tactics in the game they need to play more clever and will add more intensity to the game also the escalation will go slower and more balanced
| 0
|
4,603
| 7,451,432,214
|
IssuesEvent
|
2018-03-29 02:55:46
|
shobrook/BitVision
|
https://api.github.com/repos/shobrook/BitVision
|
closed
|
Run some checks
|
low priority model preprocessing
|
Firstly, we should check Quandl for other candidate Blockchain-related datasets and, if we find any, test them out. Secondly, I'm not sure if the specificity and sensitivity calculations in the analysis module are correct; this needs to be checked. Lastly, let's check a random sample from each TA-Lib output to ensure the technical indicators are being calculated correctly.
Oh, and I'm using a TA-Lib wrapper that Mike built, and it doesn't support all of our desired indicators. So that needs to be figured out eventually.
|
1.0
|
Run some checks - Firstly, we should check Quandl for other candidate Blockchain-related datasets and, if we find any, test them out. Secondly, I'm not sure if the specificity and sensitivity calculations in the analysis module are correct; this needs to be checked. Lastly, let's check a random sample from each TA-Lib output to ensure the technical indicators are being calculated correctly.
Oh, and I'm using a TA-Lib wrapper that Mike built, and it doesn't support all of our desired indicators. So that needs to be figured out eventually.
|
process
|
run some checks firstly we should check quandl for other candidate blockchain related datasets and if we find any test them out secondly i m not sure if the specificity and sensitivity calculations are correct in the analysis module––this needs to be checked lastly let s check a random sample from each ta lib output to ensure the technical indicators are being calculated correctly oh and i m using a ta lib wrapper that mike built and it doesn t support all of our desired indicators so that needs to be figured out eventually
| 1
|
146,340
| 5,619,444,618
|
IssuesEvent
|
2017-04-04 01:38:12
|
smartcatdev/support-system
|
https://api.github.com/repos/smartcatdev/support-system
|
closed
|
Filters return tickets for multiple users on non privileged accounts
|
bug Important Priority
|
## Steps to Reproduce
- Create a user as either a subscriber or a customer
- Create some tickets in the admin interface (As an admin user)
- As the newly created user, change the filters so that they will return the newly created ticket
## Expected Result
The filter should only return tickets that have been created by the current user or none at all
## Actual Result
All tickets that match the criteria are returned and read permissions are ignored. Users can also comment on the issue
|
1.0
|
Filters return tickets for multiple users on non privileged accounts - ## Steps to Reproduce
- Create a user as either a subscriber or a customer
- Create some tickets in the admin interface (As an admin user)
- As the newly created user, change the filters so that they will return the newly created ticket
## Expected Result
The filter should only return tickets that have been created by the current user or none at all
## Actual Result
All tickets that match the criteria are returned and read permissions are ignored. Users can also comment on the issue
|
non_process
|
filters return tickets for multiple users on non privileged accounts steps to reproduce create a user as either a subscriber or a customer create some tickets in the admin interface as an admin user as the newly created user change the filters so that they will return the newly created ticket expected result the filter should only return tickets that have been created by the current user or none at all actual result all tickets that match the criteria are returned and read permissions are ignored users can also comment on the issue
| 0
|
63,075
| 3,193,931,190
|
IssuesEvent
|
2015-09-30 09:06:41
|
fusioninventory/fusioninventory-for-glpi
|
https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi
|
closed
|
Dynamic agent crash for netdiscovery
|
Category: Tasks Component: For junior contributor Component: Found in version Priority: High Status: Closed Tracker: Bug
|
---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 1560, http://forge.fusioninventory.org/issues/1560
Original Date: 2012-04-10
Original Assignee: David Durieux
---
None
|
1.0
|
Dynamic agent crash for netdiscovery - ---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 1560, http://forge.fusioninventory.org/issues/1560
Original Date: 2012-04-10
Original Assignee: David Durieux
---
None
|
non_process
|
dynamic agent crash for netdiscovery author name david durieux ddurieux original redmine issue original date original assignee david durieux none
| 0
|
13,913
| 16,674,232,623
|
IssuesEvent
|
2021-06-07 14:26:18
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
opened
|
cf-units=2.1.5 for OSX still preserves older version behaviour (<2.1.4)
|
preprocessor testing
|
`cf-units=2.1.5` installed in the esmvaltool conda env on OSX, as seen from the [GA test](https://github.com/ESMValGroup/ESMValCore/runs/2759882084?check_suite_focus=true), preserves the behavior of `num2date` from an older version, <2.1.4. We need to raise this with the `cf-units` devels. @bjlittle this is a SciTools package, would you be able to look into it maybe? :beer:
|
1.0
|
cf-units=2.1.5 for OSX still preserves older version behaviour (<2.1.4) - `cf-units=2.1.5` installed in the esmvaltool conda env on OSX, as seen from the [GA test](https://github.com/ESMValGroup/ESMValCore/runs/2759882084?check_suite_focus=true), preserves the behavior of `num2date` from an older version, <2.1.4. We need to raise this with the `cf-units` devels. @bjlittle this is a SciTools package, would you be able to look into it maybe? :beer:
|
process
|
cf units for osx still preserves older version behaviour cf units installed in the esmvaltool conda env on osx as seen from the preserves the behavior of from an older version we need to raise this with the cf units develsm bjlittle this is a scitools package would you be able to look into it maybe beer
| 1
|
8,528
| 11,704,841,582
|
IssuesEvent
|
2020-03-07 12:12:48
|
3wcircus/DamnThePotHolesBackend
|
https://api.github.com/repos/3wcircus/DamnThePotHolesBackend
|
opened
|
Setup/enforce Git Team activities
|
Process
|
Setup Git Team Skeleton and share with Team
- Enforce pull/merge requests
- Add users to project
- Educate others
|
1.0
|
Setup/enforce Git Team activities - Setup Git Team Skeleton and share with Team
- Enforce pull/merge requests
- Add users to project
- Educate others
|
process
|
setup enforce git team activities setup git team skeleton and share with team enforce pull merge requests add users to project educate others
| 1
|
159,864
| 20,085,918,962
|
IssuesEvent
|
2022-02-05 01:12:20
|
DavidSpek/kale
|
https://api.github.com/repos/DavidSpek/kale
|
opened
|
CVE-2021-41216 (High) detected in tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl, tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl
|
security vulnerability
|
## CVE-2021-41216 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /examples/dog-breed-classification/requirements/requirements.txt</p>
<p>Path to vulnerable library: /kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /examples/taxi-cab-classification/requirements.txt</p>
<p>Path to vulnerable library: /examples/taxi-cab-classification/requirements.txt</p>
<p>
Dependency Hierarchy:
- tfx_bsl-0.21.4-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library)
- :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. In affected versions the shape inference function for `Transpose` is vulnerable to a heap buffer overflow. This occurs whenever `perm` contains negative elements. The shape inference function does not validate that the indices in `perm` are all valid. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-11-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41216>CVE-2021-41216</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-3ff2-r28g-w7h9">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-3ff2-r28g-w7h9</a></p>
<p>Release Date: 2021-11-05</p>
<p>Fix Resolution: tensorflow - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-cpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-gpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-41216 (High) detected in tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl, tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-41216 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /examples/dog-breed-classification/requirements/requirements.txt</p>
<p>Path to vulnerable library: /kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /examples/taxi-cab-classification/requirements.txt</p>
<p>Path to vulnerable library: /examples/taxi-cab-classification/requirements.txt</p>
<p>
Dependency Hierarchy:
- tfx_bsl-0.21.4-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library)
- :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. In affected versions the shape inference function for `Transpose` is vulnerable to a heap buffer overflow. This occurs whenever `perm` contains negative elements. The shape inference function does not validate that the indices in `perm` are all valid. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-11-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41216>CVE-2021-41216</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-3ff2-r28g-w7h9">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-3ff2-r28g-w7h9</a></p>
<p>Release Date: 2021-11-05</p>
<p>Fix Resolution: tensorflow - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-cpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-gpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tensorflow whl tensorflow whl cve high severity vulnerability vulnerable libraries tensorflow whl tensorflow whl tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file examples dog breed classification requirements requirements txt path to vulnerable library kale examples dog breed classification requirements requirements txt dependency hierarchy x tensorflow whl vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file examples taxi cab classification requirements txt path to vulnerable library examples taxi cab classification requirements txt dependency hierarchy tfx bsl whl root library x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an open source platform for machine learning in affected versions the shape inference function for transpose is vulnerable to a heap buffer overflow this occurs whenever perm contains negative elements the shape inference function does not validate that the indices in perm are all valid the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
| 0
|
4,369
| 5,025,493,888
|
IssuesEvent
|
2016-12-15 09:23:43
|
asciidoctor/asciidoctor
|
https://api.github.com/repos/asciidoctor/asciidoctor
|
closed
|
Upgrade to Nokogiri 1.6.x
|
infrastructure
|
Upgrade to Nokogiri 1.6.x to coincide with dropping support for Ruby 1.8.
|
1.0
|
Upgrade to Nokogiri 1.6.x - Upgrade to Nokogiri 1.6.x to coincide with dropping support for Ruby 1.8.
|
non_process
|
upgrade to nokogiri x upgrade to nokogiri x to coincide with dropping support for ruby
| 0
|
17,171
| 22,745,012,944
|
IssuesEvent
|
2022-07-07 08:24:56
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
opened
|
Clean up StreamingPlatform / Engine
|
kind/toil team/distributed team/process-automation
|
**Description**
There are multiple things we can remove or clean up during #9600; some of these might only be possible at the end, but some are already possible at the beginning.
**Todo:**
- [ ] Remove the position from the TypedRecordProcessor#process see #9602
- [ ] Remove ProxyWriter from the ProcessingContext see #9602
- [ ] Remove writers from processing interface after #9724
- [ ] Clean up ProcessingContext, remove unused classes/objects
|
1.0
|
Clean up StreamingPlatform / Engine - **Description**
There are multiple things we can remove or clean up during #9600; some of these might only be possible at the end, but some are already possible at the beginning.
**Todo:**
- [ ] Remove the position from the TypedRecordProcessor#process see #9602
- [ ] Remove ProxyWriter from the ProcessingContext see #9602
- [ ] Remove writers from processing interface after #9724
- [ ] Clean up ProcessingContext, remove unused classes/objects
|
process
|
clean up streamingplatform engine description there a multiple things we can remove or clean up during some of these might be only possible at the end but some are already possible at the begin todo remove the position from the typedrecordprocessor process see remove proxywriter from the processingcontext see remove writers from processing interface after clean up processingcontext remove unused classes objects
| 1
|
232,859
| 7,681,065,920
|
IssuesEvent
|
2018-05-16 05:40:30
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.xfinity.com - Xfinity splash screen is not displayed
|
Re-triaged browser-firefox priority-normal
|
<!-- @browser: Firefox Mobile Nightly 55.0a1 (2017-04-07) -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: www.xfinity.com
**Browser / Version**: Firefox Mobile Nightly 55.0a1 (2017-04-07)
**Operating System**: Android 6.0.1
**Problem type**: Something else - I'll add details below
**Steps to Reproduce**
1. Navigate to: www.xfinity.com
2. Observe behavior.
**Expected Behavior:**
Xfinity splash screen is displayed.
**Actual Behavior:**
Xfinity splash screen is not displayed.
**Note:**
1. Reproducible also on the Firefox 52.0 Release and Chrome (Mobile) 57.0.2987.132.
2. Not reproducible on Opera 42.7.2246.114996.
3. Error (Android Studio):
```
4-07 14:21:37.059 5698-5739/? E/GeckoConsole: [JavaScript Error: "XML Parsing Error: not well-formed
Location: https://www.xfinity.com/ShoppingCartController.cajax
Line Number 1, Column 1:" {file: "https://www.xfinity.com/ShoppingCartController.cajax" line: 1 column: 1 source: "{"Details":"CheckSavedCart,True,True,10,3000,","HtmlContent":[],"OmnitureData":[],"ResponseType":99,"ValidationSummary":null}"}]
04-07 14:21:37.449 5698-5698/? D/SwitchBoard: compact-tabs = true
```
4. Screenshot attached.
**Watchers:**
@softvision-sergiulogigan
@softvision-oana-arbuzov
sv; country: us

_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.xfinity.com - Xfinity splash screen is not displayed - <!-- @browser: Firefox Mobile Nightly 55.0a1 (2017-04-07) -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: www.xfinity.com
**Browser / Version**: Firefox Mobile Nightly 55.0a1 (2017-04-07)
**Operating System**: Android 6.0.1
**Problem type**: Something else - I'll add details below
**Steps to Reproduce**
1. Navigate to: www.xfinity.com
2. Observe behavior.
**Expected Behavior:**
Xfinity splash screen is displayed.
**Actual Behavior:**
Xfinity splash screen is not displayed.
**Note:**
1. Reproducible also on the Firefox 52.0 Release and Chrome (Mobile) 57.0.2987.132.
2. Not reproducible on Opera 42.7.2246.114996.
3. Error (Android Studio):
```
4-07 14:21:37.059 5698-5739/? E/GeckoConsole: [JavaScript Error: "XML Parsing Error: not well-formed
Location: https://www.xfinity.com/ShoppingCartController.cajax
Line Number 1, Column 1:" {file: "https://www.xfinity.com/ShoppingCartController.cajax" line: 1 column: 1 source: "{"Details":"CheckSavedCart,True,True,10,3000,","HtmlContent":[],"OmnitureData":[],"ResponseType":99,"ValidationSummary":null}"}]
04-07 14:21:37.449 5698-5698/? D/SwitchBoard: compact-tabs = true
```
4. Screenshot attached.
**Watchers:**
@softvision-sergiulogigan
@softvision-oana-arbuzov
sv; country: us

_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
xfinity splash screen is not displayed url browser version firefox mobile nightly operating system android problem type something else i ll add details below steps to reproduce navigate to observe behavior expected behavior xfinity splash screen is displayed actual behavior xfinity splash screen is not displayed note reproducible also on the firefox release and chrome mobile not reproducible on opera error android studio e geckoconsole javascript error xml parsing error not well formed location line number column file line column source details checksavedcart true true htmlcontent omnituredata responsetype validationsummary null d switchboard compact tabs true screenshot attached watchers softvision sergiulogigan softvision oana arbuzov sv country us from with ❤️
| 0
|
33,744
| 12,216,887,480
|
IssuesEvent
|
2020-05-01 16:03:16
|
tomdgl397/goof
|
https://api.github.com/repos/tomdgl397/goof
|
opened
|
CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/goof/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /goof/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tomdgl397/goof/commit/3baa97602953cf7ff32d76181e857410ab0cfae9">3baa97602953cf7ff32d76181e857410ab0cfae9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/goof/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /goof/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tomdgl397/goof/commit/3baa97602953cf7ff32d76181e857410ab0cfae9">3baa97602953cf7ff32d76181e857410ab0cfae9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm goof node modules vm browserify example run index html path to vulnerable library goof node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details in jquery before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
5,058
| 7,861,207,729
|
IssuesEvent
|
2018-06-21 23:01:42
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests) (Trac #24)
|
Migrated from Trac enhancement post_processing senkbeil@uwm.edu
|
Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.
However, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.
Is it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.
However, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/24
```json
{
"status": "closed",
"changetime": "2009-09-02T20:37:37",
"description": "Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.\n\nHowever, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.\n\nIs it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.\n\nHowever, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.",
"reporter": "vlarson@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251923857000000",
"component": "post_processing",
"summary": "GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests)",
"priority": "minor",
"keywords": "scalars, gabls2, nightly plots, rtm, thlm, rtp2, thlp2",
"time": "2009-05-13T14:26:46",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "enhancement"
}
```
|
1.0
|
GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests) (Trac #24) - Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.
However, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.
Is it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.
However, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/24
```json
{
"status": "closed",
"changetime": "2009-09-02T20:37:37",
"description": "Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.\n\nHowever, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.\n\nIs it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.\n\nHowever, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.",
"reporter": "vlarson@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251923857000000",
"component": "post_processing",
"summary": "GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests)",
"priority": "minor",
"keywords": "scalars, gabls2, nightly plots, rtm, thlm, rtp2, thlp2",
"time": "2009-05-13T14:26:46",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "enhancement"
}
```
|
process
|
rtm thlm and are set to zero when plotgen is run manually but not for the nightly tests trac some time ago in order to test clubb s scalars brandon changed plotgen so that it outputs scalars in place of rtm and thlm the nightly plots work great however if clubb is run manually without outputting scalars and then plotgen is executed manually then rtm thlm and are set to zero for manual runs typically we don t want to check scalars we just want to plot standard versions of rtm thlm and i probably forgot to mention this earlier is it feasible to insert some nightly flags or re arrange some code so that the nightly plots test the scalars but the manual plots simply plot rtm thlm and i believe that this is what is done for other specialized nightly tests e g the restart test and some of the altered grid tests perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs however we have a deadline on the twp ice case so don t bother with this until twp ice is submitted unless it is trivial to fix attachments migrated from json status closed changetime description some time ago in order to test clubb s scalars brandon changed plotgen so that it outputs scalars in place of rtm and thlm the nightly plots work great n nhowever if clubb is run manually without outputting scalars and then plotgen is executed manually then rtm thlm and are set to zero for manual runs typically we don t want to check scalars we just want to plot standard versions of rtm thlm and i probably forgot to mention this earlier n nis it feasible to insert some nightly flags or re arrange some code so that the nightly plots test the scalars but the manual plots simply plot rtm thlm and i believe that this is what is done for other specialized nightly tests e g the restart test and some of the altered grid tests perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs n nhowever we have a deadline on the twp ice case so don t bother with this until twp ice is submitted unless it is trivial to fix reporter vlarson uwm edu cc resolution verified by v larson ts component post processing summary rtm thlm and are set to zero when plotgen is run manually but not for the nightly tests priority minor keywords scalars nightly plots rtm thlm time milestone plotgen owner senkbeil uwm edu type enhancement
| 1
|
77,690
| 27,109,984,831
|
IssuesEvent
|
2023-02-15 14:43:15
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
closed
|
I cant login from FtueAuthCombinedLoginFragment!
|
T-Defect
|
### Steps to reproduce
I am changing this line `true -> FtueAuthSplashCarouselFragment::class.java` to `true -> FtueAuthCombinedLoginFragment::class.java` in `FtueAuthVariant class` but when I am using from app get me bellow error:
Invalid username or password
But I login from `FtueAuthSplashCarouselFragment` good and don't have any problem!
What can I do?
### Outcome
#### What did you expect?
#### What happened instead?
### Your phone model
Samsung S6
### Operating system version
Android Version 7.0
### Application version and app store
_No response_
### Homeserver
_No response_
### Will you send logs?
Yes
### Are you willing to provide a PR?
Yes
|
1.0
|
I cant login from FtueAuthCombinedLoginFragment! - ### Steps to reproduce
I am changing this line `true -> FtueAuthSplashCarouselFragment::class.java` to `true -> FtueAuthCombinedLoginFragment::class.java` in `FtueAuthVariant class` but when I am using from app get me bellow error:
Invalid username or password
But I login from `FtueAuthSplashCarouselFragment` good and don't have any problem!
What can I do?
### Outcome
#### What did you expect?
#### What happened instead?
### Your phone model
Samsung S6
### Operating system version
Android Version 7.0
### Application version and app store
_No response_
### Homeserver
_No response_
### Will you send logs?
Yes
### Are you willing to provide a PR?
Yes
|
non_process
|
i cant login from ftueauthcombinedloginfragment steps to reproduce i am changing this line true ftueauthsplashcarouselfragment class java to true ftueauthcombinedloginfragment class java in ftueauthvariant class but when i am using from app get me bellow error invalid username or password but i login from ftueauthsplashcarouselfragment good and don t have any problem what can i do outcome what did you expect what happened instead your phone model samsung operating system version android version application version and app store no response homeserver no response will you send logs yes are you willing to provide a pr yes
| 0
|
392,412
| 11,590,972,334
|
IssuesEvent
|
2020-02-24 08:21:46
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
support.mozilla.org - see bug description
|
browser-firefox engine-gecko form-v2-experiment priority-important
|
<!-- @browser: Firefox 74.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/48826 -->
<!-- @extra_labels: form-v2-experiment -->
**URL**: https://support.mozilla.org/en-US/kb/refresh-firefox-reset-add-ons-and-settings?redirectlocale=en-US&redirectslug=reset-firefox-easily-fix-most-problems
**Browser / Version**: Firefox 74.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: scroll down problem
**Steps to Reproduce**:
scroll down not coming in this browser
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/2/bc5f9a0f-6d14-44a5-be56-f50a3e3bf02e.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200221001238</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/2/02e9ad5e-7628-4e00-a790-f1715beeaa06)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
support.mozilla.org - see bug description - <!-- @browser: Firefox 74.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/48826 -->
<!-- @extra_labels: form-v2-experiment -->
**URL**: https://support.mozilla.org/en-US/kb/refresh-firefox-reset-add-ons-and-settings?redirectlocale=en-US&redirectslug=reset-firefox-easily-fix-most-problems
**Browser / Version**: Firefox 74.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: scroll down problem
**Steps to Reproduce**:
scroll down not coming in this browser
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/2/bc5f9a0f-6d14-44a5-be56-f50a3e3bf02e.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200221001238</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/2/02e9ad5e-7628-4e00-a790-f1715beeaa06)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
support mozilla org see bug description url browser version firefox operating system windows tested another browser no problem type something else description scroll down problem steps to reproduce scroll down not coming in this browser view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
19,581
| 25,904,878,355
|
IssuesEvent
|
2022-12-15 09:27:16
|
UserOfficeProject/user-office-project-issue-tracker
|
https://api.github.com/repos/UserOfficeProject/user-office-project-issue-tracker
|
closed
|
Extract PIs and Co-Is to SharePoint Experiment Team list for ISIS Direct
|
origin: project type: process ops: comms area: uop/stfc
|
We need to run a one time export of PIs and Co-Is to the SharePoint Experiment Team list for ISIS Direct. This is to allow time for the scheduler team to do https://github.com/isisbusapps/ERA/issues/1352.
We should update S&O once this is released so that they can update any messages.
|
1.0
|
Extract PIs and Co-Is to SharePoint Experiment Team list for ISIS Direct - We need to run a one time export of PIs and Co-Is to the SharePoint Experiment Team list for ISIS Direct. This is to allow time for the scheduler team to do https://github.com/isisbusapps/ERA/issues/1352.
We should update S&O once this is released so that they can update any messages.
|
process
|
extract pis and co is to sharepoint experiment team list for isis direct we need to run a one time export of pis and co is to the sharepoint experiment team list for isis direct this is to allow time for the scheduler team to do we should update s o once this is released so that they can update any messages
| 1
|
172,826
| 21,054,870,119
|
IssuesEvent
|
2022-04-01 01:24:39
|
peterwkc85/Spring_petclinic
|
https://api.github.com/repos/peterwkc85/Spring_petclinic
|
opened
|
CVE-2022-22965 (High) detected in spring-beans-5.1.6.RELEASE.jar
|
security vulnerability
|
## CVE-2022-22965 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.1.6.RELEASE.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /spring-petclinic-master/spring-petclinic-master/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/springframework/spring-beans/5.1.6.RELEASE/spring-beans-5.1.6.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-cache-2.1.4.RELEASE.jar (Root Library)
- spring-context-support-5.1.6.RELEASE.jar
- :x: **spring-beans-5.1.6.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework before 5.2.20 and 5.3.x before 5.3.18 are vulnerable due to a vulnerability in Spring-beans which allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution). Please note that the ease of exploitation may diverge by the code implementation.Currently, the exploit requires JDK 9 or higher, Apache Tomcat as the Servlet container, the application Packaged as WAR, and dependency on spring-webmvc or spring-webflux. Spring Framework 5.3.18 and 5.2.20 have already been released. WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-beans:5.2.20.RELEASE,5.3.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-22965 (High) detected in spring-beans-5.1.6.RELEASE.jar - ## CVE-2022-22965 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.1.6.RELEASE.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /spring-petclinic-master/spring-petclinic-master/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/springframework/spring-beans/5.1.6.RELEASE/spring-beans-5.1.6.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-cache-2.1.4.RELEASE.jar (Root Library)
- spring-context-support-5.1.6.RELEASE.jar
- :x: **spring-beans-5.1.6.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework before 5.2.20 and 5.3.x before 5.3.18 are vulnerable due to a vulnerability in Spring-beans which allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution). Please note that the ease of exploitation may diverge by the code implementation.Currently, the exploit requires JDK 9 or higher, Apache Tomcat as the Servlet container, the application Packaged as WAR, and dependency on spring-webmvc or spring-webflux. Spring Framework 5.3.18 and 5.2.20 have already been released. WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-beans:5.2.20.RELEASE,5.3.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in spring beans release jar cve high severity vulnerability vulnerable library spring beans release jar spring beans library home page a href path to dependency file spring petclinic master spring petclinic master pom xml path to vulnerable library root repository org springframework spring beans release spring beans release jar dependency hierarchy spring boot starter cache release jar root library spring context support release jar x spring beans release jar vulnerable library vulnerability details spring framework before and x before are vulnerable due to a vulnerability in spring beans which allows attackers under certain circumstances to achieve remote code execution this vulnerability is also known as ״ ״ or ״springshell״ the current poc related to the attack is done by creating a specially crafted request which manipulates classloader to successfully achieve rce remote code execution please note that the ease of exploitation may diverge by the code implementation currently the exploit requires jdk or higher apache tomcat as the servlet container the application packaged as war and dependency on spring webmvc or spring webflux spring framework and have already been released whitesource s research team is carefully observing developments and researching the case we will keep updating this page and our whitesource resources with updates publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring beans release step up your open source security game with whitesource
| 0
|
20,982
| 27,846,576,202
|
IssuesEvent
|
2023-03-20 15:51:03
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Snyk code check failing
|
bug security process
|
### Description
The snyk code workflow is [failing](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4452221619/jobs/7819675560).
```
✗ [High] Cross-site Scripting (XSS)
Path: hedera-mirror-rest/monitoring/monitor_apis/server.js, line 86
Info: Unsanitized input from an HTTP parameter flows into send, where it is used to render an HTML page returned to the user. This may result in a Cross-Site Scripting attack (XSS).
✗ [High] Use of Hardcoded, Security-relevant Constants
Path: hedera-mirror-monitor/src/main/java/com/hedera/mirror/monitor/OperatorProperties.java, line 36
Info: Avoid hardcoding values that are meant to be secret. Found hardcoded secret.
```
Once these code issues are fixed or suppressed, the workflow fails because it expects the snyk report to always exist. But Snyk does not generate a report at all if no issues exceed the high severity threshold.
### Steps to reproduce
Run snyk
### Additional context
_No response_
### Hedera network
other
### Version
main
### Operating system
None
|
1.0
|
Snyk code check failing - ### Description
The snyk code workflow is [failing](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4452221619/jobs/7819675560).
```
✗ [High] Cross-site Scripting (XSS)
Path: hedera-mirror-rest/monitoring/monitor_apis/server.js, line 86
Info: Unsanitized input from an HTTP parameter flows into send, where it is used to render an HTML page returned to the user. This may result in a Cross-Site Scripting attack (XSS).
✗ [High] Use of Hardcoded, Security-relevant Constants
Path: hedera-mirror-monitor/src/main/java/com/hedera/mirror/monitor/OperatorProperties.java, line 36
Info: Avoid hardcoding values that are meant to be secret. Found hardcoded secret.
```
Once these code issues are fixed or suppressed, the workflow fails because it expects the snyk report to always exist. But Snyk does not generate a report at all if no issues exceed the high severity threshold.
### Steps to reproduce
Run snyk
### Additional context
_No response_
### Hedera network
other
### Version
main
### Operating system
None
|
process
|
snyk code check failing description the snyk code workflow is ✗ cross site scripting xss path hedera mirror rest monitoring monitor apis server js line info unsanitized input from an http parameter flows into send where it is used to render an html page returned to the user this may result in a cross site scripting attack xss ✗ use of hardcoded security relevant constants path hedera mirror monitor src main java com hedera mirror monitor operatorproperties java line info avoid hardcoding values that are meant to be secret found hardcoded secret once these code issues are fixed or suppressed the workflow fails because it expects the snyk report to always exist but snyk does not generate a report at all if no issues exceed the high severity threshold steps to reproduce run snyk additional context no response hedera network other version main operating system none
| 1
|
8,194
| 11,393,928,941
|
IssuesEvent
|
2020-01-30 08:08:17
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
ntr: effector-mediated modulation of host process by symbiont
|
multi-species process
|
I also need an additional parent to
GO:0140404 JSON
effector-mediated modulation of host immune response by symbiont
effector-mediated modulation of host process by symbiont
my collaborators pointed out that many effectors are not manipulating defenses (for example, effectors inducing ribotoxic stress)
|
1.0
|
ntr: effector-mediated modulation of host process by symbiont -
I also need an additional parent to
GO:0140404 JSON
effector-mediated modulation of host immune response by symbiont
effector-mediated modulation of host process by symbiont
my collaborators pointed out that many effectors are not manipulating defenses (for example, effectors inducing ribotoxic stress)
|
process
|
ntr effector mediated modulation of host process by symbiont i also need an additional parent to go json effector mediated modulation of host immune response by symbiont effector mediated modulation of host process by symbiont my collaborators pointed out that many effectors are not manipulating defenses for example effectors inducing ribotoxic stress
| 1
|
11,331
| 14,144,527,413
|
IssuesEvent
|
2020-11-10 16:33:25
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Dynamically evaluating conditions
|
Pri2 devops-cicd-process/tech devops/prod product-question
|
Hi,
No issue with the article, just a question regarding the evaluation of conditions.
In Azure Devops you have conditions like the following:
`condition: and(contains(variables['build.sourceBranch'], 'refs/heads/master'), succeeded())`
I'm looking to implement something similar to this in a project and wondered if this is a standard syntax?
Is this part of a library that can be re-used elsewhere to dynamically evaluate conditions or is this something unique to Azure Devops?
It sort of looks like Powershell, but not sure how the condition and variables are evaluated.
Anyone mind providing some insight if allowed?
Thanks.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f151218-9a11-0078-e038-f96198a76143
* Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Dynamically evaluating conditions - Hi,
No issue with the article, just a question regarding the evaluation of conditions.
In Azure Devops you have conditions like the following:
`condition: and(contains(variables['build.sourceBranch'], 'refs/heads/master'), succeeded())`
I'm looking to implement something similar to this in a project and wondered if this is a standard syntax?
Is this part of a library that can be re-used elsewhere to dynamically evaluate conditions or is this something unique to Azure Devops?
It sort of looks like Powershell, but not sure how the condition and variables are evaluated.
Anyone mind providing some insight if allowed?
Thanks.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f151218-9a11-0078-e038-f96198a76143
* Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
dynamically evaluating conditions hi no issue with the article just a question regarding the evaluation of conditions in azure devops you have conditions like the following condition and contains variables refs heads master succeeded i m looking to implement something similar to this in a project and wondered if this is a standard syntax is this part of a library that can be re used elsewhere to dynamically evaluate conditions or is this something unique to azure devops it sort of looks like powershell but not sure how the condition and variables are evaluated anyone mind providing some insight if allowed thanks document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
31,958
| 7,470,353,267
|
IssuesEvent
|
2018-04-03 04:25:37
|
City-Bureau/city-scrapers
|
https://api.github.com/repos/City-Bureau/city-scrapers
|
closed
|
Board of Education Description Field
|
code: bug report good first issue help wanted
|
The Chicago Board of Education spider currently doesn't have description info coming through—can we add this text to that field?
"The Chicago Board of Education is responsible for the governance, organizational and financial oversight of Chicago Public Schools (CPS), the third largest school district in the United States of America."
|
1.0
|
Board of Education Description Field - The Chicago Board of Education spider currently doesn't have description info coming through—can we add this text to that field?
"The Chicago Board of Education is responsible for the governance, organizational and financial oversight of Chicago Public Schools (CPS), the third largest school district in the United States of America."
|
non_process
|
board of education description field the chicago board of education spider currently doesn t have description info coming through—can we add this text to that field the chicago board of education is responsible for the governance organizational and financial oversight of chicago public schools cps the third largest school district in the united states of america
| 0
|
20,080
| 26,576,186,448
|
IssuesEvent
|
2023-01-21 21:10:43
|
serai-dex/serai
|
https://api.github.com/repos/serai-dex/serai
|
opened
|
Use Ethereum as a checkpointing system
|
feature improvement cryptography processor node
|
Right now, for a multisig key K, K is supposed to be chain-bound by the processor. Ideally, we do the chain binding before publication to Substrate so clients don't need to replicate it locally.
If after we publish it to Substrate, we then apply an additive offset of the block confirming the key, publishing that to the connected chains (or just Ethereum), we implicitly checkpoint our block hash onto Ethereum. To acquire the same key history in the contract, for a different block hash, would require a key which is `K + hG = K2 + h2G` (`K + hG - h2G = K2`). The issue is choosing K2 changes h2, making such a task impossible. The only way to do it would be to replay the same block, which would have no meaningful effect.
There's two parts to this.
1) Offsetting the key.
2) Writing a way for the Substrate node to verify synced blocks against the Ethereum transitions.
The first is trivial. The second will be put off till a later date.
This is possible on any connected with chain with sufficient security. Ethereum is simply the easiest to observe historical transitions from, not to mention incredibly secure. This prevents long-range and eclipse attacks, decreasing the time required to unbond, while greatly improving light client safety.
|
1.0
|
Use Ethereum as a checkpointing system - Right now, for a multisig key K, K is supposed to be chain-bound by the processor. Ideally, we do the chain binding before publication to Substrate so clients don't need to replicate it locally.
If after we publish it to Substrate, we then apply an additive offset of the block confirming the key, publishing that to the connected chains (or just Ethereum), we implicitly checkpoint our block hash onto Ethereum. To acquire the same key history in the contract, for a different block hash, would require a key which is `K + hG = K2 + h2G` (`K + hG - h2G = K2`). The issue is choosing K2 changes h2, making such a task impossible. The only way to do it would be to replay the same block, which would have no meaningful effect.
There's two parts to this.
1) Offsetting the key.
2) Writing a way for the Substrate node to verify synced blocks against the Ethereum transitions.
The first is trivial. The second will be put off till a later date.
This is possible on any connected with chain with sufficient security. Ethereum is simply the easiest to observe historical transitions from, not to mention incredibly secure. This prevents long-range and eclipse attacks, decreasing the time required to unbond, while greatly improving light client safety.
|
process
|
use ethereum as a checkpointing system right now for a multisig key k k is supposed to be chain bound by the processor ideally we do the chain binding before publication to substrate so clients don t need to replicate it locally if after we publish it to substrate we then apply an additive offset of the block confirming the key publishing that to the connected chains or just ethereum we implicitly checkpoint our block hash onto ethereum to acquire the same key history in the contract for a different block hash would require a key which is k hg k hg the issue is choosing changes making such a task impossible the only way to do it would be to replay the same block which would have no meaningful effect there s two parts to this offsetting the key writing a a way for the substrate node to verify synced blocks against the ethereum transitions the first is trivial the second will be put off till a later date this is possible on any connected with chain with sufficient security ethereum is simply the easiest to observe historical transitions from not to mention incredibly secure this prevents long range and eclipse attacks decreasing the time required to unbond while greatly improving light client safety
| 1
|
164,997
| 12,825,259,737
|
IssuesEvent
|
2020-07-06 14:42:35
|
Oldes/Rebol-issues
|
https://api.github.com/repos/Oldes/Rebol-issues
|
closed
|
RANDOM/only returns NONE for series not at the HEAD
|
CC.resolved Status.important Test.written Type.bug
|
_Submitted by:_ **abolka**
RANDOM/only erratically returns NONE when passed a series which is not positioned at the head.
Originally discovered by Gregg Irwin.
``` rebol
;; showing the general effect
>> unique collect [loop 100 [keep random/only next [1 2 3]]]
== [none 3]
;; a minimal demonstration, which always returns a wrong value:
>> random/only next [1 2]
== none
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1875)** [ Version: alpha 111 Type: Bug Platform: All Category: Native Reproduce: Always Fixed-in:r3 master ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1875</sup>
Comments:
---
> **Rebolbot** commented on Dec 17, 2012:
_Submitted by:_ **DanLee**
Fixed in my GitHub repo. Will send a pull request.
---
> **Rebolbot** commented on Dec 18, 2012:
_Submitted by:_ **abolka**
Test added to the core-tests suite.
---
> **Rebolbot** commented on Dec 23, 2012:
_Submitted by:_ **abolka**
Fixed in af37b35ac362dbcefd484936b874b30217b3346d, merged to mainline in 404dd93a37164826791a3a7a68a21cb8005df413.
---
> **Rebolbot** commented on Dec 23, 2012:
_Submitted by:_ **abolka**
Tested on OSX, Win32, Linux.
---
> **Rebolbot** added on Jan 12, 2016
---
|
1.0
|
RANDOM/only returns NONE for series not at the HEAD - _Submitted by:_ **abolka**
RANDOM/only erratically returns NONE when passed a series which is not positioned at the head.
Originally discovered by Gregg Irwin.
``` rebol
;; showing the general effect
>> unique collect [loop 100 [keep random/only next [1 2 3]]]
== [none 3]
;; a minimal demonstration, which always returns a wrong value:
>> random/only next [1 2]
== none
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1875)** [ Version: alpha 111 Type: Bug Platform: All Category: Native Reproduce: Always Fixed-in:r3 master ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1875</sup>
Comments:
---
> **Rebolbot** commented on Dec 17, 2012:
_Submitted by:_ **DanLee**
Fixed in my GitHub repo. Will send a pull request.
---
> **Rebolbot** commented on Dec 18, 2012:
_Submitted by:_ **abolka**
Test added to the core-tests suite.
---
> **Rebolbot** commented on Dec 23, 2012:
_Submitted by:_ **abolka**
Fixed in af37b35ac362dbcefd484936b874b30217b3346d, merged to mainline in 404dd93a37164826791a3a7a68a21cb8005df413.
---
> **Rebolbot** commented on Dec 23, 2012:
_Submitted by:_ **abolka**
Tested on OSX, Win32, Linux.
---
> **Rebolbot** added on Jan 12, 2016
---
|
non_process
|
random only returns none for series not at the head submitted by abolka random only erratically returns none when passed a series which is not positioned at the head originally discovered by gregg irwin rebol showing the general effect unique collect a minimal demonstration which always returns a wrong value random only next none imported from imported from comments rebolbot commented on dec submitted by danlee fixed in my github repo will send a pull request rebolbot commented on dec submitted by abolka test added to the core tests suite rebolbot commented on dec submitted by abolka fixed in merged to mainline in rebolbot commented on dec submitted by abolka tested on osx linux rebolbot added on jan
| 0
|
4,791
| 7,674,626,119
|
IssuesEvent
|
2018-05-15 05:16:36
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
State of layer reprojection operations before using Processing algorithms
|
Processing User Manual question
|
https://docs.qgis.org/testing/en/docs/user_manual/processing/toolbox.html#a-note-on-projections explains that layers should be reprojected to the same crs before processing any algorithm.
I remember some discussion (or PR) mentioning that some algs no longer need it.
As I could not find any information on this in the repo, I'd like to know if that's really the case. Does it apply to all algs or only some of them (what then makes the difference: alg providers, datasource...)?
IOW, should we keep this note in the doc, remove it, or add some precision?
Any information is more than welcome.. Thanks
@nyalldawson @alexbruy @ghtmtt... ?
|
1.0
|
State of layer reprojection operations before using Processing algorithms - https://docs.qgis.org/testing/en/docs/user_manual/processing/toolbox.html#a-note-on-projections explains that layers should be reprojected to the same crs before processing any algorithm.
I remember some discussion (or PR) mentioning that some algs no longer need it.
As I could not find any information on this in the repo, I'd like to know if that's really the case. Does it apply to all algs or only some of them (what then makes the difference: alg providers, datasource...)?
IOW, should we keep this note in the doc, remove it, or add some precision?
Any information is more than welcome.. Thanks
@nyalldawson @alexbruy @ghtmtt... ?
|
process
|
state of layer reprojection operations before using processing algorithms explains that layers should be reprojected to the same crs before processing any algorithm i remember some discussion or pr that mentions that some algs no more needs it as i could not find any information on this in the repo i d like to know if it s really the case does it apply to all algs or only some of them what then makes the difference alg providers datasource iow should we keep this note in the doc remove it or add some precision any information is more than welcome thanks nyalldawson alexbruy ghtmtt
| 1
|
69,730
| 9,330,228,890
|
IssuesEvent
|
2019-03-28 06:06:25
|
s-fleck/lgr
|
https://api.github.com/repos/s-fleck/lgr
|
closed
|
logger threshold per logger
|
documentation invalid
|
Sorry, maybe I'm missing something again, but I think it will be beneficial to be able to specify per-logger thresholds. Now it seems the root logger amends individual loggers' thresholds.
Reproduce:
```r
# remotes::install_github('dselivanov/rsparse')
library(rsparse)
logger = lgr::get_logger('rsparse')
x = matrix(rnorm(100 * 100))
res = soft_svd(x, 10)
```
Works (reduce verbosity):
```r
logger$set_threshold('warn')
res = soft_svd(x, 10)
```
Doesn't work (doesn't increase verbosity):
```r
logger$set_threshold('trace')
res = soft_svd(x, 10)
```
|
1.0
|
logger threshold per logger - Sorry, may be I'm missing something again, but I think it will be beneficial to be able to specify per logger thresholds. Now it seems root logger amend individual loggers threshold.
Reproduce:
```r
# remotes::install_github('dselivanov/rsparse')
library(rsparse)
logger = lgr::get_logger('rsparse')
x = matrix(rnorm(100 * 100))
res = soft_svd(x, 10)
```
Works (reduce verbosity):
```r
logger$set_threshold('warn')
res = soft_svd(x, 10)
```
Doesn't work (doesn't increase verbosity):
```r
logger$set_threshold('trace')
res = soft_svd(x, 10)
```
|
non_process
|
logger threshold per logger sorry may be i m missing something again but i think it will be beneficial to be able to specify per logger thresholds now it seems root logger amend individual loggers threshold reproduce r remotes install github dselivanov rsparse library rsparse logger lgr get logger rsparse x matrix rnorm res soft svd x works reduce verbosity r logger set threshold warn res soft svd x doesn t work doesn t increase verbosity r logger set threshold trace res soft svd x
| 0
|
67,070
| 3,265,968,961
|
IssuesEvent
|
2015-10-22 18:34:23
|
der-On/XPlane2Blender
|
https://api.github.com/repos/der-On/XPlane2Blender
|
closed
|
Fix Bone matrices
|
priority high
|
here are the steps to get the simple bone animation test case working:
`python tests.py --filter bone_animations --debug`
Resulting `test_bone_animations.obj` will be written to `/tests/tmp`
Source blender file is under `/tests/animations/bone_animations.test.blend`
Relevant code can be found from here on:
https://github.com/der-On/XPlane2Blender/blob/3e3ed50de62c62baf03519e4417ee768d9d1dba2/io_xplane2blender/xplane_types/xplane_bone.py#L192
|
1.0
|
Fix Bone matrices - here are the steps to get the simple bone animation test case working:
`python tests.py --filter bone_animations --debug`
Resulting `test_bone_animations.obj` will be written to `/tests/tmp`
Source blender file is under `/tests/animations/bone_animations.test.blend`
Relevant code can be found from here on:
https://github.com/der-On/XPlane2Blender/blob/3e3ed50de62c62baf03519e4417ee768d9d1dba2/io_xplane2blender/xplane_types/xplane_bone.py#L192
|
non_process
|
fix bone matrices here are the steps to get the simple bone animation test case working python tests py filter bone animations debug resulting test bone animations obj will be written to tests tmp source blender file is under tests animations bone animations test blend relevant code can be found from here on
| 0
|
16,380
| 21,102,548,788
|
IssuesEvent
|
2022-04-04 15:38:32
|
googleapis/python-bigquery-pandas
|
https://api.github.com/repos/googleapis/python-bigquery-pandas
|
closed
|
document pandas-gbq vision and roadmap
|
type: process api: bigquery
|
Both pandas-gbq and [google-cloud-bigquery](https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery) are doing many of the same things, and increasingly so (e.g. `.to_dataframe()` in google-cloud-bigquery)
- Are there different use cases? Can we define those?
- Should we focus development on one and wrap the other? Even if not wholly, for a subset of functionality?
- Is there some direction from Google? @tswast spends a lot of time on both libraries so he is probably best placed to offer guidance
|
1.0
|
document pandas-gbq vision and roadmap - Both pandas-gbq and [google-cloud-bigquery](https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery) are doing many of the same things, and increasingly so (e.g. `.to_dataframe()` in google-cloud-bigquery)
- Are there different use cases? Can we define those?
- Should we focus development on one and wrap the other? Even if not wholly, for a subset of functionality?
- Is there some direction from Google? @tswast spends a lot of time on both libraries so he is probably best placed to offer guidance
|
process
|
document pandas gbq vision and roadmap both pandas gbq and are doing many of the same things and increasingly so e g to dataframe in google cloud bigquery are there different use cases can we define those should we focus development on one and wrap the other even if not wholly for a subset of functionality is there some direction from google tswast spends a lot of time on both libraries so he is probably best placed to offer guidance
| 1
|
46,322
| 9,923,253,881
|
IssuesEvent
|
2019-07-01 06:40:02
|
petershirley/raytracinginoneweekend
|
https://api.github.com/repos/petershirley/raytracinginoneweekend
|
closed
|
Unused variable p in main.cc
|
book code
|
p is never used in main.cc
89: vec3 p = r.point_at_parameter(2.0);
90: col += color(r, world,0);
should the r in 90: be p ?
|
1.0
|
Unused variable p in main.cc - p is never used in main.cc
89: vec3 p = r.point_at_parameter(2.0);
90: col += color(r, world,0);
should the r in 90: be p ?
|
non_process
|
unused variable p in main cc p is never used in main cc p r point at parameter col color r world should the r in be p
| 0
|
5,384
| 8,211,414,690
|
IssuesEvent
|
2018-09-04 13:46:17
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Work around volumedriver lockup when removing MDS slaves of removed volumes
|
process_wontfix
|
https://github.com/openvstorage/volumedriver-ee/issues/79
After volume removal, MDS slaves self-destruct once the absence of the backend namespace is detected (during the periodic backend poll). If at the same time an MDS slave is removed via the Python API, there's a potential for a lockup.
|
1.0
|
Work around volumedriver lockup when removing MDS slaves of removed volumes - https://github.com/openvstorage/volumedriver-ee/issues/79
After volume removal MDS slaves self-destruct once the absence of the backend namespace is detected (during the periodic backend poll). If at the same time an MDS slave is removed via the Python API there's a potential for a lockup.
|
process
|
work around volumedriver lockup when removing mds slaves of removed volumes after volume removal mds slaves self destruct once the absence of the backend namespace is detected during the periodic backend poll if at the same time an mds slave is removed via the python api there s a potential for a lockup
| 1
|
10,996
| 13,786,885,453
|
IssuesEvent
|
2020-10-09 03:12:17
|
nion-software/nionswift
|
https://api.github.com/repos/nion-software/nionswift
|
opened
|
Automatically disambiguate source graphics with label #'s and/or colors
|
f - graphics f - processing stage - planning type - enhancement
|
If multiple picks are placed on a source image, it would be helpful to have the different pick regions labeled Pick 1, Pick 2, etc. Or maybe use a different color for each that is also seen in the line plot somewhere.
|
1.0
|
Automatically disambiguate source graphics with label #'s and/or colors - If multiple picks are placed on a source image, it would be helpful to have the different pick regions labeled Pick 1, Pick 2, etc. Or maybe use a different color for each that is also seen in the line plot somewhere.
|
process
|
automatically disambiguate source graphics with label s and or colors if multiple picks are placed on a source image it would be helpful to have the different pick regions labeled pick pick etc or maybe use a different color for each that is also seen in the line plot somewhere
| 1
|
4,843
| 7,738,494,960
|
IssuesEvent
|
2018-05-28 12:17:15
|
SharePoint/PnP-PowerShell
|
https://api.github.com/repos/SharePoint/PnP-PowerShell
|
closed
|
Set-PnPClientSidePage is removing WebParts
|
Needs investigation To be processed
|
### Reporting an Issue or Missing Feature
On a newly created team site, removing commenting on the homepage via "Set-PnPClientSidePage -Identity "Home.aspx" -CommentsEnabled:$false -Publish:$true" disables commenting, but also removes the existing News and Site Activity Webparts.
### Expected behavior
News and Activity webparts are still there, commenting is disabled.
### Actual behavior
Webparts are gone, commenting is disabled.
### Steps to reproduce behavior
- Create a new team site
- Issue command "Set-PnPClientSidePage -Identity "Home.aspx" -CommentsEnabled:$false -Publish:$true"
### Which version of the PnP-PowerShell Cmdlets are you using?
- [ ] PnP PowerShell for SharePoint 2013
- [ ] PnP PowerShell for SharePoint 2016
- [ x] PnP PowerShell for SharePoint Online
### What is the version of the Cmdlet module you are running?
2.24.1803.0
### How did you install the PnP-PowerShell Cmdlets?
- [ ] MSI Installed downloaded from GitHub
- [ x] Installed through the PowerShell Gallery with Install-Module
- [ ] Other means
|
1.0
|
Set-PnPClientSidePage is removing WebParts - ### Reporting an Issue or Missing Feature
On a newly created team site, removing commenting on the homepage via "Set-PnPClientSidePage -Identity "Home.aspx" -CommentsEnabled:$false -Publish:$true" disables commenting, but also removes the existing News and Site Activity Webparts.
### Expected behavior
News and Activity webparts are still there, commenting is disabled.
### Actual behavior
Webparts are gone, commenting is disabled.
### Steps to reproduce behavior
- Create a new team site
- Issue command "Set-PnPClientSidePage -Identity "Home.aspx" -CommentsEnabled:$false -Publish:$true"
### Which version of the PnP-PowerShell Cmdlets are you using?
- [ ] PnP PowerShell for SharePoint 2013
- [ ] PnP PowerShell for SharePoint 2016
- [x] PnP PowerShell for SharePoint Online
### What is the version of the Cmdlet module you are running?
2.24.1803.0
### How did you install the PnP-PowerShell Cmdlets?
- [ ] MSI Installed downloaded from GitHub
- [x] Installed through the PowerShell Gallery with Install-Module
- [ ] Other means
|
process
|
set pnpclientsidepage is removing webparts reporting an issue or missing feature on a newly created team site removing commenting on the homepage via set pnpclientsidepage identity home aspx commentsenabled false publish true disables commenting but also removes the existing news and site activity webparts expected behavior news and activity webparts are still there commenting is disabled actual behavior webparts are gone commenting is disabled steps to reproduce behavior create a new team site issue command set pnpclientsidepage identity home aspx commentsenabled false publish true which version of the pnp powershell cmdlets are you using pnp powershell for sharepoint pnp powershell for sharepoint pnp powershell for sharepoint online what is the version of the cmdlet module you are running how did you install the pnp powershell cmdlets msi installed downloaded from github installed through the powershell gallery with install module other means
| 1
|
2,157
| 5,006,292,584
|
IssuesEvent
|
2016-12-12 13:42:57
|
openvstorage/volumedriver
|
https://api.github.com/repos/openvstorage/volumedriver
|
closed
|
`fs_dtl_config_mode Automatic` causes volume creation to take a long time
|
process_wontfix type_bug
|
We've observed this issue on a gig environment. When performing a `truncate -s 10G test.raw`, the volume creation was hanging for approximately 5 minutes, as seen in the log files.
If we take a closer look you can see that the volume creation is stuck on `Setting the failover cache to foc://10.109.3.43:26202,Asynchronous`
```
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 031835 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VFSObjectRouter - 0000000000006142 - info -
create: Creating Volume-ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 031888 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/ObjectRegistry - 0000000000006143 - info -
register_base_volume: e4d8d399-0865-4b5f-994c-4e73cf461990/vmstorz5Hggc9rHMKf9Xam: registering ae95ac8b-aae8-49c4-8005-57de93bceccc, namespace ae95ac8b-aae8-49c4-8005-57de93bceccc, foc config mode
Automatic
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 035543 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/LockedArakoon - 0000000000006144 - info - r
un_sequence: updating counter succeeded after 1 attempt(s)
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039162 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/LockedArakoon - 0000000000006145 - info - r
un_sequence: register basic volume succeeded after 1 attempt(s)
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039216 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolManager - 0000000000006146 - notice - Ne
w Volume, VolumeId: ae95ac8b-aae8-49c4-8005-57de93bceccc, Namespace: ae95ac8b-aae8-49c4-8005-57de93bceccc, Size: 0, CreateNamespace: CreateNamespace::T, START
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039266 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006147 - info - Logger: Entering namespaceExists ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039839 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006148 - info - ~Logger: Exiting namespaceExists for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039865 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006149 - info - Logger: Entering createNamespace ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 209745 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
0000614a - info - ~Logger: Exiting createNamespace for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 209873 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolManager - 000000000000614b - info - volu
mePotentialSCOCache: We have enough room for an additional 605 volumes with cluster size 4096, SCO multiplier 8192 (= SCO size 33554432), TLog multiplier 2
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 210212 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
0000614c - info - Logger: Entering write_tag ae95ac8b-aae8-49c4-8005-57de93bceccc owner_tag
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 103854 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
0000614d - info - ~Logger: Exiting write_tag for ae95ac8b-aae8-49c4-8005-57de93bceccc owner_tag
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 103948 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
0000614e - info - ~Logger: Exiting write_tag for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 104135 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/SnapshotPersistor - 000000000000614f - info
- newTLog: Starting new TLog tlog_e20a31ea-7489-443c-ae5d-110a30c85346
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107325 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006150 - info
- MDSMetaDataStore: ae95ac8b-aae8-49c4-8005-57de93bceccc: home "/mnt/ssd2/vmstor_db_md_1/ae95ac8b-aae8-49c4-8005-57de93bceccc", cache capacity (pages) 8192, apply scrub results to slaves: ApplyRelo
cationsToSlaves::T, MDS timeout: 20 secs
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107375 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006151 - info
- MDSMetaDataStore: MDS nodes:
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107402 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006152 - info
- MDSMetaDataStore: mds://10.109.2.42:26300
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107419 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006153 - info
- connect_: ae95ac8b-aae8-49c4-8005-57de93bceccc: connecting to mds://10.109.2.42:26300
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107438 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackendHelpers - 000000000000615
4 - info - make_db: mds://10.109.2.42:26300: running in-process, using fast path
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107579 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/RocksLogger - 0000000000006155 - info - /mn
t/ssd2/vmstor_db_mds_1: (skipping printing options)
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107980 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/RocksLogger - 0000000000006156 - info - /mn
t/ssd2/vmstor_db_mds_1: Created column family [ae95ac8b-aae8-49c4-8005-57de93bceccc] (ID 1621)
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108029 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerRocksTable - 0000000000006157
- info - RocksTable: ae95ac8b-aae8-49c4-8005-57de93bceccc: creating table
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108049 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerTable - 0000000000006158 - in
fo - Table: ae95ac8b-aae8-49c4-8005-57de93bceccc: new table, scratch dir "/mnt/ssd2/vmstor_db_mds_1/ae95ac8b-aae8-49c4-8005-57de93bceccc"
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108065 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerTable - 0000000000006159 - in
fo - start_: ae95ac8b-aae8-49c4-8005-57de93bceccc: starting periodic background check
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108102 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 000000000000615a - inf
o - MDSMetaDataBackend: ae95ac8b-aae8-49c4-8005-57de93bceccc: using mds://10.109.2.42:26300, owner tag 2691
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108162 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MetaDataServerTable - 000000000000615b - in
fo - work_: ae95ac8b-aae8-49c4-8005-57de93bceccc: running periodic action
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108218 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 000000000000615c - inf
o - MDSMetaDataBackend: ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108321 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 000000000000615d - inf
o - lastCorkUUID: ae95ac8b-aae8-49c4-8005-57de93bceccc: --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108345 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 000000000000615e - inf
o - scrub_id: ae95ac8b-aae8-49c4-8005-57de93bceccc: scrub ID --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108381 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/CachedMetaDataStore - 000000000000615f - in
fo - init_pages_: ae95ac8b-aae8-49c4-8005-57de93bceccc: page capacity (entries): 64, max cached pages: 256
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108487 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MetaDataStoreBuilder - 0000000000006160 - i
nfo - update_metadata_store_: ae95ac8b-aae8-49c4-8005-57de93bceccc: bringing MetaDataStore in sync with backend, requested interval (--, --], check scrub ID: CheckScrubId::F, dry run:DryRun::F, ful
l rebuild: false
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108598 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006161 - info - Logger: Entering read ae95ac8b-aae8-49c4-8005-57de93bceccc snapshots.xml
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109348 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/AlbaConnection - 0000000000006162 - error -
convert_exceptions_: read object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.ObjectDoesNotExist
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109514 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006163 - error - ~Logger: Exiting read for ae95ac8b-aae8-49c4-8005-57de93bceccc snapshots.xml with exception
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109544 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006164 - error - ~Logger: Exiting read for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109664 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendInterface - 0000000000006165 - error
- fillObject: Problem getting snapshots.xml from ae95ac8b-aae8-49c4-8005-57de93bceccc: BackendException: object does not exist
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109835 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 0000000000006166 - inf
o - ~MDSMetaDataBackend: ae95ac8b-aae8-49c4-8005-57de93bceccc: used clusters: 0
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109873 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/PeriodicActionPoolTask - 0000000000006167 -
error - operator(): mds-poll-namespace-ae95ac8b-aae8-49c4-8005-57de93bceccc: caught exception: BackendException: object does not exist - ignored
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112098 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 0000000000006168 - inf
o - lastCorkUUID: ae95ac8b-aae8-49c4-8005-57de93bceccc: --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112139 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 0000000000006169 - inf
o - scrub_id: ae95ac8b-aae8-49c4-8005-57de93bceccc: scrub ID --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112714 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/CachedMetaDataStore - 000000000000616a - in
fo - init_pages_: ae95ac8b-aae8-49c4-8005-57de93bceccc: page capacity (entries): 64, max cached pages: 8192
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112733 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerTable - 000000000000616b - in
fo - set_role: ae95ac8b-aae8-49c4-8005-57de93bceccc: transition from SLAVE to MASTER requested, owner tag 2691
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112807 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cbbfff700 - volumedriverfs/PeriodicActionPoolTask - 000000000000616c -
info - operator(): mds-poll-namespace-ae95ac8b-aae8-49c4-8005-57de93bceccc: timer cancelled
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112853 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cbbfff700 - volumedriverfs/PeriodicActionPoolTask - 000000000000616d -
info - operator(): mds-poll-namespace-ae95ac8b-aae8-49c4-8005-57de93bceccc: bailing out
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112889 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolumeFactory - 000000000000616e - warning
- make_metadata_store: ae95ac8b-aae8-49c4-8005-57de93bceccc: no scrub ID present in MetaDataStore - did we crash before writing it out?
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112917 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 000000000000616f - inf
o - set_scrub_id: ae95ac8b-aae8-49c4-8005-57de93bceccc: setting scrub ID 617daae4-d093-4d51-8a92-e5227c2f8203
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 119499 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006170 - info - Volume:
ae95ac8b-aae8-49c4-8005-57de93bceccc: Constructor of volume, namespace ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 119556 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/CachedMetaDataStore - 0000000000006171 - in
fo - cork: ae95ac8b-aae8-49c4-8005-57de93bceccc: corking e20a31ea-7489-443c-ae5d-110a30c85346
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 128473 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006172 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 202996 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006173 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203088 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006174 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203314 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/SCOCacheMountPoint - 0000000000006175 - inf
o - addNamespace: "/mnt/ssd1/vmstor_write_sco_1": adding namespace ae95ac8b-aae8-49c4-8005-57de93bceccc to mountpoint
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203497 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/DataStoreNG - 0000000000006176 - info - upd
ateCurrentSCO_: created new write SCO 00_00000001_00 for Volume ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203880 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006177 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 326499 +0100 - stor-02.be-g8-4 - 10429/0x00007f6ccf7f6700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006179 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc snapshots.xml
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 326577 +0100 - stor-02.be-g8-4 - 10429/0x00007f6ccf7f6700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617a - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 331892 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617b - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 331939 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617c - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 332319 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617d - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410020 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617e - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410085 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617f - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410205 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006180 - info - setVolumeFailOverState: ae95ac8b-aae8-49c4-8005-57de93bceccc: transitioned from DEGRADED -> OK_STANDALONE
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410259 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006181 - info - register_with_cluster_cache_: ae95ac8b-aae8-49c4-8005-57de93bceccc: registered with cluster cache, owner tag 2691, cluster cache handle 0
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410274 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006182 - info - update_cluster_cache_limit_: ae95ac8b-aae8-49c4-8005-57de93bceccc: Not updating content based cluster cache limit to --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410297 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolManager - 0000000000006183 - notice - New Volume, VolumeId: ae95ac8b-aae8-49c4-8005-57de93bceccc, Namespace: ae95ac8b-aae8-49c4-8005-57de93bceccc, Size: 0, CreateNamespace: CreateNamespace::T, FINISHED
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410860 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VFSLocalNode - 0000000000006184 - info - adjust_failovercache_config_: ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410897 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VFSLocalNode - 0000000000006185 - info - do_adjust_failovercache_config_: ae95ac8b-aae8-49c4-8005-57de93bceccc: setting FailOverCacheConfig from -- -> foc://10.109.3.43:26202,Asynchronous
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410917 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006186 - info - setVolumeFailOverState: ae95ac8b-aae8-49c4-8005-57de93bceccc: transitioned from OK_STANDALONE -> DEGRADED
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410940 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/FailOverCacheAsyncBridge - 0000000000006187 - info - newCache: newCache
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 411147 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006188 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 547794 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006189 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 547865 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000618a - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 547985 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 000000000000618b - info - setFailOverCacheConfig_: ae95ac8b-aae8-49c4-8005-57de93bceccc: Setting the failover cache to foc://10.109.3.43:26202,Asynchronous
Dec 06 12:22:20 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:20 846839 +0100 - stor-02.be-g8-4 - 10429/0x00007f6bb0bf8700 - volumedriverfs/MTServer - 000000000000618c - info - work: You have 2 connections running
.
.
.
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 488921 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Socket - 00000000000062f0 - error - connect: error connecting to 10.109.3.43:26202 - connect timeout
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 492585 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/FailOverCacheAsyncBridge - 00000000000062f1 - info - newCache: newCache
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 494313 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 00000000000062f2 - info - setFailOverCacheConfig_: ae95ac8b-aae8-49c4-8005-57de93bceccc: finding out volume state
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 496361 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 00000000000062f3 - info - setVolumeFailOverState: ae95ac8b-aae8-49c4-8005-57de93bceccc: transitioned from DEGRADED -> OK_SYNC
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 496462 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/SnapshotPersistor - 00000000000062f4 - info - newTLog: Starting new TLog tlog_56a541a4-cb9f-43bc-970c-5a5afb067daa
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 499232 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/CachedMetaDataStore - 00000000000062f5 - info - cork: ae95ac8b-aae8-49c4-8005-57de93bceccc: corking 56a541a4-cb9f-43bc-970c-5a5afb067daa
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 502509 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cd17fa700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000000062f6 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc tlog_e20a31ea-7489-443c-ae5d-110a30c85346
```
After changing the volumedriver config parameter `fs_dtl_config_mode` from `Automatic` to `Manual`, the issue was resolved.
It was also noticeable that not all volumedrivers were affected by this issue, which could point to a problem connecting to the DTL.
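The log's `error connecting to 10.109.3.43:26202 - connect timeout` suggests the configured DTL endpoint simply wasn't accepting connections, and the driver blocked on its long internal timeout. As a generic diagnostic sketch (this is not part of the volumedriver codebase; the host and port below are illustrative), a short-timeout TCP probe can confirm whether a `foc://` endpoint is reachable without waiting minutes:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout seconds."""
    try:
        # create_connection handles resolution and applies the timeout to connect()
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, timeouts, and unreachable hosts
        return False

# Example: probe the DTL endpoint from the log (illustrative address)
# tcp_reachable("10.109.3.43", 26202)
```

Probing each node's DTL port this way would quickly show which volumedrivers can actually reach their configured failover cache, which would help confirm or rule out a network-level cause.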
|
1.0
|
`fs_dtl_config_mode Automatic` causes volume creation to take a long time - We've observed this issue on a gig environment. When performing a `truncate -s 10G test.raw`, the volume creation was hanging for approximately 5 minutes, as seen in the log files.
If we take a closer look you can see that the volume creation is stuck on `Setting the failover cache to foc://10.109.3.43:26202,Asynchronous`
```
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 031835 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VFSObjectRouter - 0000000000006142 - info -
create: Creating Volume-ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 031888 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/ObjectRegistry - 0000000000006143 - info -
register_base_volume: e4d8d399-0865-4b5f-994c-4e73cf461990/vmstorz5Hggc9rHMKf9Xam: registering ae95ac8b-aae8-49c4-8005-57de93bceccc, namespace ae95ac8b-aae8-49c4-8005-57de93bceccc, foc config mode
Automatic
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 035543 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/LockedArakoon - 0000000000006144 - info - r
un_sequence: updating counter succeeded after 1 attempt(s)
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039162 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/LockedArakoon - 0000000000006145 - info - r
un_sequence: register basic volume succeeded after 1 attempt(s)
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039216 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolManager - 0000000000006146 - notice - Ne
w Volume, VolumeId: ae95ac8b-aae8-49c4-8005-57de93bceccc, Namespace: ae95ac8b-aae8-49c4-8005-57de93bceccc, Size: 0, CreateNamespace: CreateNamespace::T, START
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039266 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006147 - info - Logger: Entering namespaceExists ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039839 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006148 - info - ~Logger: Exiting namespaceExists for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 039865 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
00006149 - info - Logger: Entering createNamespace ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 209745 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
0000614a - info - ~Logger: Exiting createNamespace for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 209873 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolManager - 000000000000614b - info - volu
mePotentialSCOCache: We have enough room for an additional 605 volumes with cluster size 4096, SCO multiplier 8192 (= SCO size 33554432), TLog multiplier 2
Dec 06 12:22:17 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:17 210212 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000
0000614c - info - Logger: Entering write_tag ae95ac8b-aae8-49c4-8005-57de93bceccc owner_tag
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 103854 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000614d - info - ~Logger: Exiting write_tag for ae95ac8b-aae8-49c4-8005-57de93bceccc owner_tag
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 103948 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000614e - info - ~Logger: Exiting write_tag for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 104135 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/SnapshotPersistor - 000000000000614f - info - newTLog: Starting new TLog tlog_e20a31ea-7489-443c-ae5d-110a30c85346
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107325 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006150 - info - MDSMetaDataStore: ae95ac8b-aae8-49c4-8005-57de93bceccc: home "/mnt/ssd2/vmstor_db_md_1/ae95ac8b-aae8-49c4-8005-57de93bceccc", cache capacity (pages) 8192, apply scrub results to slaves: ApplyRelocationsToSlaves::T, MDS timeout: 20 secs
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107375 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006151 - info - MDSMetaDataStore: MDS nodes:
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107402 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006152 - info - MDSMetaDataStore: mds://10.109.2.42:26300
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107419 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataStore - 0000000000006153 - info - connect_: ae95ac8b-aae8-49c4-8005-57de93bceccc: connecting to mds://10.109.2.42:26300
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107438 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackendHelpers - 0000000000006154 - info - make_db: mds://10.109.2.42:26300: running in-process, using fast path
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107579 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/RocksLogger - 0000000000006155 - info - /mnt/ssd2/vmstor_db_mds_1: (skipping printing options)
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 107980 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/RocksLogger - 0000000000006156 - info - /mnt/ssd2/vmstor_db_mds_1: Created column family [ae95ac8b-aae8-49c4-8005-57de93bceccc] (ID 1621)
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108029 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerRocksTable - 0000000000006157 - info - RocksTable: ae95ac8b-aae8-49c4-8005-57de93bceccc: creating table
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108049 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerTable - 0000000000006158 - info - Table: ae95ac8b-aae8-49c4-8005-57de93bceccc: new table, scratch dir "/mnt/ssd2/vmstor_db_mds_1/ae95ac8b-aae8-49c4-8005-57de93bceccc"
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108065 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerTable - 0000000000006159 - info - start_: ae95ac8b-aae8-49c4-8005-57de93bceccc: starting periodic background check
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108102 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 000000000000615a - info - MDSMetaDataBackend: ae95ac8b-aae8-49c4-8005-57de93bceccc: using mds://10.109.2.42:26300, owner tag 2691
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108162 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MetaDataServerTable - 000000000000615b - info - work_: ae95ac8b-aae8-49c4-8005-57de93bceccc: running periodic action
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108218 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 000000000000615c - info - MDSMetaDataBackend: ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108321 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 000000000000615d - info - lastCorkUUID: ae95ac8b-aae8-49c4-8005-57de93bceccc: --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108345 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 000000000000615e - info - scrub_id: ae95ac8b-aae8-49c4-8005-57de93bceccc: scrub ID --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108381 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/CachedMetaDataStore - 000000000000615f - info - init_pages_: ae95ac8b-aae8-49c4-8005-57de93bceccc: page capacity (entries): 64, max cached pages: 256
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108487 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MetaDataStoreBuilder - 0000000000006160 - info - update_metadata_store_: ae95ac8b-aae8-49c4-8005-57de93bceccc: bringing MetaDataStore in sync with backend, requested interval (--, --], check scrub ID: CheckScrubId::F, dry run:DryRun::F, full rebuild: false
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 108598 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006161 - info - Logger: Entering read ae95ac8b-aae8-49c4-8005-57de93bceccc snapshots.xml
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109348 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/AlbaConnection - 0000000000006162 - error - convert_exceptions_: read object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.ObjectDoesNotExist
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109514 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006163 - error - ~Logger: Exiting read for ae95ac8b-aae8-49c4-8005-57de93bceccc snapshots.xml with exception
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109544 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006164 - error - ~Logger: Exiting read for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109664 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/BackendInterface - 0000000000006165 - error - fillObject: Problem getting snapshots.xml from ae95ac8b-aae8-49c4-8005-57de93bceccc: BackendException: object does not exist
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109835 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/MDSMetaDataBackend - 0000000000006166 - info - ~MDSMetaDataBackend: ae95ac8b-aae8-49c4-8005-57de93bceccc: used clusters: 0
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 109873 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cc8fe9700 - volumedriverfs/PeriodicActionPoolTask - 0000000000006167 - error - operator(): mds-poll-namespace-ae95ac8b-aae8-49c4-8005-57de93bceccc: caught exception: BackendException: object does not exist - ignored
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112098 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 0000000000006168 - info - lastCorkUUID: ae95ac8b-aae8-49c4-8005-57de93bceccc: --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112139 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 0000000000006169 - info - scrub_id: ae95ac8b-aae8-49c4-8005-57de93bceccc: scrub ID --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112714 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/CachedMetaDataStore - 000000000000616a - info - init_pages_: ae95ac8b-aae8-49c4-8005-57de93bceccc: page capacity (entries): 64, max cached pages: 8192
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112733 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MetaDataServerTable - 000000000000616b - info - set_role: ae95ac8b-aae8-49c4-8005-57de93bceccc: transition from SLAVE to MASTER requested, owner tag 2691
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112807 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cbbfff700 - volumedriverfs/PeriodicActionPoolTask - 000000000000616c - info - operator(): mds-poll-namespace-ae95ac8b-aae8-49c4-8005-57de93bceccc: timer cancelled
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112853 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cbbfff700 - volumedriverfs/PeriodicActionPoolTask - 000000000000616d - info - operator(): mds-poll-namespace-ae95ac8b-aae8-49c4-8005-57de93bceccc: bailing out
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112889 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolumeFactory - 000000000000616e - warning - make_metadata_store: ae95ac8b-aae8-49c4-8005-57de93bceccc: no scrub ID present in MetaDataStore - did we crash before writing it out?
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 112917 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/MDSMetaDataBackend - 000000000000616f - info - set_scrub_id: ae95ac8b-aae8-49c4-8005-57de93bceccc: setting scrub ID 617daae4-d093-4d51-8a92-e5227c2f8203
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 119499 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006170 - info - Volume: ae95ac8b-aae8-49c4-8005-57de93bceccc: Constructor of volume, namespace ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 119556 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/CachedMetaDataStore - 0000000000006171 - info - cork: ae95ac8b-aae8-49c4-8005-57de93bceccc: corking e20a31ea-7489-443c-ae5d-110a30c85346
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 128473 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006172 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 202996 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006173 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203088 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006174 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203314 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/SCOCacheMountPoint - 0000000000006175 - info - addNamespace: "/mnt/ssd1/vmstor_write_sco_1": adding namespace ae95ac8b-aae8-49c4-8005-57de93bceccc to mountpoint
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203497 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/DataStoreNG - 0000000000006176 - info - updateCurrentSCO_: created new write SCO 00_00000001_00 for Volume ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 203880 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006177 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 326499 +0100 - stor-02.be-g8-4 - 10429/0x00007f6ccf7f6700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006179 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc snapshots.xml
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 326577 +0100 - stor-02.be-g8-4 - 10429/0x00007f6ccf7f6700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617a - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 331892 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617b - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc volume_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 331939 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617c - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 332319 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617d - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410020 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617e - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410085 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000617f - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410205 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006180 - info - setVolumeFailOverState: ae95ac8b-aae8-49c4-8005-57de93bceccc: transitioned from DEGRADED -> OK_STANDALONE
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410259 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006181 - info - register_with_cluster_cache_: ae95ac8b-aae8-49c4-8005-57de93bceccc: registered with cluster cache, owner tag 2691, cluster cache handle 0
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410274 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006182 - info - update_cluster_cache_limit_: ae95ac8b-aae8-49c4-8005-57de93bceccc: Not updating content based cluster cache limit to --
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410297 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VolManager - 0000000000006183 - notice - New Volume, VolumeId: ae95ac8b-aae8-49c4-8005-57de93bceccc, Namespace: ae95ac8b-aae8-49c4-8005-57de93bceccc, Size: 0, CreateNamespace: CreateNamespace::T, FINISHED
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410860 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VFSLocalNode - 0000000000006184 - info - adjust_failovercache_config_: ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410897 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/VFSLocalNode - 0000000000006185 - info - do_adjust_failovercache_config_: ae95ac8b-aae8-49c4-8005-57de93bceccc: setting FailOverCacheConfig from -- -> foc://10.109.3.43:26202,Asynchronous
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410917 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 0000000000006186 - info - setVolumeFailOverState: ae95ac8b-aae8-49c4-8005-57de93bceccc: transitioned from OK_STANDALONE -> DEGRADED
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 410940 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/FailOverCacheAsyncBridge - 0000000000006187 - info - newCache: newCache
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 411147 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006188 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 547794 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000006189 - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc failovercache_configuration
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 547865 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000000618a - info - ~Logger: Exiting write for ae95ac8b-aae8-49c4-8005-57de93bceccc
Dec 06 12:22:18 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:18 547985 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 000000000000618b - info - setFailOverCacheConfig_: ae95ac8b-aae8-49c4-8005-57de93bceccc: Setting the failover cache to foc://10.109.3.43:26202,Asynchronous
Dec 06 12:22:20 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:22:20 846839 +0100 - stor-02.be-g8-4 - 10429/0x00007f6bb0bf8700 - volumedriverfs/MTServer - 000000000000618c - info - work: You have 2 connections running
.
.
.
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 488921 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Socket - 00000000000062f0 - error - connect: error connecting to 10.109.3.43:26202 - connect timeout
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 492585 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/FailOverCacheAsyncBridge - 00000000000062f1 - info - newCache: newCache
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 494313 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 00000000000062f2 - info - setFailOverCacheConfig_: ae95ac8b-aae8-49c4-8005-57de93bceccc: finding out volume state
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 496361 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/Volume - 00000000000062f3 - info - setVolumeFailOverState: ae95ac8b-aae8-49c4-8005-57de93bceccc: transitioned from DEGRADED -> OK_SYNC
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 496462 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/SnapshotPersistor - 00000000000062f4 - info - newTLog: Starting new TLog tlog_56a541a4-cb9f-43bc-970c-5a5afb067daa
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 499232 +0100 - stor-02.be-g8-4 - 10429/0x00007f6b98ff8700 - volumedriverfs/CachedMetaDataStore - 00000000000062f5 - info - cork: ae95ac8b-aae8-49c4-8005-57de93bceccc: corking 56a541a4-cb9f-43bc-970c-5a5afb067daa
Dec 06 12:27:14 stor-02.be-g8-4 volumedriver_fs.sh[10429]: 2016-12-06 12:27:14 502509 +0100 - stor-02.be-g8-4 - 10429/0x00007f6cd17fa700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000000062f6 - info - Logger: Entering write ae95ac8b-aae8-49c4-8005-57de93bceccc tlog_e20a31ea-7489-443c-ae5d-110a30c85346
```
After changing the volumedriver config parameter `fs_dtl_config_mode` from `Automatic` to `Manual`, the issue was resolved.
Also noticeable: not all volumedrivers were affected by this issue, which could point to a problem connecting to the DTL.
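The `connect timeout` to 10.109.3.43:26202 in the log suggests the DTL endpoint was unreachable while the volumedriver was automatically wiring up the failover cache. As an illustrative sketch (not part of the volumedriver codebase; the function name and timeout are assumptions), a short TCP probe like this could be used to check whether a DTL endpoint is reachable before relying on automatic mode:

```python
import socket

def dtl_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the DTL endpoint succeeds within `timeout` seconds."""
    try:
        # create_connection resolves the address and applies the timeout to the connect call
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, unreachable hosts, and socket.timeout
        return False

# Example: probe the DTL address seen in the log before enabling automatic mode
# dtl_reachable("10.109.3.43", 26202)
```

A probe like this would have flagged the unreachable DTL up front instead of letting each volume creation block for minutes on the connect.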
|
process
|
| 1
|
3,784
| 6,761,480,002
|
IssuesEvent
|
2017-10-25 02:02:18
|
hyperrealm/libconfig
|
https://api.github.com/repos/hyperrealm/libconfig
|
closed
|
Run valgrind on latest code to check for allocation problems
|
process
|
Need to check latest code for leaks, etc.
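To make a leak check like this repeatable, the run could be wrapped in a small script that fails when memcheck reports leaks. The sketch below is a hypothetical helper (not from the libconfig repo); it relies on valgrind's standard `definitely lost: N bytes in M blocks` summary line and the real `--leak-check=full` flag:

```python
import re
import subprocess

# valgrind memcheck prints a summary line of the form:
#   ==1234==    definitely lost: 1,024 bytes in 2 blocks
LEAK_RE = re.compile(r"definitely lost:\s*([\d,]+) bytes in ([\d,]+) blocks")

def definitely_lost_bytes(valgrind_output: str) -> int:
    """Return the 'definitely lost' byte count from memcheck output, or 0 if none reported."""
    m = LEAK_RE.search(valgrind_output)
    return int(m.group(1).replace(",", "")) if m else 0

def run_memcheck(binary: str) -> int:
    """Run `binary` under valgrind memcheck and return the definitely-lost byte count."""
    proc = subprocess.run(
        ["valgrind", "--leak-check=full", binary],
        capture_output=True,
        text=True,
    )
    # valgrind writes its report to stderr by default
    return definitely_lost_bytes(proc.stderr)
```

Pointing `run_memcheck` at the library's test binaries and asserting the result is 0 would turn this one-off check into a regression guard.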
|
1.0
|
|
process
|
| 1
|
82,829
| 3,619,329,228
|
IssuesEvent
|
2016-02-08 15:39:19
|
DistrictDataLabs/logbook
|
https://api.github.com/repos/DistrictDataLabs/logbook
|
opened
|
Unicode Error in Enrollment
|
priority: high type: bug
|
Internal Server Error: /admin/catalog/enrollment/
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 164, in get_response
response = response.render()
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py", line 158, in render
self.content = self.rendered_content
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py", line 135, in rendered_content
content = template.render(context, self._request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/backends/django.py", line 74, in render
return self.template.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 209, in render
return self._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 135, in render
return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 135, in render
return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 65, in render
result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 65, in render
result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 1271, in render
_dict = func(*resolved_args, **resolved_kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 320, in result_list
'results': list(results(cl))}
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 296, in results
yield ResultList(None, items_for_result(cl, res, None))
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 287, in __init__
super(ResultList, self).__init__(*items)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 199, in items_for_result
f, attr, value = lookup_field(field_name, result, cl.model_admin)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/utils.py", line 282, in lookup_field
value = attr()
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/base.py", line 503, in __str__
return force_text(self).encode('utf-8')
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/encoding.py", line 92, in force_text
s = six.text_type(s)
File "/app/catalog/models.py", line 150, in __unicode__
return "{} enrolled in {}".format(self.user.profile.full_name, self.course)
UnicodeEncodeError: 'ascii' codec can't encode character u'\x92' in position 4: ordinal not in range(128)
|
1.0
|
Unicode Error in Enrollment - Internal Server Error: /admin/catalog/enrollment/
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 164, in get_response
response = response.render()
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py", line 158, in render
self.content = self.rendered_content
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py", line 135, in rendered_content
content = template.render(context, self._request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/backends/django.py", line 74, in render
return self.template.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 209, in render
return self._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 135, in render
return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 135, in render
return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 65, in render
result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 65, in render
result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 1271, in render
_dict = func(*resolved_args, **resolved_kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 320, in result_list
'results': list(results(cl))}
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 296, in results
yield ResultList(None, items_for_result(cl, res, None))
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 287, in __init__
super(ResultList, self).__init__(*items)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/templatetags/admin_list.py", line 199, in items_for_result
f, attr, value = lookup_field(field_name, result, cl.model_admin)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/utils.py", line 282, in lookup_field
value = attr()
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/base.py", line 503, in __str__
return force_text(self).encode('utf-8')
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/encoding.py", line 92, in force_text
s = six.text_type(s)
File "/app/catalog/models.py", line 150, in __unicode__
return "{} enrolled in {}".format(self.user.profile.full_name, self.course)
UnicodeEncodeError: 'ascii' codec can't encode character u'\x92' in position 4: ordinal not in range(128)
|
non_process
|
unicode error in enrollment internal server error admin catalog enrollment traceback most recent call last file app heroku python lib site packages django core handlers base py line in get response response response render file app heroku python lib site packages django template response py line in render self content self rendered content file app heroku python lib site packages django template response py line in rendered content content template render context self request file app heroku python lib site packages django template backends django py line in render return self template render context file app heroku python lib site packages django template base py line in render return self render context file app heroku python lib site packages django template base py line in render return self nodelist render context file app heroku python lib site packages django template base py line in render bit self render node node context file app heroku python lib site packages django template base py line in render node return node render context file app heroku python lib site packages django template loader tags py line in render return compiled parent render context file app heroku python lib site packages django template base py line in render return self nodelist render context file app heroku python lib site packages django template base py line in render bit self render node node context file app heroku python lib site packages django template base py line in render node return node render context file app heroku python lib site packages django template loader tags py line in render return compiled parent render context file app heroku python lib site packages django template base py line in render return self nodelist render context file app heroku python lib site packages django template base py line in render bit self render node node context file app heroku python lib site packages django template base py line in render node return node render context file app 
heroku python lib site packages django template loader tags py line in render result block nodelist render context file app heroku python lib site packages django template base py line in render bit self render node node context file app heroku python lib site packages django template base py line in render node return node render context file app heroku python lib site packages django template loader tags py line in render result block nodelist render context file app heroku python lib site packages django template base py line in render bit self render node node context file app heroku python lib site packages django template base py line in render node return node render context file app heroku python lib site packages django template base py line in render dict func resolved args resolved kwargs file app heroku python lib site packages django contrib admin templatetags admin list py line in result list results list results cl file app heroku python lib site packages django contrib admin templatetags admin list py line in results yield resultlist none items for result cl res none file app heroku python lib site packages django contrib admin templatetags admin list py line in init super resultlist self init items file app heroku python lib site packages django contrib admin templatetags admin list py line in items for result f attr value lookup field field name result cl model admin file app heroku python lib site packages django contrib admin utils py line in lookup field value attr file app heroku python lib site packages django db models base py line in str return force text self encode utf file app heroku python lib site packages django utils encoding py line in force text s six text type s file app catalog models py line in unicode return enrolled in format self user profile full name self course unicodeencodeerror ascii codec can t encode character u in position ordinal not in range
| 0
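The traceback in the record above ends in `catalog/models.py` line 150, where a `__unicode__` method's result is forced through an ASCII encode. A minimal sketch of the usual remedy is to keep the label as text end to end instead of letting an implicit byte-string coercion happen; the function and argument names here are illustrative, not the project's actual API:

```python
# -*- coding: utf-8 -*-
def enrollment_label(full_name, course):
    """Build the admin display label without forcing an ASCII encode.

    The failing name contained u'\u2019'-style punctuation; formatting
    it into a text (unicode) template avoids the ASCII codec entirely.
    """
    return u"{} enrolled in {}".format(full_name, course)

# A name with non-ASCII punctuation now formats cleanly.
label = enrollment_label(u"Bob\u2019s", u"Intro to Data")
```

On Python 2 the equivalent fix is making sure `__unicode__` returns a `unicode` object built from a `u"..."` template, so Django's `force_text` never has to guess an encoding.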
|
1,146
| 3,633,331,877
|
IssuesEvent
|
2016-02-11 14:12:34
|
matz-e/lobster
|
https://api.github.com/repos/matz-e/lobster
|
closed
|
Multi-core tasks put too many tasks in the queue
|
enhancement low-priority processing
|
With the current task generation, Lobster tries to fill up all uncommitted cores and have a buffer of tasks to fill up 10% of the total cores. With multi-core tasks, we overproduce tasks. Should be an easy fix, just some additional arithmetic.
|
1.0
|
Multi-core tasks put too many tasks in the queue - With the current task generation, Lobster tries to fill up all uncommitted cores and have a buffer of tasks to fill up 10% of the total cores. With multi-core tasks, we overproduce tasks. Should be an easy fix, just some additional arithmetic.
|
process
|
multi core tasks put too many tasks in the queue with the current task generation lobster tries to fill up all uncommitted cores and have a buffer of tasks to fill up of the total cores with multi core tasks we overproduce tasks should be an easy fix just some additional arithmetic
| 1
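The "additional arithmetic" the Lobster record above alludes to is counting cores rather than tasks when topping up the queue. This is a sketch under that reading; the function and parameter names are assumptions, not Lobster's real interface:

```python
def tasks_to_create(total_cores, committed_cores, cores_per_task, buffer_frac=0.10):
    """How many new tasks to generate so that idle cores plus a 10%
    buffer are covered, measured in cores so multi-core tasks are not
    overproduced."""
    target_cores = (total_cores - committed_cores) + int(total_cores * buffer_frac)
    # Ceiling division: never leave cores idle for lack of one task,
    # but floor at zero when everything is already committed.
    return max(0, -(-target_cores // cores_per_task))
```

With single-core tasks this reduces to the old behavior; with 4-core tasks it requests a quarter as many.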
|
5,658
| 8,528,675,878
|
IssuesEvent
|
2018-11-03 02:06:20
|
pelias/openaddresses
|
https://api.github.com/repos/pelias/openaddresses
|
closed
|
Allow street-less records in white-listed countries
|
processed
|
Some countries allow addresses without streets. So far, it just appears to be the Czech Republic. Here are two examples:
* č.p. 360, 79862 Rozstání
* č.ev. 9, 79857 Rakůvka
These can be parsed as:
```
{
housenumber: 'č.p. 360',
postcode: '79862',
city: 'Rozstání'
}
```
and:
```
{
housenumber: 'č.ev. 9',
postcode: '79857',
city: 'Rakůvka'
}
```
|
1.0
|
Allow street-less records in white-listed countries - Some countries allow addresses without streets. So far, it just appears to be the Czech Republic. Here are two examples:
* č.p. 360, 79862 Rozstání
* č.ev. 9, 79857 Rakůvka
These can be parsed as:
```
{
housenumber: 'č.p. 360',
postcode: '79862',
city: 'Rozstání'
}
```
and:
```
{
housenumber: 'č.ev. 9',
postcode: '79857',
city: 'Rakůvka'
}
```
|
process
|
allow street less records in white listed countries some countries allow addresses without streets so far it just appears to be the czech republic here are two examples č p rozstání č ev rakůvka these can be parsed as housenumber č p postcode city rozstání and housenumber č ev postcode city rakůvka
| 1
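The two Czech examples in the record above follow one shape: a `č.p.`/`č.ev.` house number, a comma, a five-digit postcode, then the city. A sketch of a whitelist-gated parser for that shape follows; the pattern is inferred from just those two samples and is not Pelias's actual implementation:

```python
import re

STREETLESS_COUNTRIES = {"cz"}  # countries allowed to omit the street

# "č.p. 360, 79862 Rozstání" / "č.ev. 9, 79857 Rakůvka"
CZ_PATTERN = re.compile(r"^(č\.(?:p|ev)\.\s*\d+),\s*(\d{5})\s+(.+)$")

def parse_streetless(line, country):
    """Return {housenumber, postcode, city} for whitelisted countries,
    or None when the country or format doesn't qualify."""
    if country not in STREETLESS_COUNTRIES:
        return None
    m = CZ_PATTERN.match(line)
    if not m:
        return None
    return {"housenumber": m.group(1), "postcode": m.group(2), "city": m.group(3)}
```

Gating on the country first keeps the relaxed rule from admitting street-less records everywhere else.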
|
19,427
| 25,588,290,296
|
IssuesEvent
|
2022-12-01 11:00:43
|
kdgregory/log4j-aws-appenders
|
https://api.github.com/repos/kdgregory/log4j-aws-appenders
|
closed
|
Synchronous mode: queue messages if writer not initialized
|
bug in-process
|
When running in synchronous mode, the `LogWriter.addMessage()` will attempt to send the batch on the invoking thread. However, this causes a race condition in the case of long-running initialization (made worse because we won't delay appender startup in 3.1.0).
Add a conditional test on `isRunning` at `AbstractLogWriter` line 233.
|
1.0
|
Synchronous mode: queue messages if writer not initialized - When running in synchronous mode, the `LogWriter.addMessage()` will attempt to send the batch on the invoking thread. However, this causes a race condition in the case of long-running initialization (made worse because we won't delay appender startup in 3.1.0).
Add a conditional test on `isRunning` at `AbstractLogWriter` line 233.
|
process
|
synchronous mode queue messages if writer not initialized when running in synchronous mode the logwriter addmessage will attempt to send the batch on the invoking thread however this causes a race condition in the case of long running initialization made worse because we won t delay appender startup in add a conditional test on isrunning at abstractlogwriter line
| 1
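The log4j-aws-appenders record above describes guarding synchronous sends with an `isRunning` check so messages arriving during long initialization are queued instead of raced. The idea can be sketched in Python rather than the appender's Java (class and method names here are illustrative only):

```python
import queue

class SyncWriter:
    """Messages added before start() are held in a backlog; once the
    writer is running, adds flush synchronously on the calling thread."""
    def __init__(self):
        self.running = False
        self.backlog = queue.Queue()
        self.sent = []

    def add_message(self, msg):
        self.backlog.put(msg)
        if self.running:      # the conditional test on isRunning
            self._flush()

    def start(self):
        self.running = True
        self._flush()         # drain anything queued during init

    def _flush(self):
        while not self.backlog.empty():
            self.sent.append(self.backlog.get())
```

The key property is that `add_message` before `start` never attempts a send, which is exactly the race the issue reports.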
|
21,974
| 3,928,569,000
|
IssuesEvent
|
2016-04-24 10:38:11
|
ntop/ntopng
|
https://api.github.com/repos/ntop/ntopng
|
closed
|
ntopng doesnt auto start properly on boot on rasberry pi
|
testing needed
|
I found ntopng does not stay started after reboot on Rasberry Pi. I think because no hardware clock.
```
Apr 24 02:18:56 pi systemd[1]: Started LSB: Start/stop ntopng web.
Apr 24 02:18:56 pi ntopng: [Lua.cpp:4484] WARNING: Script failure [/usr/share/ntopng/scripts/callbacks/second.lua][/usr/share/ntopng/scripts/lua/modules/graph_utils.lua:854: /var/tmp/ntopng/3/rrd/bytes.rrd: illegal attempt to update using time 1461464336 when last update time is 1461464644 (minimum one second step)]
Apr 24 02:18:57 pi ntopng: [Lua.cpp:4484] WARNING: Script failure [/usr/share/ntopng/scripts/callbacks/second.lua][/usr/share/ntopng/scripts/lua/modules/graph_utils.lua:854: /var/tmp/ntopng/3/rrd/bytes.rrd: illegal attempt to update using time 1461464337 when last update time is 1461464644 (minimum one second step)]
Apr 24 02:18:58 pi ntopng: [Lua.cpp:4484] WARNING: Script failure [/usr/share/ntopng/scripts/callbacks/second.lua][/usr/share/ntopng/scripts/lua/modules/graph_utils.lua:854: /var/tmp/ntopng/3/rrd/bytes.rrd: illegal attempt to update using time 1461464338 when last update time is 1461464644 (minimum one second step)]
..
Apr 24 02:31:07 cognitive systemd[1]: Time has been changed
```
Because Rasberry doesn't come with hardward clock by default, it takes a while for NTP to come into sync from factory default old date. Firstly the RRD inserts fails being earlier than last update (before boot) as seen above.
But it seams ntopg process also stops also when the time gets updated on the fly, but I cant confirm different logs.
I think I solved my issue adding ntp-wait to /etc/init.d/ntopng
```
case "$1" in
start)
ntp-wait; <--------
start_ntopng 0;
;;
```
Just posting this for comment if this is the right approach.
Need to check too if init.d parallel processes on Rasberry pi, else the above could be holding up other processes starting??
|
1.0
|
ntopng doesnt auto start properly on boot on rasberry pi - I found ntopng does not stay started after reboot on Rasberry Pi. I think because no hardware clock.
```
Apr 24 02:18:56 pi systemd[1]: Started LSB: Start/stop ntopng web.
Apr 24 02:18:56 pi ntopng: [Lua.cpp:4484] WARNING: Script failure [/usr/share/ntopng/scripts/callbacks/second.lua][/usr/share/ntopng/scripts/lua/modules/graph_utils.lua:854: /var/tmp/ntopng/3/rrd/bytes.rrd: illegal attempt to update using time 1461464336 when last update time is 1461464644 (minimum one second step)]
Apr 24 02:18:57 pi ntopng: [Lua.cpp:4484] WARNING: Script failure [/usr/share/ntopng/scripts/callbacks/second.lua][/usr/share/ntopng/scripts/lua/modules/graph_utils.lua:854: /var/tmp/ntopng/3/rrd/bytes.rrd: illegal attempt to update using time 1461464337 when last update time is 1461464644 (minimum one second step)]
Apr 24 02:18:58 pi ntopng: [Lua.cpp:4484] WARNING: Script failure [/usr/share/ntopng/scripts/callbacks/second.lua][/usr/share/ntopng/scripts/lua/modules/graph_utils.lua:854: /var/tmp/ntopng/3/rrd/bytes.rrd: illegal attempt to update using time 1461464338 when last update time is 1461464644 (minimum one second step)]
..
Apr 24 02:31:07 cognitive systemd[1]: Time has been changed
```
Because Rasberry doesn't come with hardward clock by default, it takes a while for NTP to come into sync from factory default old date. Firstly the RRD inserts fails being earlier than last update (before boot) as seen above.
But it seams ntopg process also stops also when the time gets updated on the fly, but I cant confirm different logs.
I think I solved my issue adding ntp-wait to /etc/init.d/ntopng
```
case "$1" in
start)
ntp-wait; <--------
start_ntopng 0;
;;
```
Just posting this for comment if this is the right approach.
Need to check too if init.d parallel processes on Rasberry pi, else the above could be holding up other processes starting??
|
non_process
|
ntopng doesnt auto start properly on boot on rasberry pi i found ntopng does not stay started after reboot on rasberry pi i think because no hardware clock apr pi systemd started lsb start stop ntopng web apr pi ntopng warning script failure apr pi ntopng warning script failure apr pi ntopng warning script failure apr cognitive systemd time has been changed because rasberry doesn t come with hardward clock by default it takes a while for ntp to come into sync from factory default old date firstly the rrd inserts fails being earlier than last update before boot as seen above but it seams ntopg process also stops also when the time gets updated on the fly but i cant confirm different logs i think i solved my issue adding ntp wait to etc init d ntopng case in start ntp wait start ntopng just posting this for comment if this is the right approach need to check too if init d parallel processes on rasberry pi else the above could be holding up other processes starting
| 0
|
9,662
| 2,615,164,845
|
IssuesEvent
|
2015-03-01 06:44:55
|
chrsmith/reaver-wps
|
https://api.github.com/repos/chrsmith/reaver-wps
|
opened
|
Solving the 99.99% Reaver 1.4 issue
|
auto-migrated Priority-Triage Type-Defect
|
```
There are cases where Reaver 1.4 runs up to 99.99% and hangs. We have found that reverting to Reaver 1.3 in such cases always cracks the code. This has been noted elsewhere in the WPS Reaveer Issue Files
As the use of 1.3 is rare, uninstalling 1.4 and reinstalling 1.3 and vice-versa as the situation warrants seemed impractical.Therefore we handled the problem by making a persistant usb flash drive running BTR1 and then installing Reaver 1.3 to the operating system.
We suggest you use a 16gig flash although an 8 gig flash will work.
1. Partition the flashdrive into a 5 gig Fat32 partition and an pprox 12 gig
Ext3 partition. The 12 gig Ext 3 partition MUST be named = casper-rw to allow
the flash drive to be persistant. Perisitance here means that any changes you
make to the flashdrive will remain when you shut down the operating system on
the flash drive. We used XP, and Acronis Disk Director Suite to complete these
operations. Both are available via torrents if you look, try isohunt.
2. Since Bt5R1 does not include Reaver. We loaded it onto the flash drive using
unetbootin-windows. You can download the latest version of unetbootin easily.
You could use BT5R2 or R3 we simply did not do this and didnot explore the
steps required to install therefore no guidance is given.
3. Now to make the flashdrive persistant
After you have installed BT5R1,R2 or R3 to the flashdrive change the
syslinux.cfg in the Fat32 partition to read as below. Notice all we did was add
lines 5 thru 9 to the config file. You can do this with kate or even in windows
with notepad just do not save the file as text when using notepad.
A complete copy of the config file required for persistance is seen below.
==============
default menu.c32
prompt 0
menu title UNetbootin
timeout 100
label DEFAULT
menu label BackTrack Persistent Text - Persistent Text Mode Boot
kernel /casper/vmlinuz
append file=/cdrom/preseed/custom.seed boot=casper persistent
initrd=/casper/initrd.gz text splash vga=791--
label ubnentry1
menu label BackTrack Stealth - No Networking enabled
kernel /casper/vmlinuz
append initrd=/casper/initrds.gz file=/cdrom/preseed/custom.seed boot=casper
text splash staticip vga=791--
label ubnentry2
menu label BackTrack Forensics - No Drive or Swap Mount
kernel /casper/vmlinuz
append initrd=/casper/initrdf.gz file=/cdrom/preseed/custom.seed boot=casper
text splash vga=791--
label ubnentry3
menu label BackTrack noDRM - No DRM Drivers
kernel /casper/vmlinuz
append initrd=/casper/initrd.gz file=/cdrom/preseed/custom.seed boot=casper
text splash nomodeset vga=791--
label ubnentry4
menu label BackTrack Debug - Safe Mode
kernel /casper/vmlinuz
append initrd=/casper/initrd.gz file=/cdrom/preseed/custom.seed boot=casper
text--
label ubnentry5
menu label BackTrack Memtest - Run memtest
kernel /isolinux/memtest
append initrd=/ubninit -
label ubnentry6
menu label Hard Drive Boot - boot the first hard disk
kernel /ubnkern
append initrd=/ubninit -
===================
To run the flashdrive as an operating system remember to set your computer BIOS
to boot from the USB first before the hard drive and you will have a
functioning persistant BT5R1 operating system on a usb stick.
You should not try and upgrade the operating system as you will only need this
tool when Reaver 1.4 fails. Furthermore there is a chance you will install
reaver to 1.4 during the upgrade if you load R1.
Installing Reaver 1.3
Step 1 :- Download Reaver 1.3
# wget http://reaver-wps.googlecode.com/files/reaver-1.3.tar.gz
Step 2 :- Extract Reaver 1.3 Type and enter
# tar zxvf reaver-1.3.tar.gz
Step 3: Place the reaver-1.3 in the home directory Type and enter
# cd reaver-1.3/src
Step 4: Type and enter
# ./configure
Step 5: Type and enter
# make
Type and enter
# make install
In closing note that "wash" in Reaver 1.4 is called "walsh" in Reaver 1.3. When
runnng walsh you will not get any channel number so you may wish to use
airodump-ng to obtain info not provided by walsh. Most of this info provided is
NOT original and has been extracted for use here.
Musket Team Alpha
```
Original issue reported on code.google.com by `muske...@yahoo.com` on 21 Nov 2012 at 9:11
|
1.0
|
Solving the 99.99% Reaver 1.4 issue - ```
There are cases where Reaver 1.4 runs up to 99.99% and hangs. We have found that reverting to Reaver 1.3 in such cases always cracks the code. This has been noted elsewhere in the WPS Reaveer Issue Files
As the use of 1.3 is rare, uninstalling 1.4 and reinstalling 1.3 and vice-versa as the situation warrants seemed impractical.Therefore we handled the problem by making a persistant usb flash drive running BTR1 and then installing Reaver 1.3 to the operating system.
We suggest you use a 16gig flash although an 8 gig flash will work.
1. Partition the flashdrive into a 5 gig Fat32 partition and an pprox 12 gig
Ext3 partition. The 12 gig Ext 3 partition MUST be named = casper-rw to allow
the flash drive to be persistant. Perisitance here means that any changes you
make to the flashdrive will remain when you shut down the operating system on
the flash drive. We used XP, and Acronis Disk Director Suite to complete these
operations. Both are available via torrents if you look, try isohunt.
2. Since Bt5R1 does not include Reaver. We loaded it onto the flash drive using
unetbootin-windows. You can download the latest version of unetbootin easily.
You could use BT5R2 or R3 we simply did not do this and didnot explore the
steps required to install therefore no guidance is given.
3. Now to make the flashdrive persistant
After you have installed BT5R1,R2 or R3 to the flashdrive change the
syslinux.cfg in the Fat32 partition to read as below. Notice all we did was add
lines 5 thru 9 to the config file. You can do this with kate or even in windows
with notepad just do not save the file as text when using notepad.
A complete copy of the config file required for persistance is seen below.
==============
default menu.c32
prompt 0
menu title UNetbootin
timeout 100
label DEFAULT
menu label BackTrack Persistent Text - Persistent Text Mode Boot
kernel /casper/vmlinuz
append file=/cdrom/preseed/custom.seed boot=casper persistent
initrd=/casper/initrd.gz text splash vga=791--
label ubnentry1
menu label BackTrack Stealth - No Networking enabled
kernel /casper/vmlinuz
append initrd=/casper/initrds.gz file=/cdrom/preseed/custom.seed boot=casper
text splash staticip vga=791--
label ubnentry2
menu label BackTrack Forensics - No Drive or Swap Mount
kernel /casper/vmlinuz
append initrd=/casper/initrdf.gz file=/cdrom/preseed/custom.seed boot=casper
text splash vga=791--
label ubnentry3
menu label BackTrack noDRM - No DRM Drivers
kernel /casper/vmlinuz
append initrd=/casper/initrd.gz file=/cdrom/preseed/custom.seed boot=casper
text splash nomodeset vga=791--
label ubnentry4
menu label BackTrack Debug - Safe Mode
kernel /casper/vmlinuz
append initrd=/casper/initrd.gz file=/cdrom/preseed/custom.seed boot=casper
text--
label ubnentry5
menu label BackTrack Memtest - Run memtest
kernel /isolinux/memtest
append initrd=/ubninit -
label ubnentry6
menu label Hard Drive Boot - boot the first hard disk
kernel /ubnkern
append initrd=/ubninit -
===================
To run the flashdrive as an operating system remember to set your computer BIOS
to boot from the USB first before the hard drive and you will have a
functioning persistant BT5R1 operating system on a usb stick.
You should not try and upgrade the operating system as you will only need this
tool when Reaver 1.4 fails. Furthermore there is a chance you will install
reaver to 1.4 during the upgrade if you load R1.
Installing Reaver 1.3
Step 1 :- Download Reaver 1.3
# wget http://reaver-wps.googlecode.com/files/reaver-1.3.tar.gz
Step 2 :- Extract Reaver 1.3 Type and enter
# tar zxvf reaver-1.3.tar.gz
Step 3: Place the reaver-1.3 in the home directory Type and enter
# cd reaver-1.3/src
Step 4: Type and enter
# ./configure
Step 5: Type and enter
# make
Type and enter
# make install
In closing note that "wash" in Reaver 1.4 is called "walsh" in Reaver 1.3. When
runnng walsh you will not get any channel number so you may wish to use
airodump-ng to obtain info not provided by walsh. Most of this info provided is
NOT original and has been extracted for use here.
Musket Team Alpha
```
Original issue reported on code.google.com by `muske...@yahoo.com` on 21 Nov 2012 at 9:11
|
non_process
|
solving the reaver issue there are cases where reaver runs up to and hangs we have found that reverting to reaver in such cases always cracks the code this has been noted elsewhere in the wps reaveer issue files as the use of is rare uninstalling and reinstalling and vice versa as the situation warrants seemed impractical therefore we handled the problem by making a persistant usb flash drive running and then installing reaver to the operating system we suggest you use a flash although an gig flash will work partition the flashdrive into a gig partition and an pprox gig partition the gig ext partition must be named casper rw to allow the flash drive to be persistant perisitance here means that any changes you make to the flashdrive will remain when you shut down the operating system on the flash drive we used xp and acronis disk director suite to complete these operations both are available via torrents if you look try isohunt since does not include reaver we loaded it onto the flash drive using unetbootin windows you can download the latest version of unetbootin easily you could use or we simply did not do this and didnot explore the steps required to install therefore no guidance is given now to make the flashdrive persistant after you have installed or to the flashdrive change the syslinux cfg in the partition to read as below notice all we did was add lines thru to the config file you can do this with kate or even in windows with notepad just do not save the file as text when using notepad a complete copy of the config file required for persistance is seen below default menu prompt menu title unetbootin timeout label default menu label backtrack persistent text persistent text mode boot kernel casper vmlinuz append file cdrom preseed custom seed boot casper persistent initrd casper initrd gz text splash vga label menu label backtrack stealth no networking enabled kernel casper vmlinuz append initrd casper initrds gz file cdrom preseed custom seed boot casper 
text splash staticip vga label menu label backtrack forensics no drive or swap mount kernel casper vmlinuz append initrd casper initrdf gz file cdrom preseed custom seed boot casper text splash vga label menu label backtrack nodrm no drm drivers kernel casper vmlinuz append initrd casper initrd gz file cdrom preseed custom seed boot casper text splash nomodeset vga label menu label backtrack debug safe mode kernel casper vmlinuz append initrd casper initrd gz file cdrom preseed custom seed boot casper text label menu label backtrack memtest run memtest kernel isolinux memtest append initrd ubninit label menu label hard drive boot boot the first hard disk kernel ubnkern append initrd ubninit to run the flashdrive as an operating system remember to set your computer bios to boot from the usb first before the hard drive and you will have a functioning persistant operating system on a usb stick you should not try and upgrade the operating system as you will only need this tool when reaver fails furthermore there is a chance you will install reaver to during the upgrade if you load installing reaver step download reaver wget step extract reaver type and enter tar zxvf reaver tar gz step place the reaver in the home directory type and enter cd reaver src step type and enter configure step type and enter make type and enter make install in closing note that wash in reaver is called walsh in reaver when runnng walsh you will not get any channel number so you may wish to use airodump ng to obtain info not provided by walsh most of this info provided is not original and has been extracted for use here musket team alpha original issue reported on code google com by muske yahoo com on nov at
| 0
|
610,830
| 18,925,767,074
|
IssuesEvent
|
2021-11-17 09:22:52
|
opensrp/opensrp-server-web
|
https://api.github.com/repos/opensrp/opensrp-server-web
|
opened
|
Reinstatement of Check for Duplicate Plans on OpenSRP Case Triggered Plans
|
Priority: High
|
Recently we removed the checks for duplicates on OpenSRP. This was after the validation we had added to prevent re-use of deleted index case IDs. There is a likelihood that duplicates would occur if OpenSRP timeouts on NiFi take longer to recover and case detail events are resubmitted. To curb this, CHAI has requested we update the logic to check for duplicates based on a query they use on their end, which considers plans generated after the insertion of the index case as opposed to using the caseId number, which is what we previously did. Below is the query CHAI has used, which we should consider in reinstating this functionality.
> select *
from (
select identifier, title, date, dateCreated, status, p.fiReason, caseNum, last_data_date, row_number() over(partition by title order by last_data_date desc) rn
from holding.[plan] p
left join plans_with_data_2_v pwd on pwd.planId = identifier
left join focus_masterlist fm on fm.opensrp_id = jurisdiction
where p.fiReason = 'Case Triggered' and province_externalid <> '12'
) f
where rn > 1
order by date desc
|
1.0
|
Reinstatement of Check for Duplicate Plans on OpenSRP Case Triggered Plans - Recently we removed the checks for duplicates on OpenSRP. This was after the validation we had added to prevent re-use of deleted index case IDs. There is a likelihood that duplicates would occur if OpenSRP timeouts on NiFi take longer to recover and case detail events are resubmitted. To curb this, CHAI has requested we update the logic to check for duplicates based on a query they use on their end, which considers plans generated after the insertion of the index case as opposed to using the caseId number, which is what we previously did. Below is the query CHAI has used, which we should consider in reinstating this functionality.
> select *
from (
select identifier, title, date, dateCreated, status, p.fiReason, caseNum, last_data_date, row_number() over(partition by title order by last_data_date desc) rn
from holding.[plan] p
left join plans_with_data_2_v pwd on pwd.planId = identifier
left join focus_masterlist fm on fm.opensrp_id = jurisdiction
where p.fiReason = 'Case Triggered' and province_externalid <> '12'
) f
where rn > 1
order by date desc
|
non_process
|
reinstatement of check for duplicate plans on opensrp case triggered plans recently we removed the checks for duplicates on opensrp this was after the validation we had made prevent re use of deleted index case ids there is a likely hood that duplicates would occur in the event opensrp timeouts on nifi take longer to recover and case detail events are resubmitted to curb this chai has requested we update the logic to check for duplicates based on a query they use on their end that considers plans generated after the insertion of the index case as opposed to using the caseid no which is what we previously did below is the query chai has used which we should consider in reinstating this functionality select from select identifier title date datecreated status p fireason casenum last data date row number over partition by title order by last data date desc rn from holding p left join plans with data v pwd on pwd planid identifier left join focus masterlist fm on fm opensrp id jurisdiction where p fireason case triggered and province externalid f where rn order by date desc
| 0
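The record above reinstates duplicate detection with a `ROW_NUMBER() OVER (PARTITION BY ...)` query. A minimal, self-contained sketch of that dedup pattern using Python's `sqlite3` — the table and columns here are illustrative stand-ins for the `plan` table, not OpenSRP's actual schema, and window functions require SQLite 3.25+:

```python
import sqlite3

# In-memory table shaped loosely like the "plan" table in the query above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE plan (identifier TEXT, title TEXT, last_data_date TEXT)"
)
conn.executemany(
    "INSERT INTO plan VALUES (?, ?, ?)",
    [
        ("p1", "Case A", "2021-10-01"),
        ("p2", "Case A", "2021-11-01"),  # same title: a duplicate plan
        ("p3", "Case B", "2021-10-15"),
    ],
)

# Same pattern as the CHAI query: rank plans per title by freshest data,
# then treat every row with rn > 1 as a duplicate.
dupes = conn.execute(
    """
    SELECT identifier FROM (
        SELECT identifier,
               ROW_NUMBER() OVER (
                   PARTITION BY title
                   ORDER BY last_data_date DESC
               ) AS rn
        FROM plan
    )
    WHERE rn > 1
    """
).fetchall()
print(dupes)  # -> [('p1',)]
```

The `PARTITION BY title` clause mirrors CHAI's choice of keying duplicates on the plan title rather than the caseId number.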
|
19,673
| 26,031,333,866
|
IssuesEvent
|
2022-12-21 21:38:31
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Set an output variable in deployment job in one stage (StageA) and use it in another deployment job in another stage (StageB)
|
doc-enhancement devops/prod Pri2 devops-cicd-process/tech
|
This case is not documented and normal syntax is not working.
```
trigger:
paths:
include:
- build/stageVariablesDeployment/*
stages:
- stage: StageA
jobs:
- deployment: A1
environment: env1
strategy:
runOnce:
deploy:
steps:
- bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
name: setvarStep
- bash: echo $(System.JobName)
- stage: StageB
dependsOn: StageA
variables:
# myVarFromDeploymentJob: $[ stageDependencies.StageA.A1.outputs['setvarStep.myOutputVar'] ] # Normal syntax is not working
myVarFromDeploymentJob: $[ stageDependencies.StageA.A1.outputs['A1.setvarStep.myOutputVar'] ]
jobs:
- deployment: B1
environment: env1
strategy:
runOnce:
deploy:
steps:
- script: "echo $(myVarFromDeploymentJob)"
name: echovar
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#support-for-output-variables)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Set an output variable in deployment job in one stage (StageA) and use it in another deployment job in another stage (StageB) - This case is not documented and normal syntax is not working.
```
trigger:
paths:
include:
- build/stageVariablesDeployment/*
stages:
- stage: StageA
jobs:
- deployment: A1
environment: env1
strategy:
runOnce:
deploy:
steps:
- bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
name: setvarStep
- bash: echo $(System.JobName)
- stage: StageB
dependsOn: StageA
variables:
# myVarFromDeploymentJob: $[ stageDependencies.StageA.A1.outputs['setvarStep.myOutputVar'] ] # Normal syntax is not working
myVarFromDeploymentJob: $[ stageDependencies.StageA.A1.outputs['A1.setvarStep.myOutputVar'] ]
jobs:
- deployment: B1
environment: env1
strategy:
runOnce:
deploy:
steps:
- script: "echo $(myVarFromDeploymentJob)"
name: echovar
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#support-for-output-variables)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
set an output variable in deployment job in one stage stagea and use it in another deployment job in another stage stageb this case is not documented and normal syntax is not working trigger paths include build stagevariablesdeployment stages stage stagea jobs deployment environment strategy runonce deploy steps bash echo vso this is the deployment variable value name setvarstep bash echo system jobname stage stageb dependson stagea variables myvarfromdeploymentjob normal syntax is not working myvarfromdeploymentjob jobs deployment environment strategy runonce deploy steps script echo myvarfromdeploymentjob name echovar document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
120,382
| 17,644,186,147
|
IssuesEvent
|
2021-08-20 01:54:19
|
gdcorp-action-public-forks/toolchain
|
https://api.github.com/repos/gdcorp-action-public-forks/toolchain
|
opened
|
CVE-2021-23337 (High) detected in lodash-4.17.20.tgz
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>
Dependency Hierarchy:
- eslint-7.13.0.tgz (Root Library)
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.20","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"eslint:7.13.0;lodash:4.17.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>
Dependency Hierarchy:
- eslint-7.13.0.tgz (Root Library)
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.20","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"eslint:7.13.0;lodash:4.17.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href dependency hierarchy eslint tgz root library x lodash tgz vulnerable library found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree eslint lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash versions prior to are vulnerable to command injection via the template function vulnerabilityurl
| 0
|
18,459
| 24,549,296,899
|
IssuesEvent
|
2022-10-12 11:17:58
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Add App API endpoint for accessing the process description
|
area/process kind/user-story
|
## Description
There is currently no endpoint that provides access to the original BPMN file used by an app. The file is loaded and used by the app logic, but is never exposed as metadata. While no external system would normally need the file, it might be useful for a developer to have access to the process.
## Consideration
We need to land a URL structure for App metadata documents.
## Acceptance criteria
It is possible to download the BPMN as a file
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
> Add tasks here
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this userstory are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
1.0
|
Add App API endpoint for accessing the process description - ## Description
There is currently no endpoint that provides access to the original BPMN file used by an app. The file is loaded and used by the app logic, but is never exposed as metadata. While no external system would normally need the file, it might be useful for a developer to have access to the process.
## Consideration
We need to land a URL structure for App metadata documents.
## Acceptance criteria
It is possible to download the BPMN as a file
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
> Add tasks here
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this userstory are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
process
|
add app api endpoint for accessing the process description description there are currently no endpoint that provides access to the original bpmn file used by an app the file is loaded and used by the app logic but is never exposed as metadata while no external system would normally need to use the file it might be useful for a developer to have access to the process consideration we need to land a url structure for app metadata documents acceptance criteria it is possible to download the bpmn as a file specification tasks development tasks are defined test design decide test need development tasks add tasks here definition of done verify that this issue meets only for project members before closing documentation is updated if relevant technical documentation docs altinn studio user documentation altinn github io docs qa manual test is complete if relevant automated test is implemented if relevant all tasks in this userstory are closed i e remaining tasks are moved to other user stories or marked obsolete
| 1
|
69,382
| 22,333,633,356
|
IssuesEvent
|
2022-06-14 16:27:43
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
New search input field has rounded borders
|
T-Defect
|
### Steps to reproduce
As discussed with @nique


### Outcome
#### What did you expect?
straight line
#### What happened instead?
it seems inconsistent due to
- corner radius within the modal
- only bottom border
- border lines in the app are straight, except the composer redesign
- different color and radius compared to composer
### Operating system
arch
### Application version
Element Nightly version: 2022061001 Olm version: 3.2.8
### How did you install the app?
aur
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
New search input field has rounded borders - ### Steps to reproduce
As discussed with @nique


### Outcome
#### What did you expect?
straight line
#### What happened instead?
it seems inconsistent due to
- corner radius within the modal
- only bottom border
- border lines in the app are straight, except the composer redesign
- different color and radius compared to composer
### Operating system
arch
### Application version
Element Nightly version: 2022061001 Olm version: 3.2.8
### How did you install the app?
aur
### Homeserver
_No response_
### Will you send logs?
No
|
non_process
|
new search input field has rounded borders steps to reproduce as discussed with nique outcome what did you expect straight line what happened instead it seems inconsistent by corner radius within the modal only bottom border border lines in the app are straight except the composer redesign different color and radius compared to composer operating system arch application version element nightly version olm version how did you install the app aur homeserver no response will you send logs no
| 0
|
210,975
| 16,136,896,947
|
IssuesEvent
|
2021-04-29 12:59:59
|
Tencent/bk-ci
|
https://api.github.com/repos/Tencent/bk-ci
|
closed
|
Introducing a new plugin in a pipeline template prevents the pipeline from running
|
area/ci/backend kind/bug priority/important-longterm stage/test test/passed
|
**Describe the bug**
When updating a pipeline in constrained mode, if the new version of the pipeline template has introduced a new plugin, the pipeline fails to run.
Could this be checked automatically, and the plugin installed automatically?
**To Reproduce**
Steps to reproduce the behavior:
1. Project A publishes a pipeline template to the R&D store
2. Project B uses the template and creates a pipeline in constrained mode
3. Project A modifies the template, introducing a new plugin
4. Project B updates the pipeline from the template
5. Because Project B has not installed the corresponding plugin, the pipeline cannot run!
**Expected behavior**
Project B's pipeline should run.
Consider automatically checking for and installing newly introduced plugins when updating a pipeline.
|
2.0
|
Introducing a new plugin in a pipeline template prevents the pipeline from running - **Describe the bug**
When updating a pipeline in constrained mode, if the new version of the pipeline template has introduced a new plugin, the pipeline fails to run.
Could this be checked automatically, and the plugin installed automatically?
**To Reproduce**
Steps to reproduce the behavior:
1. Project A publishes a pipeline template to the R&D store
2. Project B uses the template and creates a pipeline in constrained mode
3. Project A modifies the template, introducing a new plugin
4. Project B updates the pipeline from the template
5. Because Project B has not installed the corresponding plugin, the pipeline cannot run!
**Expected behavior**
Project B's pipeline should run.
Consider automatically checking for and installing newly introduced plugins when updating a pipeline.
|
non_process
|
introducing a new plugin in a pipeline template prevents the pipeline from running describe the bug when updating a pipeline in constrained mode if the new version of the pipeline template has introduced a new plugin the pipeline fails to run could this be checked automatically and the plugin installed automatically to reproduce steps to reproduce the behavior project a publishes a pipeline template to the r d store project b uses the template and creates a pipeline in constrained mode project a modifies the template introducing a new plugin project b updates the pipeline from the template because project b has not installed the corresponding plugin the pipeline cannot run expected behavior project b s pipeline should run consider automatically checking for and installing newly introduced plugins when updating a pipeline
| 0
|
44,890
| 5,659,208,436
|
IssuesEvent
|
2017-04-10 12:22:43
|
appium/appium
|
https://api.github.com/repos/appium/appium
|
closed
|
Question about clear Safari Cookies for mobile web
|
Question XCUITest
|
## The problem
I cannot use deleteAllCookies or deleteCookie (Ruby lib + Appium 1.6.4) to delete the cookies after I open the web page in Safari on iOS 10.3 (iPhone 6s simulator), through appium-xcuitest-driver.
The reason is I don't want to restart the iOS simulator after each scenario, so I set noReset to true. However, if I run a test about 'user login', the user's cookies will always be there for the following scenarios, so I'd like to clear cookies at the very top of each scenario, but it doesn't work.
Should I post this question to Selenium or here?
Or do we have an option to clear the Safari sessions without resetting the simulator? (I already searched a lot on the internet, but unfortunately found no suitable answer.)
## Environment
* Appium version (or git revision) that exhibits the issue: 1.6.4 (installed by npm)
* Last Appium version that did not exhibit the issue (if applicable):
* Desktop OS/version used to run Appium: macOS Sierra 10.12
* Node.js version (unless using Appium.app|exe): v7.6.0
* Mobile platform/version under test: iOS simulator 10.3 (IPHONE 6S)
* Real device or emulator/simulator: simulator
* Appium CLI or Appium.app|exe: appium clii
Thanks a lot for your help
|
1.0
|
Question about clear Safari Cookies for mobile web - ## The problem
I cannot use deleteAllCookies or deleteCookie (Ruby lib + Appium 1.6.4) to delete the cookies after I open the web page in Safari on iOS 10.3 (iPhone 6s simulator), through appium-xcuitest-driver.
The reason is I don't want to restart the iOS simulator after each scenario, so I set noReset to true. However, if I run a test about 'user login', the user's cookies will always be there for the following scenarios, so I'd like to clear cookies at the very top of each scenario, but it doesn't work.
Should I post this question to Selenium or here?
Or do we have an option to clear the Safari sessions without resetting the simulator? (I already searched a lot on the internet, but unfortunately found no suitable answer.)
## Environment
* Appium version (or git revision) that exhibits the issue: 1.6.4 (installed by npm)
* Last Appium version that did not exhibit the issue (if applicable):
* Desktop OS/version used to run Appium: macOS Sierra 10.12
* Node.js version (unless using Appium.app|exe): v7.6.0
* Mobile platform/version under test: iOS simulator 10.3 (IPHONE 6S)
* Real device or emulator/simulator: simulator
* Appium CLI or Appium.app|exe: appium clii
Thanks a lot for your help
|
non_process
|
question about clear safari cookies for mobile web the problem i cannot use deleteallcookies or deletecookie ruby lib appium to delete the cookies after i open the web in safari on ios iphone simulator through appium xcuitest driver the reason is i don t want to restart ios simulator after each scenario thus i set noreset to true however if i run test about user login so the users cookies will always been there for other following scenarios so i d like to clear cookies at the very top of each scenario but it doesn t work should i post this question to selenium or here or do we have some options to clear the safari sessions but do not reset the simulator i already search a lot on internet but unfortunately no suitable answer environment appium version or git revision that exhibits the issue installed by npm last appium version that did not exhibit the issue if applicable desktop os version used to run appium macos sierra node js version unless using appium app exe mobile platform version under test ios simulator iphone real device or emulator simulator simulator appium cli or appium app exe appium clii thanks a lot for your help
| 0
|
106,139
| 13,247,729,402
|
IssuesEvent
|
2020-08-19 17:44:54
|
tokio-rs/tracing
|
https://api.github.com/repos/tokio-rs/tracing
|
closed
|
fmt::Subscribers don't want their output to be captured
|
crate/subscriber kind/feature needs/design
|
## Bug Report
<!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
-->
### Version
```
"checksum tracing 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)" = "1721cc8cf7d770cc4257872507180f35a4797272f5962f24c806af9e7faf52ab"
"checksum tracing-attributes 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)" = "7fbad39da2f9af1cae3016339ad7f2c7a9e870f12e8fd04c4fd7ef35b30c0d2b"
"checksum tracing-core 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "0aa83a9a47081cd522c09c81b31aec2c9273424976f922ad61c053b58350b715"
"checksum tracing-log 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "5e0f8c7178e13481ff6765bd169b33e8d554c5d2bbede5e32c356194be02b9b9"
"checksum tracing-serde 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b6ccba2f8f16e0ed268fc765d9b7ff22e965e7185d32f8f1ec8294fe17d86e79"
"checksum tracing-subscriber 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "dedebcf5813b02261d6bab3a12c6a8ae702580c0405a2e8ec16c3713caf14c20"
```
### Platform
Linux localhost 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
### Crates
`tracing-subscriber`
### Description
I cannot get the output of a `fmt::Subscriber` to coöperate with settings that want to capture or redirect stdio. For example, the tracing messages all show through the test runner output of `cargo test`, even when the `--nocapture` flag is not set.
I have tried to even make my own `MakeWriter` that calls `eprintln!()` directly, but that only seems to hide some of the output. It would be nice to have an easy way to clean up test output short of disabling the output entirely, since it's nice to have the output there in case a test fails.
|
1.0
|
fmt::Subscribers don't want their output to be captured - ## Bug Report
<!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
-->
### Version
```
"checksum tracing 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)" = "1721cc8cf7d770cc4257872507180f35a4797272f5962f24c806af9e7faf52ab"
"checksum tracing-attributes 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)" = "7fbad39da2f9af1cae3016339ad7f2c7a9e870f12e8fd04c4fd7ef35b30c0d2b"
"checksum tracing-core 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "0aa83a9a47081cd522c09c81b31aec2c9273424976f922ad61c053b58350b715"
"checksum tracing-log 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "5e0f8c7178e13481ff6765bd169b33e8d554c5d2bbede5e32c356194be02b9b9"
"checksum tracing-serde 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b6ccba2f8f16e0ed268fc765d9b7ff22e965e7185d32f8f1ec8294fe17d86e79"
"checksum tracing-subscriber 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "dedebcf5813b02261d6bab3a12c6a8ae702580c0405a2e8ec16c3713caf14c20"
```
### Platform
Linux localhost 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
### Crates
`tracing-subscriber`
### Description
I cannot get the output of a `fmt::Subscriber` to coöperate with settings that want to capture or redirect stdio. For example, the tracing messages all show through the test runner output of `cargo test`, even when the `--nocapture` flag is not set.
I have tried to even make my own `MakeWriter` that calls `eprintln!()` directly, but that only seems to hide some of the output. It would be nice to have an easy way to clean up test output short of disabling the output entirely, since it's nice to have the output there in case a test fails.
|
non_process
|
fmt subscribers don t want their output to be captured bug report thank you for reporting an issue please fill in as much of the template below as you re able version checksum tracing registry checksum tracing attributes registry checksum tracing core registry checksum tracing log registry checksum tracing serde registry checksum tracing subscriber registry platform linux localhost generic ubuntu smp tue feb utc gnu linux crates tracing subscriber description i cannot get the output of a fmt subscriber to coöperate with settings that want to capture or redirect stdio for example the tracing messages all show through the test runner output of cargo test even when the nocapture flag is not set i have tried to even make my own makewriter that calls eprintln directly but that only seems to hide some of the output it would be nice to have an easy way to clean up test output short of disabling the output entirely since it s nice to have the output there in case a test fails
| 0
|
20,009
| 26,483,341,907
|
IssuesEvent
|
2023-01-17 16:09:29
|
StormSurgeLive/asgs
|
https://api.github.com/repos/StormSurgeLive/asgs
|
opened
|
Create post processing script that will compute output datetimes for use in metadata
|
enhancement incremental improvement metadata postprocessing
|
A useful addition to the netCDF and run.properties metadata would be an array of output datetimes for each output file (or a range for minmax files). We currently have the coldstart date in each place, but only arrays of time in seconds since coldstart in the output files, and nothing in the run.properties file. This would relieve a burden on downstream data consumers as well as avoid confusion about what time the first output dataset corresponds to (i.e., one output time increment after run start time).
|
1.0
|
Create post processing script that will compute output datetimes for use in metadata - A useful addition to the netCDF and run.properties metadata would be an array of output datetimes for each output file (or a range for minmax files). We currently have the coldstart date in each place, but only arrays of time in seconds since coldstart in the output files, and nothing in the run.properties file. This would relieve a burden on downstream data consumers as well as avoid confusion about what time the first output dataset corresponds to (i.e., one output time increment after run start time).
|
process
|
create post processing script that will compute output datetimes for use in metadata a useful addition to the netcdf and run properties metadata would be an array of output datetimes for each output file or a range for minmax files we currently have the coldstart date in each place but only arrays of time in seconds since coldstart in the output files and nothing in the run properties file this would relieve a burden on downstream data consumers as well as avoid confusion about what time the first output dataset corresponds to i e one output time increment after run start time
| 1
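The ASGS record above asks for absolute output datetimes derived from the coldstart date plus the "seconds since coldstart" offsets already stored in the output files. A minimal sketch of that conversion, with hypothetical names (this is not the actual ASGS post-processing interface):

```python
# Map ADCIRC-style "seconds since coldstart" offsets to absolute datetimes.
# Function and variable names here are illustrative assumptions.
from datetime import datetime, timedelta

def output_datetimes(coldstart, seconds_since_coldstart):
    """Return one absolute datetime per output offset (seconds)."""
    return [coldstart + timedelta(seconds=s) for s in seconds_since_coldstart]

coldstart = datetime(2023, 1, 1, 0, 0, 0)
times = output_datetimes(coldstart, [3600.0, 7200.0, 10800.0])
print(times[0].isoformat())  # 2023-01-01T01:00:00
```

For a minmax file, the same helper applied to the first and last offsets would give the datetime range the record mentions.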
|
49,568
| 10,372,679,398
|
IssuesEvent
|
2019-09-09 04:15:58
|
marbl/MetagenomeScope
|
https://api.github.com/repos/marbl/MetagenomeScope
|
closed
|
Convert collate script from python 2 to 3
|
codeissue futureideas
|
Since Python 2 is getting deprecated.
This is also needed for making this available as a Q2 plugin.
(Update: I ended up doing a lot of restructuring in this branch. There are a lot of changes.)
- [x] Update installation instructions re: python 3
- [x] Update installation instructions re: SPQR stuff (hack needed to create a py2 env, switch to that to build OGDF, then switch back to the py3 env for running mgsc—not too ugly, all things considered—this will allow us to hold off on #154 for a while)
- [x] Update acknowledgements/licenses re: use of futurize
- [x] Update acknowledgements/licenses re: style software used (flake8, black, prettier, jshint)
- [x] Update acknowledgements/licenses re: new testing software used (pytest-cov, nyc, CodeCov)
- [ ] Add some more tests now that all of this new infrastructure is in place and less rickety than before
|
1.0
|
Convert collate script from python 2 to 3 - Since Python 2 is getting deprecated.
This is also needed for making this available as a Q2 plugin.
(Update: I ended up doing a lot of restructuring in this branch. There are a lot of changes.)
- [x] Update installation instructions re: python 3
- [x] Update installation instructions re: SPQR stuff (hack needed to create a py2 env, switch to that to build OGDF, then switch back to the py3 env for running mgsc—not too ugly, all things considered—this will allow us to hold off on #154 for a while)
- [x] Update acknowledgements/licenses re: use of futurize
- [x] Update acknowledgements/licenses re: style software used (flake8, black, prettier, jshint)
- [x] Update acknowledgements/licenses re: new testing software used (pytest-cov, nyc, CodeCov)
- [ ] Add some more tests now that all of this new infrastructure is in place and less rickety than before
|
non_process
|
convert collate script from python to since python is getting deprecated this is also needed for making this available as a plugin update i ended up doing a lot of restructuring in this branch there are a lot of changes update installation instructions re python update installation instructions re spqr stuff hack needed to create a env switch to that to build ogdf then switch back to the env for running mgsc—not too ugly all things considered—this will allow us to hold off on for a while update acknowledgements licenses re use of futurize update acknowledgements licenses re style software used black prettier jshint update acknowledgements licenses re new testing software used pytest cov nyc codecov add some more tests now that all of this new infrastructure is in place and less rickety than before
| 0
|
201,401
| 15,193,695,773
|
IssuesEvent
|
2021-02-16 01:27:04
|
SteffeyDev/atemOSC
|
https://api.github.com/repos/SteffeyDev/atemOSC
|
closed
|
Hyperdeck Timecode Control Doesn't Recognize Frames
|
bug done-needs-testing
|
Greetings all:
Trick or Treat?
Is there some reason why the atemOSC clip-time timecode message for Hyperdecks does not recognize frames? In the atemOSC documentation clip-time is specified as 00:00:00 (hours, minutes, seconds), but in the Blackmagic Hyperdeck protocol documentation timecode is specified as 00:00:00:00 (hours, minutes, seconds, frames). When I send a clip-time message including frames, it just resolves back to the beginning of the clip (00:00:00).
Is there a workaround? Another way of doing this? I really need to trigger clip-time that includes frames for precise control.
I have tested on a Hyperdeck Studio and a Hyperdeck Mini.
Many thanks and Happy Halloween.
Randall
|
1.0
|
Hyperdeck Timecode Control Doesn't Recognize Frames - Greetings all:
Trick or Treat?
Is there some reason why the atemOSC clip-time timecode message for Hyperdecks does not recognize frames? In the atemOSC documentation clip-time is specified as 00:00:00 (hours, minutes, seconds), but in the Blackmagic Hyperdeck protocol documentation timecode is specified as 00:00:00:00 (hours, minutes, seconds, frames). When I send a clip-time message including frames, it just resolves back to the beginning of the clip (00:00:00).
Is there a workaround? Another way of doing this? I really need to trigger clip-time that includes frames for precise control.
I have tested on a Hyperdeck Studio and a Hyperdeck Mini.
Many thanks and Happy Halloween.
Randall
|
non_process
|
hyperdeck timecode control doesn t recognize frames greetings all trick or treat is there some reason why the atemosc clip time timecode message for hyperdecks does not recognize frames in the atemosc documentation clip time is specified as hours minutes seconds but in the blackmagic hyperdeck protocol documentation timecode is specified as hours minutes seconds frames when i send a clip time message including frames it just resolves back to the beginning of the clip is there a workaround another way of doing this i really need to trigger clip time that includes frames for precise control i have tested on a hyperdeck studio and a hyperdeck mini many thanks and happy halloween randall
| 0
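The HyperDeck record above hinges on the difference between an HH:MM:SS clip-time and the protocol's HH:MM:SS:FF timecode. A small sketch of the frame arithmetic involved, assuming a fixed non-drop frame rate (this is not the actual atemOSC message handler):

```python
# Convert an HH:MM:SS:FF timecode string to a total frame count.
# The fps default of 25 is an illustrative assumption.
def timecode_to_frames(tc, fps=25):
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    if ff >= fps:
        raise ValueError("frame field exceeds frame rate")
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

print(timecode_to_frames("00:01:00:12"))  # 1512 at 25 fps
```

Dropping the FF field, as the reported behavior does, is equivalent to rounding down to the whole-second frame boundary, which matches the symptom of the clip resolving back to 00:00:00-style positions.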
|
16,574
| 21,605,283,159
|
IssuesEvent
|
2022-05-04 01:29:49
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
RFC: policy configuration for execArgv
|
discuss feature request process security cli policy stale
|
<!--
Thank you for suggesting an idea to make Node.js better.
Please fill in as much of the template below as you're able.
-->
**Is your feature request related to a problem? Please describe.**
Please describe the problem you are trying to solve.
Node has various features to aid with defensive coding and how behaviors for unsafe features (deprecated / experimental) are able to be used. However, these must be configured for every CLI invocation in complete totality. It would be nice for a variety of flags to be set all at once in a single configuration.
Some flags like `--disallow-code-generation-from-strings` or `--jitless` are likely to be desirable to be unable to be altered from what a policy provides.
Some flags like `--require` likely want to preload things for security hardening but allow other things to be loaded like debugging utilities.
Other flags like `--max-http-header-size=size` might want a different default for an environment to be the policy but for emergencies allow for alteration via the CLI.
**Describe the solution you'd like**
Please describe the desired behavior.
Policies have an existing configuration system for configuring many values at once. It should be possible when using them to have an opt-out way to enforce various flags for secure defaults but at the same time a way to prevent opt-out of critical flags.
Policies should be able to intercept and alter the argv parser by adding a new field to the policy manifest: "execArgv". Under this field behavior for unlisted flags should be allowed to cause the process to be ignored, cause a policy exception, or be allowed in various forms. Likely this would look something like the following (bikeshed open to change):
```json5
{
"execArgv": {
// "allow" | "ignore" | "error", default => "error"
"unlistedFlags": "ignore",
"allowedFlagValues": {
// null creates policy violation when passing the value to the CLI
// avoid false since may be treated as a value? Weak argument
// need something like `true` per other parts of policy to allow any value
"disable-codegen-from-strings": null,
// allows using --require=/path/to/debug.js
"require": ["/path/to/debug.js"],
// don't think we need "sets" of config where condition X only allowed if Y is --required
// Y could test for X for most things?
"condition": ["debug"],
},
// some flags could have interleaved ordering, need to use an array
// example: proposed --import (no PR by anyone yet) and --require
"defaultFlags": [
{
// this flag is a bit weird vs the --(no-)? convention
"flag": "disable-codegen-from-strings",
"value": true,
},
{
"flag": "require",
"value": "lockdown-core-apis"
}
]
}
}
```
**Describe alternatives you've considered**
Please describe alternative solutions or features you have considered.
I think we could create a separate out-of-band configuration for policies, but that seems like friction without reason.
----
I've added flags for APIs that would be visibly affected by this feature.
|
1.0
|
RFC: policy configuration for execArgv - <!--
Thank you for suggesting an idea to make Node.js better.
Please fill in as much of the template below as you're able.
-->
**Is your feature request related to a problem? Please describe.**
Please describe the problem you are trying to solve.
Node has various features to aid with defensive coding and how behaviors for unsafe features (deprecated / experimental) are able to be used. However, these must be configured for every CLI invocation in complete totality. It would be nice for a variety of flags to be set all at once in a single configuration.
Some flags like `--disallow-code-generation-from-strings` or `--jitless` are likely to be desirable to be unable to be altered from what a policy provides.
Some flags like `--require` likely want to preload things for security hardening but allow other things to be loaded like debugging utilities.
Other flags like `--max-http-header-size=size` might want a different default for an environment to be the policy but for emergencies allow for alteration via the CLI.
**Describe the solution you'd like**
Please describe the desired behavior.
Policies have an existing configuration system for configuring many values at once. It should be possible when using them to have an opt-out way to enforce various flags for secure defaults but at the same time a way to prevent opt-out of critical flags.
Policies should be able to intercept and alter the argv parser by adding a new field to the policy manifest: "execArgv". Under this field behavior for unlisted flags should be allowed to cause the process to be ignored, cause a policy exception, or be allowed in various forms. Likely this would look something like the following (bikeshed open to change):
```json5
{
"execArgv": {
// "allow" | "ignore" | "error", default => "error"
"unlistedFlags": "ignore",
"allowedFlagValues": {
// null creates policy violation when passing the value to the CLI
// avoid false since may be treated as a value? Weak argument
// need something like `true` per other parts of policy to allow any value
"disable-codegen-from-strings": null,
// allows using --require=/path/to/debug.js
"require": ["/path/to/debug.js"],
// don't think we need "sets" of config where condition X only allowed if Y is --required
// Y could test for X for most things?
"condition": ["debug"],
},
// some flags could have interleaved ordering, need to use an array
// example: proposed --import (no PR by anyone yet) and --require
"defaultFlags": [
{
// this flag is a bit weird vs the --(no-)? convention
"flag": "disable-codegen-from-strings",
"value": true,
},
{
"flag": "require",
"value": "lockdown-core-apis"
}
]
}
}
```
**Describe alternatives you've considered**
Please describe alternative solutions or features you have considered.
I think we could create a separate out-of-band configuration for policies, but that seems like friction without reason.
----
I've added flags for APIs that would be visibly affected by this feature.
|
process
|
rfc policy configuration for execargv thank you for suggesting an idea to make node js better please fill in as much of the template below as you re able is your feature request related to a problem please describe please describe the problem you are trying to solve node has various features to aid with defensive coding and how behaviors for unsafe features deprecated experimental are able to be used however these must be configured for every cli invocation in complete totality it would be nice for a variety of flags to be set all at once in a single configuration some flags like disallow code generation from strings or jitless are likely to be desirable to be unable to be altered from what a policy provides some flags like require likely want to preload things for security hardening but allow other things to be loaded like debugging utilities other flags like max http header size size might want a different default for an environment to be the policy but for emergencies allow for alteration via the cli describe the solution you d like please describe the desired behavior policies have an existing configuration system for configuring many values at once it should be possible when using them to have an opt out way to enforce various flags for secure defaults but at the same time a way to prevent opt out of critical flags policies should be able to intercept and alter the argv parser by adding a new field to the policy manifest execargv under this field behavior for unlisted flags should be allowed to cause the process to be ignored cause a policy exception or be allowed in various forms likely this would look something like the following bikeshed open to change execargv allow ignore error default error unlistedflags ignore allowedflagvalues null creates policy violation when passing the value to the cli avoid false since may be treated as a value weak argument need something like true per other parts of policy to allow any value disable codegen from strings null 
allows using require path to debug js require don t think we need sets of config where condition x only allowed if y is required y could test for x for most things condition some flags could have interleaved ordering need to use an array example proposed import no pr by anyone yet and require defaultflags this flag is a bit weird vs the no convention flag disable codegen from strings value true flag require value lockdown core apis describe alternatives you ve considered please describe alternative solutions or features you have considered i think we could create a separate out of band configuration for policies but this seems friction would be without reason i ve added flags for apis that would be visibly affected by this features
| 1
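The execArgv manifest in the record above can be read as a per-flag decision table: unlisted flags follow `unlistedFlags`, listed flags are checked against `allowedFlagValues`, and `null` means any use is a violation. A minimal sketch of that lookup, with names mirroring the RFC's bikeshed (not a real Node.js API):

```python
# Toy model of the proposed policy semantics for one --flag=value pair.
# Returns "allow", "ignore", or "error"; all names are assumptions.
POLICY = {
    "unlistedFlags": "ignore",
    "allowedFlagValues": {
        "disable-codegen-from-strings": None,  # null: any use is a violation
        "require": ["/path/to/debug.js"],      # only these values allowed
    },
}

def check_flag(policy, flag, value):
    allowed = policy["allowedFlagValues"]
    if flag not in allowed:
        return policy["unlistedFlags"]  # "allow" | "ignore" | "error"
    permitted = allowed[flag]
    if permitted is None:
        return "error"
    return "allow" if value in permitted else "error"

print(check_flag(POLICY, "require", "/path/to/debug.js"))  # allow
```

The `defaultFlags` array would then be prepended before this check runs, which is why it needs ordered entries rather than a map.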
|
262,135
| 8,251,257,330
|
IssuesEvent
|
2018-09-12 07:10:53
|
vmware/vic-product
|
https://api.github.com/repos/vmware/vic-product
|
closed
|
Document automated installation/upgrade of the vSphere Client plug-in
|
area/pub area/pub/vsphere priority/p0 product/ova
|
Per https://github.com/vmware/vic-product/issues/1432.
@renmaosheng if the process is completely automated in 1.4.3, even though the docs apply to all 1.4.x releases (including 1.4.0, 1.4.1, and 1.4.2), can we completely remove the topics on installing and upgrading the client plug-in? After all, once 1.4.3 is available, why would anyone be doing a fresh install of, or upgrading to, ≤1.4.2?
|
1.0
|
Document automated installation/upgrade of the vSphere Client plug-in - Per https://github.com/vmware/vic-product/issues/1432.
@renmaosheng if the process is completely automated in 1.4.3, even though the docs apply to all 1.4.x releases (including 1.4.0, 1.4.1, and 1.4.2), can we completely remove the topics on installing and upgrading the client plug-in? After all, once 1.4.3 is available, why would anyone be doing a fresh install of, or upgrading to, ≤1.4.2?
|
non_process
|
document automated installation upgrade of the vsphere client plug in per renmaosheng if the process is completely automated in even though the docs apply to all x releases including and can we completely remove the topics on installing and upgrading the client plug in afterall once is available why would anyone be doing a fresh install of or upgrading to ≤
| 0
|
17,068
| 23,544,145,278
|
IssuesEvent
|
2022-08-20 21:36:34
|
haubna/PhysicsMod
|
https://api.github.com/repos/haubna/PhysicsMod
|
closed
|
[Crash] Medieval weapons projectile crash on collision
|
compatibility
|
When the Medieval Weapons mod is installed, throwing a javelin crashes the game.
https://gist.github.com/CasualJeremy/f1801239892b2057cc434f539338a19a
I put the error into the Medieval Weapons issues GitHub but they believe it is happening from this end.
"Looks like better physics tries to cast the javelin entity to a living entity with a mixin."
|
True
|
[Crash] Medieval weapons projectile crash on collision - When the Medieval Weapons mod is installed, throwing a javelin crashes the game.
https://gist.github.com/CasualJeremy/f1801239892b2057cc434f539338a19a
I put the error into the Medieval Weapons issues GitHub but they believe it is happening from this end.
"Looks like better physics tries to cast the javelin entity to a living entity with a mixin."
|
non_process
|
medieval weapons projectile crash on collision when the medieval weapons mod is installed throwing a javelin crashes the game i put the error into the medieval weapons issues github but they believe it is happening from this end looks like better physics tries to cast the javelin entity to a living entity with a mixin
| 0
|
22,292
| 30,844,056,648
|
IssuesEvent
|
2023-08-02 12:40:30
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
reopened
|
Removing a custom column used in the grouping crashes the query builder
|
Type:Bug Priority:P1 .Frontend .Regression/master .Team/QueryProcessor :hammer_and_wrench:
|
### Describe the bug
The query builder crashes when I remove a custom column from an existing question. That custom column is neither displayed nor used for sorting in the output/results but is used in the grouping.
### To Reproduce
1. Create a new question using the Orders sample table
2. Add a custom column called "Is Promotion": case([Discount] > 0, 1, 0)
3. Group by the custom column and count the number of distinct order IDs
4. Remove the custom column
5. Query builder crashes
<img width="1512" alt="Screenshot 2023-07-25 at 6 19 35 PM" src="https://github.com/metabase/metabase/assets/10627150/d7cd0ce6-4794-4875-a2e4-cc7a93efc407">
### Expected behavior
The custom column should just be dropped and the question should not group by the removed column anymore.
### Logs
<img width="1512" alt="Screenshot 2023-07-25 at 6 19 18 PM" src="https://github.com/metabase/metabase/assets/10627150/7740c0c3-d203-4854-a467-e65d5ec8fc6c">
### Information about your Metabase installation
```JSON
{
"browser-info": {
"language": "en-GB",
"platform": "MacIntel",
"userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"vendor": "Google Inc."
},
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.19+7",
"java.vendor": "Eclipse Adoptium",
"java.vendor.url": "https://adoptium.net/",
"java.version": "11.0.19",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.19+7",
"os.name": "Linux",
"os.version": "5.10.184-175.731.amzn2.x86_64",
"user.language": "en",
"user.timezone": "GMT"
},
"metabase-info": {
"databases": [
"druid",
"redshift",
"mysql",
"bigquery-cloud-sdk",
"postgres",
"mongo",
"h2"
],
"hosting-env": "unknown",
"application-database": "postgres",
"application-database-details": {
"database": {
"name": "PostgreSQL",
"version": "14.7"
},
"jdbc-driver": {
"name": "PostgreSQL JDBC Driver",
"version": "42.5.4"
}
},
"run-mode": "prod",
"version": {
"date": "2023-07-25",
"tag": "vUNKNOWN",
"branch": "master",
"hash": "14a24c5"
},
"settings": {
"report-timezone": "US/Pacific"
}
}
}
```
### Severity
Annoying but there is a workaround by removing the grouping first
### Additional context
_No response_
|
1.0
|
Removing a custom column used in the grouping crashes the query builder - ### Describe the bug
The query builder crashes when I remove a custom column from an existing question. That custom column is neither displayed nor used for sorting in the output/results but is used in the grouping.
### To Reproduce
1. Create a new question using the Orders sample table
2. Add a custom column called "Is Promotion": case([Discount] > 0, 1, 0)
3. Group by the custom column and count the number of distinct order IDs
4. Remove the custom column
5. Query builder crashes
<img width="1512" alt="Screenshot 2023-07-25 at 6 19 35 PM" src="https://github.com/metabase/metabase/assets/10627150/d7cd0ce6-4794-4875-a2e4-cc7a93efc407">
### Expected behavior
The custom column should just be dropped and the question should not group by the removed column anymore.
### Logs
<img width="1512" alt="Screenshot 2023-07-25 at 6 19 18 PM" src="https://github.com/metabase/metabase/assets/10627150/7740c0c3-d203-4854-a467-e65d5ec8fc6c">
### Information about your Metabase installation
```JSON
{
"browser-info": {
"language": "en-GB",
"platform": "MacIntel",
"userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"vendor": "Google Inc."
},
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.19+7",
"java.vendor": "Eclipse Adoptium",
"java.vendor.url": "https://adoptium.net/",
"java.version": "11.0.19",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.19+7",
"os.name": "Linux",
"os.version": "5.10.184-175.731.amzn2.x86_64",
"user.language": "en",
"user.timezone": "GMT"
},
"metabase-info": {
"databases": [
"druid",
"redshift",
"mysql",
"bigquery-cloud-sdk",
"postgres",
"mongo",
"h2"
],
"hosting-env": "unknown",
"application-database": "postgres",
"application-database-details": {
"database": {
"name": "PostgreSQL",
"version": "14.7"
},
"jdbc-driver": {
"name": "PostgreSQL JDBC Driver",
"version": "42.5.4"
}
},
"run-mode": "prod",
"version": {
"date": "2023-07-25",
"tag": "vUNKNOWN",
"branch": "master",
"hash": "14a24c5"
},
"settings": {
"report-timezone": "US/Pacific"
}
}
}
```
### Severity
Annoying but there is a workaround by removing the grouping first
### Additional context
_No response_
|
process
|
removing a custom column used in the grouping crashes the query builder describe the bug the query builder crashes when i remove a custom column from an existing question that custom column is neither displayed nor used for sorting in the output results but is used in the grouping to reproduce create a new question using the orders sample table add a custom column called is promotion case group by the custom column and count the number of distinct order ids remove the custom column query builder crashes img width alt screenshot at pm src expected behavior the custom column should just be dropped and the question should not group by the removed column anymore logs img width alt screenshot at pm src information about your metabase installation json browser info language en gb platform macintel useragent mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari vendor google inc system info file encoding utf java runtime name openjdk runtime environment java runtime version java vendor eclipse adoptium java vendor url java version java vm name openjdk bit server vm java vm version os name linux os version user language en user timezone gmt metabase info databases druid redshift mysql bigquery cloud sdk postgres mongo hosting env unknown application database postgres application database details database name postgresql version jdbc driver name postgresql jdbc driver version run mode prod version date tag vunknown branch master hash settings report timezone us pacific severity annoying but there is a workaround by removing the grouping first additional context no response
| 1
|
176,933
| 28,302,825,817
|
IssuesEvent
|
2023-04-10 08:00:55
|
bounswe/bounswe2023group5
|
https://api.github.com/repos/bounswe/bounswe2023group5
|
closed
|
Designing Class Diagram: Search
|
Priority: High Type: Design Status: In Review
|
### Description
The class diagrams are shared among the team members; I am responsible for the design of the class diagram of Search.
### 👮♀️ Reviewer
Halis Bal
### ⏰ Deadline
08.04.2023 - Saturday - 23:59
|
1.0
|
Designing Class Diagram: Search - ### Description
The class diagrams are shared among the team members; I am responsible for the design of the class diagram of Search.
### 👮♀️ Reviewer
Halis Bal
### ⏰ Deadline
08.04.2023 - Saturday - 23:59
|
non_process
|
designing class diagram search description the class diagrams are shared among the team members i am responsible of the design of the class diagram of search 👮♀️ reviewer halis bal ⏰ deadline saturday
| 0
|
202,559
| 15,286,994,782
|
IssuesEvent
|
2021-02-23 15:17:58
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: kv95/enc=false/nodes=1 failed
|
C-test-failure O-roachtest O-robot branch-release-20.2 release-blocker
|
[(roachtest).kv95/enc=false/nodes=1 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46):
```
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) output in run_080546.376_n2_workload_run_kv
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657159-1612856700-16-n2cpu8:2 -- ./workload run kv --init --histograms=perf/stats.json --concurrency=64 --splits=1000 --duration=10m0s --read-percent=95 {pgurl:1-1} returned
| stderr:
| ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload)
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 2. Command with error:
| | ```
| | ./workload run kv --init --histograms=perf/stats.json --concurrency=64 --splits=1000 --duration=10m0s --read-percent=95 {pgurl:1-1}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
cluster.go:2654,kv.go:97,kv.go:184,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650
| main.registerKV.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:97
| main.registerKV.func3
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:184
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/kv95/enc=false/nodes=1](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=artifacts#/kv95/enc=false/nodes=1)
Related:
- #59919 roachtest: kv95/enc=false/nodes=1/cpu=32 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #59917 roachtest: kv95/enc=false/nodes=1 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv95%2Fenc%3Dfalse%2Fnodes%3D1.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: kv95/enc=false/nodes=1 failed - [(roachtest).kv95/enc=false/nodes=1 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46):
```
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) output in run_080546.376_n2_workload_run_kv
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657159-1612856700-16-n2cpu8:2 -- ./workload run kv --init --histograms=perf/stats.json --concurrency=64 --splits=1000 --duration=10m0s --read-percent=95 {pgurl:1-1} returned
| stderr:
| ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload)
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 2. Command with error:
| | ```
| | ./workload run kv --init --histograms=perf/stats.json --concurrency=64 --splits=1000 --duration=10m0s --read-percent=95 {pgurl:1-1}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
cluster.go:2654,kv.go:97,kv.go:184,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650
| main.registerKV.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:97
| main.registerKV.func3
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:184
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/kv95/enc=false/nodes=1](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=artifacts#/kv95/enc=false/nodes=1)
Related:
- #59919 roachtest: kv95/enc=false/nodes=1/cpu=32 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #59917 roachtest: kv95/enc=false/nodes=1 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv95%2Fenc%3Dfalse%2Fnodes%3D1.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
roachtest enc false nodes failed on runtime goexit usr local go src runtime asm s wraps output in run workload run kv wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload run kv init histograms perf stats json concurrency splits duration read percent pgurl returned stderr workload lib linux gnu libm so version glibc not found required by workload error command problem exit status command problem wraps node command with error workload run kv init histograms perf stats json concurrency splits duration read percent pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror cluster go kv go kv go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerkv home agent work go src github com cockroachdb cockroach pkg cmd roachtest kv go main registerkv home agent work go src github com cockroachdb cockroach pkg cmd roachtest kv go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack 
errutil leaferror more artifacts related roachtest enc false nodes cpu failed roachtest enc false nodes failed powered by
| 0
|
32,002
| 13,727,043,898
|
IssuesEvent
|
2020-10-04 03:52:56
|
invertase/react-native-firebase
|
https://api.github.com/repos/invertase/react-native-firebase
|
closed
|
🔥`new OAuthProvider()` is not supported on the native Firebase SDKs. (Even though it is)
|
Service: Authentication Type: Stale
|
<!---
NOTE: We have no support in place for using React Native Firebase in Expo applications (ejected or otherwise).
If you are seeing an issue, it may most likely not be an issue with React Native Firebase itself, but with the Expo runtime or with an incorrect React Native Firebase setup. For support on how to use Firebase with Expo, you should contact the Expo team or the Expo community.
General Expo issues are no longer be allowed on the React Native Firebase issue tracker. If you've investigated the Expo runtime or your app and found a genuine issue with React Native Firebase, please continue to open an issue.
--->
<!---
1) For feature requests please visit our [Feature Request Board](https://boards.invertase.io/react-native-firebase).
2) For questions and support please use our Discord chat: https://discord.gg/C9aK28N or Stack Overflow: https://stackoverflow.com/questions/tagged/react-native-firebase
3) If this is a setup issue then please make sure you've correctly followed the setup guides, most setup issues such as 'duplicate dex files', 'default app has not been initialized' etc are all down to an incorrect setup as the guides haven't been correctly followed.
-->
<!-- NOTE: You can change any of the `[ ]` to `[x]` to mark an option(s) as selected -->
<!-- PLEASE DO NOT REMOVE ANY SECTIONS FROM THIS ISSUE TEMPLATE -->
<!-- Leave them as they are even if they're irrelevant to your issue -->
## Issue
<!-- Please describe your issue here and provide as much detail as you can. -->
<!-- Include code snippets that show your usages of the library in the context of your project. -->
<!-- Snippets that also show how and where the library is imported in JS are useful to debug issues relating to importing or methods not found issues -->
Getting `` `new OAuthProvider()` is not supported on the native Firebase SDKs. `` when running `var provider = new firebase.auth.OAuthProvider('zoom.com');`.
That claim isn't true of the native SDKs themselves: both the Android and iOS Firebase SDKs expose an `OAuthProvider` class. See [here](https://firebase.google.com/docs/reference/android/com/google/firebase/auth/OAuthProvider) and [here](https://firebase.google.com/docs/reference/ios/firebaseauth/api/reference/Classes/FIROAuthProvider).
This is likely related to this check in your unit tests (along with a possibly missing implementation):
```
describe('OAuthProvider', () => {
describe('constructor', () => {
it('should throw an unsupported error', () => {
(() => new firebase.auth.OAuthProvider()).should.throw(
'`new OAuthProvider()` is not supported on the native Firebase SDKs.',
);
});
});
```
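Since the constructor throws in react-native-firebase, generic OAuth sign-in is typically driven through a credential object handed to `firebase.auth().signInWithCredential(...)` rather than a provider instance. The helper below is a hypothetical standalone sketch of that credential shape — `oauthCredential`, its field names, and the `'zoom.com'` provider id are illustrative assumptions, not the library's actual API:

```javascript
// Hypothetical sketch: NOT a react-native-firebase API. It only mirrors the
// { providerId, token, secret } shape an OAuth credential carries, so the
// sign-in flow can be illustrated without the native module installed.
function oauthCredential(providerId, token, secret) {
  if (!providerId || !token) {
    throw new Error('providerId and an OAuth access token are required.');
  }
  return { providerId, token, secret: secret ?? null };
}

const credential = oauthCredential('zoom.com', 'sample-access-token');
console.log(credential.providerId); // 'zoom.com'
// In an app, a credential like this would be passed to
// firebase.auth().signInWithCredential(credential) instead of calling
// `new firebase.auth.OAuthProvider(...)`.
```
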
---
## Project Files
<!-- Provide the contents of key project files which will help to debug -->
<!-- For Example: -->
<!-- - iOS: `Podfile` contents. -->
<!-- - Android: `android/build.gradle` contents. -->
<!-- - Android: `android/app/build.gradle` contents. -->
<!-- - Android: `AndroidManifest.xml` contents. -->
<!-- ADD THE CONTENTS OF THE FILES IN THE PROVIDED CODE BLOCKS BELOW -->
### iOS
<details><summary>Click To Expand</summary>
<p>
#### `ios/Podfile`:
- [ ] I'm not using Pods
- [x] I'm using Pods and my Podfile looks like:
```ruby
#platform :ios, '10.0'
install! 'cocoapods', :disable_input_output_paths => true
require_relative '../node_modules/react-native-unimodules/cocoapods'
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'
target 'ReLearn' do
rnPrefix = "../node_modules/react-native"
use_native_modules!
#use_frameworks!
# Download pre-compiled Firestore library.
#pod 'FirebaseFirestore', :git => 'https://github.com/invertase/firestore-ios-sdk-frameworks.git', :tag => '6.25.0'
# React Native and its dependencies
pod 'FBLazyVector', :path => "#{rnPrefix}/Libraries/FBLazyVector"
pod 'FBReactNativeSpec', :path => "#{rnPrefix}/Libraries/FBReactNativeSpec"
pod 'RCTRequired', :path => "#{rnPrefix}/Libraries/RCTRequired"
pod 'RCTTypeSafety', :path => "#{rnPrefix}/Libraries/TypeSafety"
pod 'React', :path => "#{rnPrefix}/"
pod 'React-Core', :path => "#{rnPrefix}/"
pod 'React-CoreModules', :path => "#{rnPrefix}/React/CoreModules"
pod 'React-RCTActionSheet', :path => "#{rnPrefix}/Libraries/ActionSheetIOS"
pod 'React-RCTAnimation', :path => "#{rnPrefix}/Libraries/NativeAnimation"
pod 'React-RCTBlob', :path => "#{rnPrefix}/Libraries/Blob"
pod 'React-RCTImage', :path => "#{rnPrefix}/Libraries/Image"
pod 'React-RCTLinking', :path => "#{rnPrefix}/Libraries/LinkingIOS"
pod 'React-RCTNetwork', :path => "#{rnPrefix}/Libraries/Network"
pod 'React-RCTSettings', :path => "#{rnPrefix}/Libraries/Settings"
pod 'React-RCTText', :path => "#{rnPrefix}/Libraries/Text"
pod 'React-RCTVibration', :path => "#{rnPrefix}/Libraries/Vibration"
pod 'React-Core/RCTWebSocket', :path => "#{rnPrefix}/"
pod 'React-Core/DevSupport', :path => "#{rnPrefix}/"
pod 'React-cxxreact', :path => "#{rnPrefix}/ReactCommon/cxxreact"
pod 'React-jsi', :path => "#{rnPrefix}/ReactCommon/jsi"
pod 'React-jsiexecutor', :path => "#{rnPrefix}/ReactCommon/jsiexecutor"
pod 'React-jsinspector', :path => "#{rnPrefix}/ReactCommon/jsinspector"
pod 'ReactCommon/jscallinvoker', :path => "#{rnPrefix}/ReactCommon"
pod 'ReactCommon/turbomodule/core', :path => "#{rnPrefix}/ReactCommon"
pod 'Yoga', :path => "#{rnPrefix}/ReactCommon/yoga"
pod 'DoubleConversion', :podspec => "#{rnPrefix}/third-party-podspecs/DoubleConversion.podspec"
pod 'glog', :podspec => "#{rnPrefix}/third-party-podspecs/glog.podspec"
pod 'Folly', :podspec => "#{rnPrefix}/third-party-podspecs/Folly.podspec"
# Permission Managers
permissions_path = '../node_modules/react-native-permissions/ios'
pod 'Permission-BluetoothPeripheral', :path => "#{permissions_path}/BluetoothPeripheral.podspec"
pod 'Permission-Microphone', :path => "#{permissions_path}/Microphone.podspec"
pod 'Permission-SpeechRecognition', :path => "#{permissions_path}/SpeechRecognition.podspec"
# Other native modules
#pod 'RNGestureHandler', :podspec => '../node_modules/react-native-gesture-handler/RNGestureHandler.podspec'
#pod 'RNReanimated', :podspec => '../node_modules/react-native-reanimated/RNReanimated.podspec'
#pod 'RNScreens', :path => '../node_modules/react-native-screens'
#pod 'react-native-appearance', :path => '../node_modules/react-native-appearance'
pod 'react-native-transcript', :path => '../node_modules/react-native-transcript'
# Automatically detect installed unimodules
require_relative '../node_modules/react-native-unimodules/cocoapods.rb'
use_unimodules!(
modules_paths: ['../node_modules'],
exclude: [
'expo-bluetooth',
'expo-in-app-purchases',
'expo-payments-stripe',
],
)
end
post_install do |installer|
puts "Renaming logging functions"
root = File.dirname(installer.pods_project.path)
Dir.chdir(root);
Dir.glob("**/*.{h,cc,cpp,in}") {|filename|
filepath = root + "/" + filename
text = File.read(filepath)
addText = text.gsub!(/(?<!React)AddLogSink/, "ReactAddLogSink")
if addText
File.chmod(0644, filepath)
f = File.open(filepath, "w")
f.write(addText)
f.close
end
text2 = addText ? addText : text
removeText = text2.gsub!(/(?<!React)RemoveLogSink/, "ReactRemoveLogSink")
if removeText
File.chmod(0644, filepath)
f = File.open(filepath, "w")
f.write(removeText)
f.close
end
}
end
```
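The `post_install` hook above works around a duplicate-symbol clash by renaming `AddLogSink`/`RemoveLogSink` throughout the pod sources; the negative lookbehind `(?<!React)` keeps an already-prefixed `ReactAddLogSink` from being renamed twice. A minimal sketch of the same substitution (JavaScript lookbehind regexes behave like Ruby's `gsub!` pattern here):

```javascript
// Same rewrite the Ruby hook applies with gsub!(/(?<!React)AddLogSink/, ...):
// occurrences already preceded by "React" are left untouched.
const renameLogSinks = (src) =>
  src
    .replace(/(?<!React)AddLogSink/g, 'ReactAddLogSink')
    .replace(/(?<!React)RemoveLogSink/g, 'ReactRemoveLogSink');

console.log(renameLogSinks('glog::AddLogSink(s); ReactAddLogSink(s);'));
// → 'glog::ReactAddLogSink(s); ReactAddLogSink(s);'
```
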
#### `AppDelegate.m`:
```objc
#import <Firebase.h>
#import "AppDelegate.h"
#import <React/RCTBundleURLProvider.h>
#import <React/RCTRootView.h>
#import <TSBackgroundFetch/TSBackgroundFetch.h>
#import <RNCPushNotificationIOS.h>
#import <UserNotifications/UserNotifications.h>
#import <UMCore/UMModuleRegistry.h>
#import <UMReactNativeAdapter/UMNativeModulesProxy.h>
#import <UMReactNativeAdapter/UMModuleRegistryAdapter.h>
@implementation AppDelegate
@synthesize window = _window;
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
if ([FIRApp defaultApp] == nil) {
[FIRApp configure];
}
[FIRDatabase database].persistenceEnabled = YES;
// [react-native-background-fetch Setup]
// THIS ONE TSBackgroundFetch *fetch = [TSBackgroundFetch sharedInstance];
// [REQUIRED] Register for usual periodic background refresh events here:
// THIS ONE [fetch registerAppRefreshTask];
self.moduleRegistryAdapter = [[UMModuleRegistryAdapter alloc] initWithModuleRegistryProvider:[[UMModuleRegistryProvider alloc] init]];
RCTBridge *bridge = [[RCTBridge alloc] initWithDelegate:self launchOptions:launchOptions];
RCTRootView *rootView = [[RCTRootView alloc] initWithBridge:bridge moduleName:@"ReLearn" initialProperties:nil];
rootView.backgroundColor = [[UIColor alloc] initWithRed:1.0f green:1.0f blue:1.0f alpha:1];
self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
UIViewController *rootViewController = [UIViewController new];
rootViewController.view = rootView;
self.window.rootViewController = rootViewController;
[self.window makeKeyAndVisible];
[super application:application didFinishLaunchingWithOptions:launchOptions];
return YES;
}
// Required to register for notifications
- (void)application:(UIApplication *)application didRegisterUserNotificationSettings:(UIUserNotificationSettings *)notificationSettings
{
[RNCPushNotificationIOS didRegisterUserNotificationSettings:notificationSettings];
}
// Required for the register event.
- (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken
{
[RNCPushNotificationIOS didRegisterForRemoteNotificationsWithDeviceToken:deviceToken];
}
// Required for the notification event. You must call the completion handler after handling the remote notification.
- (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo
fetchCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler
{
[RNCPushNotificationIOS didReceiveRemoteNotification:userInfo fetchCompletionHandler:completionHandler];
}
// Required for the registrationError event.
- (void)application:(UIApplication *)application didFailToRegisterForRemoteNotificationsWithError:(NSError *)error
{
[RNCPushNotificationIOS didFailToRegisterForRemoteNotificationsWithError:error];
}
// Required for the localNotification event.
- (void)application:(UIApplication *)application didReceiveLocalNotification:(UILocalNotification *)notification
{
[RNCPushNotificationIOS didReceiveLocalNotification:notification];
}
- (NSArray<id<RCTBridgeModule>> *)extraModulesForBridge:(RCTBridge *)bridge
{
NSArray<id<RCTBridgeModule>> *extraModules = [_moduleRegistryAdapter extraModulesForBridge:bridge];
// You can inject any extra modules that you would like here, more information at:
// https://facebook.github.io/react-native/docs/native-modules-ios.html#dependency-injection
return extraModules;
}
- (NSURL *)sourceURLForBridge:(RCTBridge *)bridge {
#ifdef DEBUG
return [[RCTBundleURLProvider sharedSettings] jsBundleURLForBundleRoot:@"index" fallbackResource:nil];
#else
return [[NSBundle mainBundle] URLForResource:@"main" withExtension:@"jsbundle"];
#endif
}
@end
```
</p>
</details>
---
### Android
<details><summary>Click To Expand</summary>
<p>
#### Have you converted to AndroidX?
<!--- Mark any options that apply below -->
- [ ] my application is an AndroidX application?
- [ ] I am using `android/gradle.settings` `jetifier=true` for Android compatibility?
- [ ] I am using the NPM package `jetifier` for react-native compatibility?
#### `android/build.gradle`:
```groovy
// N/A
```
#### `android/app/build.gradle`:
```groovy
// N/A
```
#### `android/settings.gradle`:
```groovy
// N/A
```
#### `MainApplication.java`:
```java
// N/A
```
#### `AndroidManifest.xml`:
```xml
<!-- N/A -->
```
</p>
</details>
---
## Environment
<details><summary>Click To Expand</summary>
<p>
**`react-native info` output:**
<!-- Please run `react-native info` on your terminal and paste the contents into the code block below -->
```
System:
OS: macOS 10.15.5
CPU: (8) x64 Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz
Memory: 480.98 MB / 32.00 GB
Shell: 5.7.1 - /bin/zsh
Binaries:
Node: 14.4.0 - /usr/local/bin/node
npm: 6.14.4 - /usr/local/bin/npm
SDKs:
iOS SDK:
Platforms: iOS 13.5, DriverKit 19.0, macOS 10.15, tvOS 13.4, watchOS 6.2
IDEs:
Xcode: 11.5/11E608c - /usr/bin/xcodebuild
npmPackages:
react: ~16.9.0 => 16.9.0
react-native: ~0.61.4 => 0.61.5
npmGlobalPackages:
react-native-cli: 2.0.1
```
<!-- change `[ ]` to `[x]` to select an option(s) -->
- **Platform that you're experiencing the issue on**:
- [ ] iOS
- [ ] Android
- [x] **iOS** but have not tested behavior on Android
- [ ] **Android** but have not tested behavior on iOS
- [ ] Both
- **`react-native-firebase` version you're using that has this issue:**
"@react-native-firebase/app": "^8.2.0",
"@react-native-firebase/auth": "^8.2.0",
- **`Firebase` module(s) you're using that has the issue:**
- Auth
- **Are you using `TypeScript`?**
- `N`
- **Are you using `Expo`?**
- `Y` & `36.0.0`
</p>
</details>
<!-- Thanks for reading this far down ❤️ -->
<!-- High quality, detailed issues are much easier to triage for maintainers -->
<!-- For bonus points, if you put a 🔥 (:fire:) emojii at the start of the issue title we'll know -->
<!-- that you took the time to fill this out correctly, or, at least read this far -->
---
- 👉 Check out [`React Native Firebase`](https://twitter.com/rnfirebase) and [`Invertase`](https://twitter.com/invertaseio) on Twitter for updates on the library.
|
1.0
|
|
non_process
|
didregisterforremotenotificationswithdevicetoken nsdata devicetoken required for the notification event you must call the completion handler after handling the remote notification void application uiapplication application didreceiveremotenotification nsdictionary userinfo fetchcompletionhandler void uibackgroundfetchresult completionhandler required for the registrationerror event void application uiapplication application didfailtoregisterforremotenotificationswitherror nserror error required for the localnotification event void application uiapplication application didreceivelocalnotification uilocalnotification notification nsarray extramodulesforbridge rctbridge bridge nsarray extramodules you can inject any extra modules that you would like here more information at return extramodules nsurl sourceurlforbridge rctbridge bridge ifdef debug return jsbundleurlforbundleroot index fallbackresource nil else return urlforresource main withextension jsbundle endif end android click to expand have you converted to androidx my application is an androidx application i am using android gradle settings jetifier true for android compatibility i am using the npm package jetifier for react native compatibility android build gradle groovy n a android app build gradle groovy n a android settings gradle groovy n a mainapplication java java n a androidmanifest xml xml environment click to expand react native info output system os macos cpu intel r core tm cpu memory mb gb shell bin zsh binaries node usr local bin node npm usr local bin npm sdks ios sdk platforms ios driverkit macos tvos watchos ides xcode usr bin xcodebuild npmpackages react react native npmglobalpackages react native cli platform that you re experiencing the issue on ios android ios but have not tested behavior on android android but have not tested behavior on ios both react native firebase version you re using that has this issue react native firebase app react native firebase auth firebase module s you re 
using that has the issue auth are you using typescript n are you using expo y 👉 check out and on twitter for updates on the library
| 0
|
630,795
| 20,118,080,282
|
IssuesEvent
|
2022-02-07 21:52:50
|
status-im/status-desktop
|
https://api.github.com/repos/status-im/status-desktop
|
closed
|
[base_bc] collections in OpenSea are not being loaded
|
bug Wallet priority 1: high
|
### Description

```
ERR 2022-01-28 13:41:49.421+03:00 error: topics="collectible-service" tid=4344574 file=service.nim:46 errDesription="\nstatus-go error [methodName:wallet_getOpenseaAssetsByOwnerAndCollection, code:-32000, message:invalid character \'e\' looking for beginning of value ]\n"
```
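The quoted message — `invalid character 'e' looking for beginning of value` — is Go's `encoding/json` failing because the OpenSea response body is not JSON at all, typically a plain-text or HTML error page whose first byte is a letter such as `e` (e.g. "error …"). A minimal sketch of the same defensive pattern, written in Python for illustration (function name and error wrapping are assumptions, not status-go's actual code):

```python
import json


def decode_opensea_response(body: str) -> dict:
    """Parse an OpenSea-style response defensively.

    If the body is not JSON (rate-limit page, HTML error, plain text),
    surface a prefix of the raw body instead of a bare parse error, so
    logs show *what* came back rather than only where parsing stopped.
    """
    try:
        return json.loads(body)
    except json.JSONDecodeError as exc:
        raise ValueError(f"non-JSON response {body[:40]!r}: {exc}") from exc
```

Logging the raw prefix makes failures like the one above immediately diagnosable: a body beginning with `"error: too many requests"` would appear verbatim in the error message instead of a cryptic character offset.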
|
1.0
|
[base_bc] collections in OpenSea are not being loaded - ### Description

```
ERR 2022-01-28 13:41:49.421+03:00 error: topics="collectible-service" tid=4344574 file=service.nim:46 errDesription="\nstatus-go error [methodName:wallet_getOpenseaAssetsByOwnerAndCollection, code:-32000, message:invalid character \'e\' looking for beginning of value ]\n"
```
|
non_process
|
collections in opensea are not being loaded description err error topics collectible service tid file service nim errdesription nstatus go error n
| 0
|
17,585
| 23,398,848,140
|
IssuesEvent
|
2022-08-12 05:05:50
|
spinalcordtoolbox/spinalcordtoolbox
|
https://api.github.com/repos/spinalcordtoolbox/spinalcordtoolbox
|
closed
|
Output csa perslice and distance from PMJ for each slices if `-distance` not specified
|
sct_process_segmentation feature API: aggregate_slicewise.py
|
## Context
This is a feature related to [Measure CSA based on distance from pontomedullary junction (PMJ)](https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/3429)
Currently, CSA is computed at a distance from PMJ and averaged across slices of corresponding extent.
## Suggestion
If the flag `-distance` is not specified, CSA could be computed per slice and the distance from PMJ could be specified at each slice in csa.csv.
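The suggested output could be sketched as follows. This is an illustrative Python sketch only — the function name, argument shapes, and CSV column headers are assumptions for clarity, not `sct_process_segmentation`'s actual API:

```python
import csv


def write_per_slice_csa(csa_per_slice, pmj_distance_per_slice, out_path):
    """Write one CSV row per axial slice with its CSA and PMJ distance.

    csa_per_slice: dict mapping slice index -> CSA in mm^2
    pmj_distance_per_slice: dict mapping slice index -> distance (mm)
        from the pontomedullary junction
    """
    rows = []
    for z in sorted(csa_per_slice):
        rows.append({
            "Slice": z,
            "CSA (mm^2)": csa_per_slice[z],
            "Distance from PMJ (mm)": pmj_distance_per_slice[z],
        })
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

With this shape, averaging over an extent (the current `-distance` behaviour) becomes a post-processing step over the per-slice rows rather than the only available output.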
|
1.0
|
Output csa perslice and distance from PMJ for each slices if `-distance` not specified - ## Context
This is a feature related to [Measure CSA based on distance from pontomedullary junction (PMJ)](https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/3429)
Currently, CSA is computed at a distance from PMJ and averaged across slices of corresponding extent.
## Suggestion
If the flag `-distance` is not specified, CSA could be computed per slice and the distance from PMJ could be specified at each slice in csa.csv.
|
process
|
output csa perslice and distance from pmj for each slices if distance not specified context this is a feature related to currently csa is computed at a distance from pmj and averaged across slices of corresponding extent suggestion if the flag distance is not specified csa could be computed per slice and the distance from pmj could be specified at each slice in csa csv
| 1
|