| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | 19 chars |
| repo | string | 7 – 112 chars |
| repo_url | string | 36 – 141 chars |
| action | string | 3 classes |
| title | string | 1 – 744 chars |
| labels | string | 4 – 574 chars |
| body | string | 9 – 211k chars |
| index | string | 10 classes |
| text_combine | string | 96 – 211k chars |
| label | string | 2 classes |
| text | string | 96 – 188k chars |
| binary_label | int64 | 0 – 1 |
10,160
| 13,044,162,646
|
IssuesEvent
|
2020-07-29 03:47:34
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AesDecryptIV` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AesDecryptIV` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `AesDecryptIV` from TiDB -
## Description
Port the scalar function `AesDecryptIV` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function aesdecryptiv from tidb description port the scalar function aesdecryptiv from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
19,729
| 26,078,957,293
|
IssuesEvent
|
2022-12-25 02:00:06
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 23 Dec 22
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### SHLE: Devices Tracking and Depth Filtering for Stereo-based Height Limit Estimation
- **Authors:** Zhaoxin Fan, Kaixing Yang, Min Zhang, Zhenbo Song, Hongyan Liu, Jun He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11538
- **Pdf link:** https://arxiv.org/pdf/2212.11538
- **Abstract**
Recently, over-height vehicle strike frequently occurs, causing great economic cost and serious safety problems. Hence, an alert system which can accurately discover any possible height limiting devices in advance is necessary to be employed in modern large or medium sized cars, such as touring cars. Detecting and estimating the height limiting devices act as the key point of a successful height limit alert system. Though there are some works research height limit estimation, existing methods are either too computational expensive or not accurate enough. In this paper, we propose a novel stereo-based pipeline named SHLE for height limit estimation. Our SHLE pipeline consists of two stages. In stage 1, a novel devices detection and tracking scheme is introduced, which accurately locate the height limit devices in the left or right image. Then, in stage 2, the depth is temporally measured, extracted and filtered to calculate the height limit device. To benchmark the height limit estimation task, we build a large-scale dataset named "Disparity Height", where stereo images, pre-computed disparities and ground-truth height limit annotations are provided. We conducted extensive experiments on "Disparity Height" and the results show that SHLE achieves an average error below than 10cm though the car is 70m away from the devices. Our method also outperforms all compared baselines and achieves state-of-the-art performance. Code is available at https://github.com/Yang-Kaixing/SHLE.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### IPProtect: protecting the intellectual property of visual datasets during data valuation
- **Authors:** Gursimran Singh, Chendi Wang, Ahnaf Tazwar, Lanjun Wang, Yong Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2212.11468
- **Pdf link:** https://arxiv.org/pdf/2212.11468
- **Abstract**
Data trading is essential to accelerate the development of data-driven machine learning pipelines. The central problem in data trading is to estimate the utility of a seller's dataset with respect to a given buyer's machine learning task, also known as data valuation. Typically, data valuation requires one or more participants to share their raw dataset with others, leading to potential risks of intellectual property (IP) violations. In this paper, we tackle the novel task of preemptively protecting the IP of datasets that need to be shared during data valuation. First, we identify and formalize two kinds of novel IP risks in visual datasets: data-item (image) IP and statistical (dataset) IP. Then, we propose a novel algorithm to convert the raw dataset into a sanitized version, that provides resistance to IP violations, while at the same time allowing accurate data valuation. The key idea is to limit the transfer of information from the raw dataset to the sanitized dataset, thereby protecting against potential intellectual property violations. Next, we analyze our method for the likely existence of a solution and immunity against reconstruction attacks. Finally, we conduct extensive experiments on three computer vision datasets demonstrating the advantages of our method in comparison to other baselines.
### Monocular 3D Object Detection using Multi-Stage Approaches with Attention and Slicing aided hyper inference
- **Authors:** Abonia Sojasingarayar, Ashish Patel
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.11804
- **Pdf link:** https://arxiv.org/pdf/2212.11804
- **Abstract**
3D object detection is vital as it would enable us to capture objects' sizes, orientation, and position in the world. As a result, we would be able to use this 3D detection in real-world applications such as Augmented Reality (AR), self-driving cars, and robotics which perceive the world the same way we do as humans. Monocular 3D Object Detection is the task to draw 3D bounding box around objects in a single 2D RGB image. It is localization task but without any extra information like depth or other sensors or multiple images. Monocular 3D object detection is an important yet challenging task. Beyond the significant progress in image-based 2D object detection, 3D understanding of real-world objects is an open challenge that has not been explored extensively thus far. In addition to the most closely related studies.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Fri, 23 Dec 22 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### SHLE: Devices Tracking and Depth Filtering for Stereo-based Height Limit Estimation
- **Authors:** Zhaoxin Fan, Kaixing Yang, Min Zhang, Zhenbo Song, Hongyan Liu, Jun He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11538
- **Pdf link:** https://arxiv.org/pdf/2212.11538
- **Abstract**
Recently, over-height vehicle strike frequently occurs, causing great economic cost and serious safety problems. Hence, an alert system which can accurately discover any possible height limiting devices in advance is necessary to be employed in modern large or medium sized cars, such as touring cars. Detecting and estimating the height limiting devices act as the key point of a successful height limit alert system. Though there are some works research height limit estimation, existing methods are either too computational expensive or not accurate enough. In this paper, we propose a novel stereo-based pipeline named SHLE for height limit estimation. Our SHLE pipeline consists of two stages. In stage 1, a novel devices detection and tracking scheme is introduced, which accurately locate the height limit devices in the left or right image. Then, in stage 2, the depth is temporally measured, extracted and filtered to calculate the height limit device. To benchmark the height limit estimation task, we build a large-scale dataset named "Disparity Height", where stereo images, pre-computed disparities and ground-truth height limit annotations are provided. We conducted extensive experiments on "Disparity Height" and the results show that SHLE achieves an average error below than 10cm though the car is 70m away from the devices. Our method also outperforms all compared baselines and achieves state-of-the-art performance. Code is available at https://github.com/Yang-Kaixing/SHLE.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### IPProtect: protecting the intellectual property of visual datasets during data valuation
- **Authors:** Gursimran Singh, Chendi Wang, Ahnaf Tazwar, Lanjun Wang, Yong Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2212.11468
- **Pdf link:** https://arxiv.org/pdf/2212.11468
- **Abstract**
Data trading is essential to accelerate the development of data-driven machine learning pipelines. The central problem in data trading is to estimate the utility of a seller's dataset with respect to a given buyer's machine learning task, also known as data valuation. Typically, data valuation requires one or more participants to share their raw dataset with others, leading to potential risks of intellectual property (IP) violations. In this paper, we tackle the novel task of preemptively protecting the IP of datasets that need to be shared during data valuation. First, we identify and formalize two kinds of novel IP risks in visual datasets: data-item (image) IP and statistical (dataset) IP. Then, we propose a novel algorithm to convert the raw dataset into a sanitized version, that provides resistance to IP violations, while at the same time allowing accurate data valuation. The key idea is to limit the transfer of information from the raw dataset to the sanitized dataset, thereby protecting against potential intellectual property violations. Next, we analyze our method for the likely existence of a solution and immunity against reconstruction attacks. Finally, we conduct extensive experiments on three computer vision datasets demonstrating the advantages of our method in comparison to other baselines.
### Monocular 3D Object Detection using Multi-Stage Approaches with Attention and Slicing aided hyper inference
- **Authors:** Abonia Sojasingarayar, Ashish Patel
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.11804
- **Pdf link:** https://arxiv.org/pdf/2212.11804
- **Abstract**
3D object detection is vital as it would enable us to capture objects' sizes, orientation, and position in the world. As a result, we would be able to use this 3D detection in real-world applications such as Augmented Reality (AR), self-driving cars, and robotics which perceive the world the same way we do as humans. Monocular 3D Object Detection is the task to draw 3D bounding box around objects in a single 2D RGB image. It is localization task but without any extra information like depth or other sensors or multiple images. Monocular 3D object detection is an important yet challenging task. Beyond the significant progress in image-based 2D object detection, 3D understanding of real-world objects is an open challenge that has not been explored extensively thus far. In addition to the most closely related studies.
## Keyword: raw image
There is no result
|
process
|
new submissions for fri dec keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp shle devices tracking and depth filtering for stereo based height limit estimation authors zhaoxin fan kaixing yang min zhang zhenbo song hongyan liu jun he subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract recently over height vehicle strike frequently occurs causing great economic cost and serious safety problems hence an alert system which can accurately discover any possible height limiting devices in advance is necessary to be employed in modern large or medium sized cars such as touring cars detecting and estimating the height limiting devices act as the key point of a successful height limit alert system though there are some works research height limit estimation existing methods are either too computational expensive or not accurate enough in this paper we propose a novel stereo based pipeline named shle for height limit estimation our shle pipeline consists of two stages in stage a novel devices detection and tracking scheme is introduced which accurately locate the height limit devices in the left or right image then in stage the depth is temporally measured extracted and filtered to calculate the height limit device to benchmark the height limit estimation task we build a large scale dataset named disparity height where stereo images pre computed disparities and ground truth height limit annotations are provided we conducted extensive experiments on disparity height and the results show that shle achieves an average error below than though the car is away from the devices our method also outperforms all compared baselines and achieves state of the art performance code is available at keyword image signal processing there is no result keyword image signal 
process there is no result keyword compression there is no result keyword raw ipprotect protecting the intellectual property of visual datasets during data valuation authors gursimran singh chendi wang ahnaf tazwar lanjun wang yong zhang subjects computer vision and pattern recognition cs cv cryptography and security cs cr arxiv link pdf link abstract data trading is essential to accelerate the development of data driven machine learning pipelines the central problem in data trading is to estimate the utility of a seller s dataset with respect to a given buyer s machine learning task also known as data valuation typically data valuation requires one or more participants to share their raw dataset with others leading to potential risks of intellectual property ip violations in this paper we tackle the novel task of preemptively protecting the ip of datasets that need to be shared during data valuation first we identify and formalize two kinds of novel ip risks in visual datasets data item image ip and statistical dataset ip then we propose a novel algorithm to convert the raw dataset into a sanitized version that provides resistance to ip violations while at the same time allowing accurate data valuation the key idea is to limit the transfer of information from the raw dataset to the sanitized dataset thereby protecting against potential intellectual property violations next we analyze our method for the likely existence of a solution and immunity against reconstruction attacks finally we conduct extensive experiments on three computer vision datasets demonstrating the advantages of our method in comparison to other baselines monocular object detection using multi stage approaches with attention and slicing aided hyper inference authors abonia sojasingarayar ashish patel subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract object detection is vital as it would enable us to capture objects sizes orientation 
and position in the world as a result we would be able to use this detection in real world applications such as augmented reality ar self driving cars and robotics which perceive the world the same way we do as humans monocular object detection is the task to draw bounding box around objects in a single rgb image it is localization task but without any extra information like depth or other sensors or multiple images monocular object detection is an important yet challenging task beyond the significant progress in image based object detection understanding of real world objects is an open challenge that has not been explored extensively thus far in addition to the most closely related studies keyword raw image there is no result
| 1
|
17,953
| 10,168,583,339
|
IssuesEvent
|
2019-08-07 21:12:43
|
w3c/json-ld-syntax
|
https://api.github.com/repos/w3c/json-ld-syntax
|
reopened
|
Consider context by reference with metadata
|
defer-future-version hr:privacy hr:security
|
On the call of 2018-12-14, we discussed (briefly) #20, #98, and #86. One realization that came out of the discussion was that we currently have two ways to refer to contexts - either by references as a single string (the URI of the context) or by value as a JSON object (the actual context). In order to have in-document metadata about the context, such as asserting fixity such as via the SRI specification, we would need to have a URI with additional metadata about it.
Questions that arise:
* How to distinguish between a context by value JSON object and a context by reference with metadata JSON object?
* As the version is in the context, and this functionality is only version 1.1, how would we signal the processing requirement - some sort of transclusion within a context that has the version?
* Are metadata properties extensible or fixed in the specification? If they're extensible, we would need some sort of meta-context wherein the mapping is asserted. If they're fixed, we would need to be very careful to accommodate the various use cases otherwise the tendency will be to simply add in new keys regardless and hope for the best.
For example:
```
{
  "@context": [
    "https://example.com/context-by-reference",
    {"id": "@id"},
    {"@version": 1.1, "@context": "https://example.com/context-with-metadata", "@sri": "sha256-abcd"}
  ]
}
```
|
True
|
Consider context by reference with metadata -
On the call of 2018-12-14, we discussed (briefly) #20, #98, and #86. One realization that came out of the discussion was that we currently have two ways to refer to contexts - either by references as a single string (the URI of the context) or by value as a JSON object (the actual context). In order to have in-document metadata about the context, such as asserting fixity such as via the SRI specification, we would need to have a URI with additional metadata about it.
Questions that arise:
* How to distinguish between a context by value JSON object and a context by reference with metadata JSON object?
* As the version is in the context, and this functionality is only version 1.1, how would we signal the processing requirement - some sort of transclusion within a context that has the version?
* Are metadata properties extensible or fixed in the specification? If they're extensible, we would need some sort of meta-context wherein the mapping is asserted. If they're fixed, we would need to be very careful to accommodate the various use cases otherwise the tendency will be to simply add in new keys regardless and hope for the best.
For example:
```
{
  "@context": [
    "https://example.com/context-by-reference",
    {"id": "@id"},
    {"@version": 1.1, "@context": "https://example.com/context-with-metadata", "@sri": "sha256-abcd"}
  ]
}
```
|
non_process
|
consider context by reference with metadata on the call of we discussed briefly and one realization that came out of the discussion was that we currently have two ways to refer to contexts either by references as a single string the uri of the context or by value as a json object the actual context in order to have in document metadata about the context such as asserting fixity such as via the sri specification we would need to have a uri with additional metadata about it questions that arise how to distinguish between a context by value json object and a context by reference with metadata json object as the version is in the context and this functionality is only version how would we signal the processing requirement some sort of transclusion within a context that has the version are metadata properties extensible or fixed in the specification if they re extensible we would need some sort of meta context wherein the mapping is asserted if they re fixed we would need to be very careful to accommodate the various use cases otherwise the tendency will be to simply add in new keys regardless and hope for the best for example context id id version context sri abcd
| 0
|
213,094
| 16,509,723,076
|
IssuesEvent
|
2021-05-26 01:26:16
|
p4gefau1t/trojan-go
|
https://api.github.com/repos/p4gefau1t/trojan-go
|
closed
|
[Feature Request] Homebrew formula (for easy installation on macOS)
|
documentation help wanted
|
I have written a [Homebrew formula](https://github.com/xiruizhao/homebrew-trojan-go) for trojan-go based on v2ray's formula.
You may invite others to test it first and add it to your README.
```
# Usage on macOS
# install
brew tap xiruizhao/trojan-go
brew install trojan-go
# start trojan-go and register it to launch at login
brew services start trojan-go
# for more commands run `brew services --help`
```
|
1.0
|
[Feature Request] Homebrew formula (for easy installation on macOS) - I have written a [Homebrew formula](https://github.com/xiruizhao/homebrew-trojan-go) for trojan-go based on v2ray's formula.
You may invite others to test it first and add it to your README.
```
# Usage on macOS
# install
brew tap xiruizhao/trojan-go
brew install trojan-go
# start trojan-go and register it to launch at login
brew services start trojan-go
# for more commands run `brew services --help`
```
|
non_process
|
homebrew formula for easy installation on macos i have written a for trojan go based on s formula you may invite others to test it first and add it to your readme usage on macos install brew tap xiruizhao trojan go brew install trojan go start trojan go and register it to launch at login brew services start trojan go for more commands run brew services help
| 0
|
10,156
| 13,044,162,613
|
IssuesEvent
|
2020-07-29 03:47:34
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `TiDBVersion` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `TiDBVersion` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `TiDBVersion` from TiDB -
## Description
Port the scalar function `TiDBVersion` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function tidbversion from tidb description port the scalar function tidbversion from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
6,451
| 9,546,352,692
|
IssuesEvent
|
2019-05-01 19:41:39
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Depoloyment clarification
|
automation/svc cxp process-automation/subsvc triaged
|
When deploying this via the portal, I had pre-created a log analytics workspace and an automation account, both in separate resource groups. When selecting the automation account and deploying the solution, it opens a new browser instance and I could filter on other subscriptions. I had to update the global subscription filter to include multiple subs before adding the solution. Then I ran into an issue whereby the automation account and log analytics workspace were required to be in the same resource group. Would be nice if the solution allowed for automation accounts and log analytics to be in separate RG's.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**
|
1.0
|
Depoloyment clarification - When deploying this via the portal, I had pre-created a log analytics workspace and an automation account, both in separate resource groups. When selecting the automation account and deploying the solution, it opens a new browser instance and I could filter on other subscriptions. I had to update the global subscription filter to include multiple subs before adding the solution. Then I ran into an issue whereby the automation account and log analytics workspace were required to be in the same resource group. Would be nice if the solution allowed for automation accounts and log analytics to be in separate RG's.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**
|
process
|
depoloyment clarification when deploying this via the portal i had pre created a log analytics workspace and an automation account both in separate resource groups when selecting the automation account and deploying the solution it opens a new browser instance and i could filter on other subscriptions i had to update the global subscription filter to include multiple subs before adding the solution then i ran into an issue whereby the automation account and log analytics workspace were required to be in the same resource group would be nice if the solution allowed for automation accounts and log analytics to be in separate rg s document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login georgewallace microsoft alias gwallace
| 1
|
18,627
| 24,579,749,574
|
IssuesEvent
|
2022-10-13 14:48:27
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] [PM] Unable to open consent pdf document from the browser
|
Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Login into the PM.
2. In participant details screen > Click on the consent pdf document.
3. Try to open pdf document from the browser and Verify.
**AR:** Unable to open consent pdf document from the browser.
**ER:** Pdf document should be open even from the browser.

|
3.0
|
[Consent API] [PM] Unable to open consent pdf document from the browser - **Steps:**
1. Login into the PM.
2. In participant details screen > Click on the consent pdf document.
3. Try to open pdf document from the browser and Verify.
**AR:** Unable to open consent pdf document from the browser.
**ER:** Pdf document should be open even from the browser.

|
process
|
unable to open consent pdf document from the browser steps login into the pm in participant details screen click on the consent pdf document try to open pdf document from the browser and verify ar unable to open consent pdf document from the browser er pdf document should be open even from the browser
| 1
|
20,323
| 26,963,857,842
|
IssuesEvent
|
2023-02-08 20:28:47
|
open-telemetry/opentelemetry-dotnet-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-dotnet-contrib
|
closed
|
[OpenTelemetry.Instrumentation.Process] Multiple provider dispose issue
|
comp:instrumentation.process
|
# Issue with OpenTelemetry.Instrumentation.Process
`process.cpu.utilization` requires tracking of the last collection time. When multiple providers are used, it needs to track it based on the provider. There isn't a really good way to do this. What we're doing is creating a `Meter` instance for each provider. But each instance has the same name for `Meter` & `Instrument`s so SDK is probably just treating everything as one data stream. To compound that problem, when a provider is disposed it should dispose its instrumentation. Due to the same name issue this leads to all providers being unsubscribed if one provider is disposed.
Update: [There is a failing test which demonstrates the disposal problem](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/73486c705ea4cf55dec78c9271fe380ab236e0ad/test/OpenTelemetry.Instrumentation.Process.Tests/ProcessMetricsTests.cs#L128).
|
1.0
|
[OpenTelemetry.Instrumentation.Process] Multiple provider dispose issue - # Issue with OpenTelemetry.Instrumentation.Process
`process.cpu.utilization` requires tracking of the last collection time. When multiple providers are used, it needs to track it based on the provider. There isn't a really good way to do this. What we're doing is creating a `Meter` instance for each provider. But each instance has the same name for `Meter` & `Instrument`s so SDK is probably just treating everything as one data stream. To compound that problem, when a provider is disposed it should dispose its instrumentation. Due to the same name issue this leads to all providers being unsubscribed if one provider is disposed.
Update: [There is a failing test which demonstrates the disposal problem](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/73486c705ea4cf55dec78c9271fe380ab236e0ad/test/OpenTelemetry.Instrumentation.Process.Tests/ProcessMetricsTests.cs#L128).
|
process
|
multiple provider dispose issue issue with opentelemetry instrumentation process process cpu utilization requires tracking of the last collection time when multiple providers are used it needs to track it based on the provider there isn t a really good way to do this what we re doing is creating a meter instance for each provider but each instance has the same name for meter instrument s so sdk is probably just treating everything as one data stream to compound that problem when a provider is disposed it should dispose its instrumentation due to the same name issue this leads to all providers being unsubscribed if one provider is disposed update
| 1
|
45,188
| 9,692,904,833
|
IssuesEvent
|
2019-05-24 14:50:07
|
spatialos/gdk-for-unity
|
https://api.github.com/repos/spatialos/gdk-for-unity
|
opened
|
Potential name clash between components and events/commands in generated code
|
A: codegen S: known-issue T: bug
|
**Affects:** Release v0.1.0 and up
**Internal Ticket:** [UTY-1962](https://improbableio.atlassian.net/browse/UTY-1962)
---
#### Description
Given a schema file like one of the following:
```
package name;
type Foo {
float test_field = 1;
}
component Bar
{
id = 200;
event Foo bar;
}
```
```
package name;
type Foo {
float test_field = 1;
}
component Bar
{
id = 200;
command Foo bar(Foo);
}
```
The code generator will generate invalid C# code that does not compile.
> Note this occurs when the name of a component and the name of an event/command inside that component are _the same_
#### Workaround
Rename either the component or the event/command.
|
1.0
|
Potential name clash between components and events/commands in generated code - **Affects:** Release v0.1.0 and up
**Internal Ticket:** [UTY-1962](https://improbableio.atlassian.net/browse/UTY-1962)
---
#### Description
Given a schema file like one of the following:
```
package name;
type Foo {
float test_field = 1;
}
component Bar
{
id = 200;
event Foo bar;
}
```
```
package name;
type Foo {
float test_field = 1;
}
component Bar
{
id = 200;
command Foo bar(Foo);
}
```
The code generator will generate invalid C# code that does not compile.
> Note this occurs when the name of a component and the name of an event/command inside that component are _the same_
#### Workaround
Rename either the component or the event/command.
|
non_process
|
potential name clash between components and events commands in generated code affects release and up internal ticket description given a schema file like one of the following package name type foo float test field component bar id event foo bar package name type foo float test field component bar id command foo bar foo the code generator will generate invalid c code that does not compile note this occurs when the name of a component and the name of an event command inside that component are the same workaround rename either the component or the event command
| 0
|
264,856
| 20,035,741,093
|
IssuesEvent
|
2022-02-02 11:41:07
|
Clinical-Genomics/meatballs
|
https://api.github.com/repos/Clinical-Genomics/meatballs
|
closed
|
Too much porridge!
|
Documentation Effort S Gain L
|
There are two pages for porridge under the Breakfast folder. I think I caused it... sorry
|
1.0
|
Too much porridge! - There are two pages for porridge under the Breakfast folder. I think I caused it... sorry
|
non_process
|
too much porridge there are two pages for porridge under the breakfast folder i think i caused it sorry
| 0
|
313,911
| 23,496,506,257
|
IssuesEvent
|
2022-08-18 02:26:18
|
rafaelportomoura/federacao-lavrense-de-futebol
|
https://api.github.com/repos/rafaelportomoura/federacao-lavrense-de-futebol
|
closed
|
Document the second release of the product
|
documentation
|
Use GitHub's "Tags" and "Release" features to establish the project's first baseline. This baseline must contain all of the project's requirements documentation.
|
1.0
|
Document the second release of the product - Use GitHub's "Tags" and "Release" features to establish the project's first baseline. This baseline must contain all of the project's requirements documentation.
|
non_process
|
document the second release of the product use github s tags and release features to establish the project s first baseline this baseline must contain all of the project s requirements documentation
| 0
|
693,445
| 23,775,739,081
|
IssuesEvent
|
2022-09-01 20:44:35
|
restarone/violet_rails
|
https://api.github.com/repos/restarone/violet_rails
|
closed
|
ahoy cookies should only be presented if tracking is enabled and the user consents to it
|
enhancement high priority
|
if tracking is enabled, an HTML snippet of the cookie consent should be defined in the admin as well
<img width="845" alt="Screen Shot 2022-08-14 at 10 22 15 AM" src="https://user-images.githubusercontent.com/35935196/184541380-a6af413a-bd96-4ffe-a798-d412f280645f.png">
If tracking is enabled -> present the visitor with a cookie consent banner and once consent is given -> then drop the ahoy cookies into their browser
If tracking is enabled -> present the visitor with a cookie consent banner -- if consent is not given -> don't drop ahoy cookies into their browser
If tracking is disabled -> do nothing
|
1.0
|
ahoy cookies should only be presented if tracking is enabled and the user consents to it - if tracking is enabled, an HTML snippet of the cookie consent should be defined in the admin as well
<img width="845" alt="Screen Shot 2022-08-14 at 10 22 15 AM" src="https://user-images.githubusercontent.com/35935196/184541380-a6af413a-bd96-4ffe-a798-d412f280645f.png">
If tracking is enabled -> present the visitor with a cookie consent banner and once consent is given -> then drop the ahoy cookies into their browser
If tracking is enabled -> present the visitor with a cookie consent banner -- if consent is not given -> don't drop ahoy cookies into their browser
If tracking is disabled -> do nothing
|
non_process
|
ahoy cookies should only be presented if tracking is enabled and the user consents to it if tracking is enabled an html snippet of the cookie consent should be defined in the admin as well img width alt screen shot at am src if tracking is enabled present the visitor with a cookie consent banner and once consent is given then drop the ahoy cookies into their browser if tracking is enabled present the visitor with a cookie consent banner if consent is not given don t drop ahoy cookies into their browser if tracking is disabled do nothing
| 0
|
12,253
| 9,605,257,214
|
IssuesEvent
|
2019-05-10 23:03:31
|
edgexfoundry/edgex-go
|
https://api.github.com/repos/edgexfoundry/edgex-go
|
closed
|
export-distro fails to be killed with ctrl-c after make run
|
bug export-services
|
To reproduce on current master build the binaries with `make build`, DON'T run mongodb (i.e. turn it off or disable it), and do:
```bash
make run
# wait a few seconds for things to try to connect and go into a connection loop
# interrupt with ctrl-c
```
You will continue to see export-distro show errors:
```
^Clevel=WARN ts=2019-05-03T02:05:40.613376466Z app=edgex-sys-mgmt-agent source=main.go:71 msg="terminating: interrupt"
level=WARN ts=2019-05-03T02:05:40.613389706Z app=edgex-support-notifications source=main.go:75 msg="terminating: interrupt"
level=WARN ts=2019-05-03T02:05:40.613503134Z app=edgex-core-command source=main.go:71 msg="terminating: interrupt"
Makefile:70: recipe for target 'run' failed
make: *** [run] Interrupt
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ ^C
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ ^C
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ level=ERROR ts=2019-05-03T02:05:41.527711086Z app=edgex-export-distro source=client.go:28 msg="Error getting all registrations: http://localhost:48071/api/v1/registration. Error: Get http://localhost:48071/api/v1/registration: dial tcp 127.0.0.1:48071: connect: connection refused"
level=INFO ts=2019-05-03T02:05:41.535249365Z app=edgex-export-distro source=registrations.go:287 msg="Waiting for client microservice"
^C
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ level=ERROR ts=2019-05-03T02:05:42.539137539Z app=edgex-export-distro source=client.go:28 msg="Error getting all registrations: http://localhost:48071/api/v1/registration. Error: Get http://localhost:48071/api/v1/registration: dial tcp 127.0.0.1:48071: connect: connection refused"
level=INFO ts=2019-05-03T02:05:42.544166385Z app=edgex-export-distro source=registrations.go:287 msg="Waiting for client microservice"
```
|
1.0
|
export-distro fails to be killed with ctrl-c after make run - To reproduce on current master build the binaries with `make build`, DON'T run mongodb (i.e. turn it off or disable it), and do:
```bash
make run
# wait a few seconds for things to try to connect and go into a connection loop
# interrupt with ctrl-c
```
You will continue to see export-distro show errors:
```
^Clevel=WARN ts=2019-05-03T02:05:40.613376466Z app=edgex-sys-mgmt-agent source=main.go:71 msg="terminating: interrupt"
level=WARN ts=2019-05-03T02:05:40.613389706Z app=edgex-support-notifications source=main.go:75 msg="terminating: interrupt"
level=WARN ts=2019-05-03T02:05:40.613503134Z app=edgex-core-command source=main.go:71 msg="terminating: interrupt"
Makefile:70: recipe for target 'run' failed
make: *** [run] Interrupt
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ ^C
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ ^C
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ level=ERROR ts=2019-05-03T02:05:41.527711086Z app=edgex-export-distro source=client.go:28 msg="Error getting all registrations: http://localhost:48071/api/v1/registration. Error: Get http://localhost:48071/api/v1/registration: dial tcp 127.0.0.1:48071: connect: connection refused"
level=INFO ts=2019-05-03T02:05:41.535249365Z app=edgex-export-distro source=registrations.go:287 msg="Waiting for client microservice"
^C
me@localhost:~/go/src/github.com/edgexfoundry/edgex-go$ level=ERROR ts=2019-05-03T02:05:42.539137539Z app=edgex-export-distro source=client.go:28 msg="Error getting all registrations: http://localhost:48071/api/v1/registration. Error: Get http://localhost:48071/api/v1/registration: dial tcp 127.0.0.1:48071: connect: connection refused"
level=INFO ts=2019-05-03T02:05:42.544166385Z app=edgex-export-distro source=registrations.go:287 msg="Waiting for client microservice"
```
|
non_process
|
export distro fails to be killed with ctrl c after make run to reproduce on current master build the binaries with make build don t run mongodb i e turn it off or disable it and do bash make run wait a few seconds for things to try to connect and go into a connection loop interrupt with ctrl c you will continue to see export distro show errors clevel warn ts app edgex sys mgmt agent source main go msg terminating interrupt level warn ts app edgex support notifications source main go msg terminating interrupt level warn ts app edgex core command source main go msg terminating interrupt makefile recipe for target run failed make interrupt me localhost go src github com edgexfoundry edgex go c me localhost go src github com edgexfoundry edgex go c me localhost go src github com edgexfoundry edgex go level error ts app edgex export distro source client go msg error getting all registrations error get dial tcp connect connection refused level info ts app edgex export distro source registrations go msg waiting for client microservice c me localhost go src github com edgexfoundry edgex go level error ts app edgex export distro source client go msg error getting all registrations error get dial tcp connect connection refused level info ts app edgex export distro source registrations go msg waiting for client microservice
| 0
|
17,376
| 3,002,400,414
|
IssuesEvent
|
2015-07-24 16:59:49
|
GoldenSoftwareLtd/gedemin
|
https://api.github.com/repos/GoldenSoftwareLtd/gedemin
|
closed
|
Form inheritance for TgdcValue
|
GedeminExe Inheritance Priority-Medium Type-Defect
|
Originally reported on Google Code with ID 3599
```
I am creating a descendant of the units-of-measurement class GD_VALUE. The localized name is OKEI. In the Explorer
it is displayed correctly, with the name OKEI. The view and edit forms are titled Units of
measurement (why not OKEI as well?). The descendant has only one field, Code, but for some reason
it did not end up on the edit form (the component for working with it was not created).
```
Reported by `alexandra.gsoftware` on 2015-06-08 18:54:19
|
1.0
|
Form inheritance for TgdcValue - Originally reported on Google Code with ID 3599
```
I am creating a descendant of the units-of-measurement class GD_VALUE. The localized name is OKEI. In the Explorer
it is displayed correctly, with the name OKEI. The view and edit forms are titled Units of
measurement (why not OKEI as well?). The descendant has only one field, Code, but for some reason
it did not end up on the edit form (the component for working with it was not created).
```
Reported by `alexandra.gsoftware` on 2015-06-08 18:54:19
|
non_process
|
form inheritance for tgdcvalue originally reported on google code with id i am creating a descendant of the units of measurement class gd value the localized name is okei in the explorer it is displayed correctly with the name okei the view and edit forms are titled units of measurement why not okei as well the descendant has only one field code but for some reason it did not end up on the edit form the component for working with it was not created reported by alexandra gsoftware on
| 0
|
497,171
| 14,364,991,037
|
IssuesEvent
|
2020-12-01 00:42:03
|
NuGet/Home
|
https://api.github.com/repos/NuGet/Home
|
closed
|
Nuget 5.8.0.6930 cannot handle projects that support multiple .net framework versions
|
Functionality:Restore Priority:1 Triage:NeedsRepro Type:Bug WaitingForCustomer
|
## Details about Problem
NuGet product used (NuGet.exe | VS UI | Package Manager Console | dotnet.exe):
NuGet version (x.x.x.xxx): Nuget 5.8.0.6930
Worked before? If so, with which NuGet version: 5.7
## Detailed repro steps so we can see the same problem
1. Create a .net .vcsproj project that supports multiple .net framework versions
2. Try to have the project build on Azure Devops with the latest version of Nuget
The build will fail with the error:
`
##[error]The nuget command failed with exit code(1) and error(Invalid restore input. Duplicate frameworks found: 'net472, net472'. Input files: D:\a\1\s\Product\Source\BIMrxCommon\Microdesk.BIMrxCommon.DbApp\Microdesk.BIMrxCommon.DbApp.csproj.
`
...
## Other suggested things
On this past Monday 09/11/2020 all our builds on Azure Devops started failing.
The error message we did get was
##[error]The nuget command failed with exit code(1) and error(Invalid restore input. Duplicate frameworks found: 'net472, net472'. Input files: D:\a\1\s\Product\Source\BIMrxCommon\Microdesk.BIMrxCommon.DbApp\Microdesk.BIMrxCommon.DbApp.csproj.
The version of nuget used by Azure was: NuGet version 5.8.0.6930
** After forcing the build script to use Nuget 5.7 the problems did disappear. **
It looks that the latest nuget version has problems with projects that support multiple .net framework versions
At first I did create a report against Azure pipelines, but was told to report it here.
[https://github.com/microsoft/azure-pipelines-tasks/issues/13887](url)
### Verbose Logs
Please include verbose logs (NuGet.exe <COMMAND> -verbosity detailed | dotnet.exe <COMMAND> --verbosity diag | etc...)
### Sample Project
To reproduce, probably it is enough to add the following code to your project:
`
<PropertyGroup>
<TargetFrameworks>net472;net48</TargetFrameworks>
</PropertyGroup>
`
[NugetLog5.8.zip](https://github.com/NuGet/Home/files/5550498/NugetLog5.8.zip)
|
1.0
|
Nuget 5.8.0.6930 cannot handle projects that support multiple .net framework versions - ## Details about Problem
NuGet product used (NuGet.exe | VS UI | Package Manager Console | dotnet.exe):
NuGet version (x.x.x.xxx): Nuget 5.8.0.6930
Worked before? If so, with which NuGet version: 5.7
## Detailed repro steps so we can see the same problem
1. Create a .net .vcsproj project that supports multiple .net framework versions
2. Try to have the project build on Azure Devops with the latest version of Nuget
The build will fail with the error:
`
##[error]The nuget command failed with exit code(1) and error(Invalid restore input. Duplicate frameworks found: 'net472, net472'. Input files: D:\a\1\s\Product\Source\BIMrxCommon\Microdesk.BIMrxCommon.DbApp\Microdesk.BIMrxCommon.DbApp.csproj.
`
...
## Other suggested things
On this past Monday 09/11/2020 all our builds on Azure Devops started failing.
The error message we did get was
##[error]The nuget command failed with exit code(1) and error(Invalid restore input. Duplicate frameworks found: 'net472, net472'. Input files: D:\a\1\s\Product\Source\BIMrxCommon\Microdesk.BIMrxCommon.DbApp\Microdesk.BIMrxCommon.DbApp.csproj.
The version of nuget used by Azure was: NuGet version 5.8.0.6930
** After forcing the build script to use Nuget 5.7 the problems did disappear. **
It looks that the latest nuget version has problems with projects that support multiple .net framework versions
At first I did create a report against Azure pipelines, but was told to report it here.
[https://github.com/microsoft/azure-pipelines-tasks/issues/13887](url)
### Verbose Logs
Please include verbose logs (NuGet.exe <COMMAND> -verbosity detailed | dotnet.exe <COMMAND> --verbosity diag | etc...)
### Sample Project
To reproduce, probably it is enough to add the following code to your project:
`
<PropertyGroup>
<TargetFrameworks>net472;net48</TargetFrameworks>
</PropertyGroup>
`
[NugetLog5.8.zip](https://github.com/NuGet/Home/files/5550498/NugetLog5.8.zip)
|
non_process
|
nuget cannot handle projects that support multiple net framework versions details about problem nuget product used nuget exe vs ui package manager console dotnet exe nuget version x x x xxx nuget worked before if so with which nuget version detailed repro steps so we can see the same problem create a net vcsproj project that supports multiple net framework versions try to have the project build on azure devops with the latest version of nuget the build will fail with the error the nuget command failed with exit code and error invalid restore input duplicate frameworks found input files d a s product source bimrxcommon microdesk bimrxcommon dbapp microdesk bimrxcommon dbapp csproj other suggested things on this past monday all our builds on azure devops started failing the error message we did get was the nuget command failed with exit code and error invalid restore input duplicate frameworks found input files d a s product source bimrxcommon microdesk bimrxcommon dbapp microdesk bimrxcommon dbapp csproj the version of nuget used by azure was nuget version after forcing the build script to use nuget the problems did disappear it looks that the latest nuget version has problems with projects that support multiple net framework versions at first i did create a report against azure pipelines but was told to report it here url verbose logs please include verbose logs nuget exe verbosity detailed dotnet exe verbosity diag etc sample project to reproduce probably it is enough to add the following code to your project
| 0
|
9,815
| 12,824,878,653
|
IssuesEvent
|
2020-07-06 14:09:01
|
prisma/prisma-examples
|
https://api.github.com/repos/prisma/prisma-examples
|
opened
|
Change Github Token to be from Prismo
|
kind/improvement process/candidate
|
Currently automated commits seems to be from @steebchen:

|
1.0
|
Change Github Token to be from Prismo - Currently automated commits seems to be from @steebchen:

|
process
|
change github token to be from prismo currently automated commits seems to be from steebchen
| 1
|
422,797
| 12,287,484,291
|
IssuesEvent
|
2020-05-09 12:25:22
|
googleapis/elixir-google-api
|
https://api.github.com/repos/googleapis/elixir-google-api
|
opened
|
Synthesis failed for Genomics
|
api: genomics autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate Genomics. :broken_heart:
Here's the output from running `synth.py`:
```
2020-05-09 05:13:37 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-05-09 05:13:37,882 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
Switched to branch 'autosynth-genomics'
2020-05-09 05:13:39 [INFO] Running synthtool
2020-05-09 05:13:39,341 autosynth > Running synthtool
2020-05-09 05:13:39 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/genomics/synth.metadata', 'synth.py', '--']
2020-05-09 05:13:39,341 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/genomics/synth.metadata', 'synth.py', '--']
2020-05-09 05:13:39,548 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-genomics
nothing to commit, working tree clean
2020-05-09 05:13:39,655 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git.
2020-05-09 05:13:40,167 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Genomics
2020-05-09 05:13:43,999 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files?
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__
write(self.metadata_file_path)
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write
with open(outfile, "w") as fh:
FileNotFoundError: [Errno 2] No such file or directory: 'clients/genomics/synth.metadata'
2020-05-09 05:13:44 [ERROR] Synthesis failed
2020-05-09 05:13:44,028 autosynth > Synthesis failed
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/genomics/synth.metadata', 'synth.py', '--', 'Genomics']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
|
1.0
|
Synthesis failed for Genomics - Hello! Autosynth couldn't regenerate Genomics. :broken_heart:
Here's the output from running `synth.py`:
```
2020-05-09 05:13:37 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-05-09 05:13:37,882 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
Switched to branch 'autosynth-genomics'
2020-05-09 05:13:39 [INFO] Running synthtool
2020-05-09 05:13:39,341 autosynth > Running synthtool
2020-05-09 05:13:39 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/genomics/synth.metadata', 'synth.py', '--']
2020-05-09 05:13:39,341 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/genomics/synth.metadata', 'synth.py', '--']
2020-05-09 05:13:39,548 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-genomics
nothing to commit, working tree clean
2020-05-09 05:13:39,655 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git.
2020-05-09 05:13:40,167 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Genomics
2020-05-09 05:13:43,999 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files?
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__
write(self.metadata_file_path)
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write
with open(outfile, "w") as fh:
FileNotFoundError: [Errno 2] No such file or directory: 'clients/genomics/synth.metadata'
2020-05-09 05:13:44 [ERROR] Synthesis failed
2020-05-09 05:13:44,028 autosynth > Synthesis failed
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/genomics/synth.metadata', 'synth.py', '--', 'Genomics']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
|
non_process
|
synthesis failed for genomics hello autosynth couldn t regenerate genomics broken heart here s the output from running synth py logs will be written to tmpfs src github synthtool logs googleapis elixir google api autosynth logs will be written to tmpfs src github synthtool logs googleapis elixir google api switched to branch autosynth genomics running synthtool autosynth running synthtool autosynth synthtool executing home kbuilder cache synthtool elixir google api synth py on branch autosynth genomics nothing to commit working tree clean synthtool cloning synthtool running docker run rm v home kbuilder cache synthtool elixir google api workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh genomics synthtool no files in sources home kbuilder cache synthtool elixir google api clients were copied does the source contain files traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file tmpfs src github synthtool synthtool metadata py line in exit write self metadata file path file tmpfs src github synthtool synthtool metadata py line in write with open outfile w as fh filenotfounderror no such file or directory clients genomics synth metadata synthesis failed autosynth synthesis failed traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize base synth log path file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
| 0
|
4,488
| 7,345,948,790
|
IssuesEvent
|
2018-03-07 19:04:24
|
UKHomeOffice/dq-aws-transition
|
https://api.github.com/repos/UKHomeOffice/dq-aws-transition
|
closed
|
Add data-transfer job for ACL data to S3 archive
|
DQ Data Pipeline DQ Tranche 1 Production SSM processing
|
Add Data Transfer job for ACL data to S3 archive
|
1.0
|
Add data-transfer job for ACL data to S3 archive - Add Data Transfer job for ACL data to S3 archive
|
process
|
add data transfer job for acl data to archive add data transfer job for acl data to archive
| 1
|
231,779
| 17,754,890,461
|
IssuesEvent
|
2021-08-28 15:02:39
|
Quezadajl/GmR
|
https://api.github.com/repos/Quezadajl/GmR
|
reopened
|
Add introduction in README
|
documentation
|
This repo is about how to set up a Kanban board in Github. Add an introduction to the Readme to describe the goal of the repo
|
1.0
|
Add introduction in README - This repo is about how to set up a Kanban board in Github. Add an introduction to the Readme to describe the goal of the repo
|
non_process
|
add introduction in readme this repo is about how to set up a kanban board in github add an introduction to the readme to describe the goal of the repo
| 0
|
20,560
| 3,605,057,094
|
IssuesEvent
|
2016-02-04 02:07:37
|
aosdict/yacs
|
https://api.github.com/repos/aosdict/yacs
|
closed
|
Search - order results by number of terms matched
|
important include-in-design
|
When the search API is used to find courses, it should keep track of how many terms in the search query matched a given course. Then when it returns the courses to the user, it should sort the list of courses by that number. All matching results will be displayed, but in order of relevance. This combines the benefits of OR-based searches (show anything that matched) with the benefits of AND-based searches (multiple search terms give the best, most specific result).
Example: Searching for "CourseName ProfName" will return first courses named CourseName which are taught by ProfName, then any other courses named or matching CourseName, then all other courses taught by ProfName.
|
1.0
|
Search - order results by number of terms matched - When the search API is used to find courses, it should keep track of how many terms in the search query matched a given course. Then when it returns the courses to the user, it should sort the list of courses by that number. All matching results will be displayed, but in order of relevance. This combines the benefits of OR-based searches (show anything that matched) with the benefits of AND-based searches (multiple search terms give the best, most specific result).
Example: Searching for "CourseName ProfName" will return first courses named CourseName which are taught by ProfName, then any other courses named or matching CourseName, then all other courses taught by ProfName.
|
non_process
|
search order results by number of terms matched when the search api is used to find courses it should keep track of how many terms in the search query matched a given course then when it returns the courses to the user it should sort the list of courses by that number all matching results will be displayed but in order of relevance this combines the benefits of or based searches show anything that matched with the benefits of and based searches multiple search terms give the best most specific result example searching for coursename profname will return first courses named coursename which are taught by profname then any other courses named or matching coursename then all other courses taught by profname
| 0
|
71
| 2,499,852,202
|
IssuesEvent
|
2015-01-08 06:56:56
|
pjuu/pjuu
|
https://api.github.com/repos/pjuu/pjuu
|
opened
|
Image uploads... finally.
|
design development
|
For 0.7 (release after Mongo release) I will add in image uploads.
MongoDB was added to aid with this and simplify our stack. Images will be stored inside GridFS on MongoDB and there will be a `/media` url added which will be able to reach in to GridFS and return a desired image.
To speed this up all of the correct headers will be set so that these images can be cached at a web server such as Nginx or Varnish.
|
1.0
|
Image uploads... finally. - For 0.7 (release after Mongo release) I will add in image uploads.
MongoDB was added to aid with this and simplify our stack. Images will be stored inside GridFS on MongoDB and there will be a `/media` url added which will be able to reach in to GridFS and return a desired image.
To speed this up all of the correct headers will be set so that these images can be cached at a web server such as Nginx or Varnish.
|
non_process
|
image uploads finally for release after mongo release i will add in image uploads mongodb was added to aid with this and simplify our stack images will be stored inside gridfs on mongodb and there will be a media url added which will be able to reach in to gridfs and return a desired image to speed this up all of the correct headers will be set so that these images can be cached at a web server such as nginx or varnish
| 0
|
120,195
| 25,753,859,571
|
IssuesEvent
|
2022-12-08 15:03:57
|
gitpod-io/gitpod
|
https://api.github.com/repos/gitpod-io/gitpod
|
closed
|
workspaceLocation does not work on VSCode desktop
|
type: bug feature: gitpod yml team: IDE editor: code (desktop)
|
### Bug description
One of our projects if using `workspaceLocation` configuration in `.gitpod.yml` file.
It opens the required structure as defined if we open in browser but that doesn't work in VSCode Desktop.
### Steps to reproduce
Here's the `gitpod.yml` file
```yml
image:
file: .gitpod.Dockerfile
workspaceLocation: /workspace/flat/.code-workspace
```
and `.code-workspace` file is:
```json
{
"folders": [
{
"name": "webapp",
"path": "."
},
{
"name": "api",
"path": "/workspace/api"
}
],
"settings": {
"explorer.expandSingleFolderWorkspaces": false,
"typescript.tsdk": "node_modules/typescript/lib"
}
}
```
### Workspace affected
_No response_
### Expected behavior
_No response_
### Example repository
_No response_
### Anything else?
Link to Discord [conversation](https://discord.com/channels/816244985187008514/1042775691927752754).
|
1.0
|
workspaceLocation does not work on VSCode desktop - ### Bug description
One of our projects if using `workspaceLocation` configuration in `.gitpod.yml` file.
It opens the required structure as defined if we open in browser but that doesn't work in VSCode Desktop.
### Steps to reproduce
Here's the `gitpod.yml` file
```yml
image:
file: .gitpod.Dockerfile
workspaceLocation: /workspace/flat/.code-workspace
```
and `.code-workspace` file is:
```json
{
"folders": [
{
"name": "webapp",
"path": "."
},
{
"name": "api",
"path": "/workspace/api"
}
],
"settings": {
"explorer.expandSingleFolderWorkspaces": false,
"typescript.tsdk": "node_modules/typescript/lib"
}
}
```
### Workspace affected
_No response_
### Expected behavior
_No response_
### Example repository
_No response_
### Anything else?
Link to Discord [conversation](https://discord.com/channels/816244985187008514/1042775691927752754).
|
non_process
|
workspacelocation does not work on vscode desktop bug description one of our projects if using workspacelocation configuration in gitpod yml file it opens the required structure as defined if we open in browser but that doesn t work in vscode desktop steps to reproduce here s the gitpod yml file yml image file gitpod dockerfile workspacelocation workspace flat code workspace and code workspace file is json folders name webapp path name api path workspace api settings explorer expandsinglefolderworkspaces false typescript tsdk node modules typescript lib workspace affected no response expected behavior no response example repository no response anything else link to discord
| 0
|
5,919
| 8,742,299,404
|
IssuesEvent
|
2018-12-12 16:06:54
|
prusa3d/Slic3r
|
https://api.github.com/repos/prusa3d/Slic3r
|
closed
|
[Request] Estimated print time / filament usage in output name
|
background processing
|

### Version
1.40.0 Beta
It would be nice to have somewhere a checkbox to activate additional output names like estimated time and required filament.
Currently I am doing this by hand, but when I use "Send to printer" it is quite awful to add a additional name after the export (but this is the time where I get the estimated values).
What I would like to have is the original name in the output + estimated time in a defined format:
Import Filename = Ankly
Output Filename = Ankly16H12M
Same with filament
Import Filename = Ankly
Output Filename = Ankly20m
Combined it could look like
Output Filename = Ankly16H12M20m
This would be a quite nice feature as I am prepairing the next 10-20 files to print but I need to change the order of the prints sometimes as it depends on the times when I am back home to change the print. But for this I need to know the roughly estimated times / filament usage.
Edit:
Found a way to change the export name, here it would be nice to have the Estimated time available.
|
1.0
|
[Request] Estimated print time / filament usage in output name -

### Version
1.40.0 Beta
It would be nice to have somewhere a checkbox to activate additional output names like estimated time and required filament.
Currently I am doing this by hand, but when I use "Send to printer" it is quite awful to add a additional name after the export (but this is the time where I get the estimated values).
What I would like to have is the original name in the output + estimated time in a defined format:
Import Filename = Ankly
Output Filename = Ankly16H12M
Same with filament
Import Filename = Ankly
Output Filename = Ankly20m
Combined it could look like
Output Filename = Ankly16H12M20m
This would be a quite nice feature as I am prepairing the next 10-20 files to print but I need to change the order of the prints sometimes as it depends on the times when I am back home to change the print. But for this I need to know the roughly estimated times / filament usage.
Edit:
Found a way to change the export name, here it would be nice to have the Estimated time available.
|
process
|
estimated print time filament usage in output name version beta it would be nice to have somewhere a checkbox to activate additional output names like estimated time and required filament currently i am doing this by hand but when i use send to printer it is quite awful to add a additional name after the export but this is the time where i get the estimated values what i would like to have is the original name in the output estimated time in a defined format import filename ankly output filename same with filament import filename ankly output filename combined it could look like output filename this would be a quite nice feature as i am prepairing the next files to print but i need to change the order of the prints sometimes as it depends on the times when i am back home to change the print but for this i need to know the roughly estimated times filament usage edit found a way to change the export name here it would be nice to have the estimated time available
| 1
|
302,347
| 26,140,052,513
|
IssuesEvent
|
2022-12-29 17:04:50
|
apache/beam
|
https://api.github.com/repos/apache/beam
|
closed
|
Spark Runner support for PerfKit Benchmarker
|
runners spark tests P3 bug
|
See https://docs.google.com/document/d/1PsjGPSN6FuorEEPrKEP3u3m16tyOzph5FnL2DhaRDz0/edit?ts=58a78e73#heading=h.exn0s6jsm24q for more details on what this entails.
Imported from Jira [BEAM-1602](https://issues.apache.org/jira/browse/BEAM-1602). Original Jira may contain additional context.
Reported by: jaku.
|
1.0
|
Spark Runner support for PerfKit Benchmarker - See https://docs.google.com/document/d/1PsjGPSN6FuorEEPrKEP3u3m16tyOzph5FnL2DhaRDz0/edit?ts=58a78e73#heading=h.exn0s6jsm24q for more details on what this entails.
Imported from Jira [BEAM-1602](https://issues.apache.org/jira/browse/BEAM-1602). Original Jira may contain additional context.
Reported by: jaku.
|
non_process
|
spark runner support for perfkit benchmarker see for more details on what this entails imported from jira original jira may contain additional context reported by jaku
| 0
|
19,359
| 25,491,410,537
|
IssuesEvent
|
2022-11-27 05:05:28
|
hsmusic/hsmusic-wiki
|
https://api.github.com/repos/hsmusic/hsmusic-wiki
|
closed
|
"Artists - by Latest Contribution" sorts artists reverse-alphabetically
|
type: bug (user-facing) scope: data processing thing: listings
|
They should be sorted by reverse *date* (backwards chronological), then forward alphabetically!
Fix is to add a new `{reverse: true}` option to `sortChronological`.
We should check for any other uses of `sortChronological().reverse()` at the same time, since they must have similar issues!
|
1.0
|
"Artists - by Latest Contribution" sorts artists reverse-alphabetically - They should be sorted by reverse *date* (backwards chronological), then forward alphabetically!
Fix is to add a new `{reverse: true}` option to `sortChronological`.
We should check for any other uses of `sortChronological().reverse()` at the same time, since they must have similar issues!
|
process
|
artists by latest contribution sorts artists reverse alphabetically they should be sorted by reverse date backwards chronological then forward alphabetically fix is to add a new reverse true option to sortchronological we should check for any other uses of sortchronological reverse at the same time since they must have similar issues
| 1
|
11,152
| 13,957,693,399
|
IssuesEvent
|
2020-10-24 08:10:58
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
CY: Rejected harvesting
|
CY - Cyprus Geoportal Harvesting process
|
Dear Anotnis,
this is to let you know that the result of yesterday's harvesting was rejectd by the Geoportal because of the request for the last batch of metadata failed with this error message:
The harvesting did not go well: The interaction with the remote service at "http://eservices.dls.moi.gov.cy/geoportal_inspire/csw?request=GetCapabilities&service=CSW&version=2.0.2" ended with the following error "http://eservices.dls.moi.gov.cy/geoportal_inspire/csw: eservices.dls.moi.gov.cy:80 failed to respond"
Best regards,
Angelo
|
1.0
|
CY: Rejected harvesting - Dear Anotnis,
this is to let you know that the result of yesterday's harvesting was rejectd by the Geoportal because of the request for the last batch of metadata failed with this error message:
The harvesting did not go well: The interaction with the remote service at "http://eservices.dls.moi.gov.cy/geoportal_inspire/csw?request=GetCapabilities&service=CSW&version=2.0.2" ended with the following error "http://eservices.dls.moi.gov.cy/geoportal_inspire/csw: eservices.dls.moi.gov.cy:80 failed to respond"
Best regards,
Angelo
|
process
|
cy rejected harvesting dear anotnis this is to let you know that the result of yesterday s harvesting was rejectd by the geoportal because of the request for the last batch of metadata failed with this error message the harvesting did not go well the interaction with the remote service at quot ended with the following error quot eservices dls moi gov cy failed to respond quot best regards angelo
| 1
|
2,594
| 5,353,070,808
|
IssuesEvent
|
2017-02-20 03:21:02
|
uccser/kordac
|
https://api.github.com/repos/uccser/kordac
|
closed
|
Implement backslash
|
processor implementation testing
|
Implement backslash as currently used in the existing CSFG
```
[backslash]
regex:(\\\{|\\\})
function:escape_backslash
```
|
1.0
|
Implement backslash - Implement backslash as currently used in the existing CSFG
```
[backslash]
regex:(\\\{|\\\})
function:escape_backslash
```
|
process
|
implement backslash implement backslash as currently used in the existing csfg regex function escape backslash
| 1
|
167,472
| 13,031,453,010
|
IssuesEvent
|
2020-07-28 01:16:27
|
NakiNorton/refactor-tractor-fitlitA
|
https://api.github.com/repos/NakiNorton/refactor-tractor-fitlitA
|
closed
|
Fix bug(s) in Sleep-test-js
|
bug sleep class testing
|
- [ ] Sleep-test is currently not working as it should, we need to go in and find the issue
note: If tests are modified, take note of broken methods
|
1.0
|
Fix bug(s) in Sleep-test-js - - [ ] Sleep-test is currently not working as it should, we need to go in and find the issue
note: If tests are modified, take note of broken methods
|
non_process
|
fix bug s in sleep test js sleep test is currently not working as it should we need to go in and find the issue note if tests are modified take note of broken methods
| 0
|
10,520
| 13,303,692,240
|
IssuesEvent
|
2020-08-25 15:51:18
|
GoogleCloudPlatform/dotnet-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
|
opened
|
[IAM]: Fix and reactivate Quickstart test.
|
api: iam priority: p1 type: process
|
- QuickStartTest.GoogleCloudSamples.QuickStartTest.TestQuickStart
Build log [here](https://source.cloud.google.com/results/invocations/b9858a5c-a195-4cc3-a730-a003e0771ba7/targets/github%2Fdotnet-docs-samples%2Fiam%2Fapi%2FQuickStartV2Test%2FTestResults/tests)
|
1.0
|
[IAM]: Fix and reactivate Quickstart test. - - QuickStartTest.GoogleCloudSamples.QuickStartTest.TestQuickStart
Build log [here](https://source.cloud.google.com/results/invocations/b9858a5c-a195-4cc3-a730-a003e0771ba7/targets/github%2Fdotnet-docs-samples%2Fiam%2Fapi%2FQuickStartV2Test%2FTestResults/tests)
|
process
|
fix and reactivate quickstart test quickstarttest googlecloudsamples quickstarttest testquickstart build log
| 1
|
15,225
| 19,094,032,815
|
IssuesEvent
|
2021-11-29 15:00:00
|
scaffold-eth/scaffold-eth
|
https://api.github.com/repos/scaffold-eth/scaffold-eth
|
closed
|
Yarn Toolkit Install Build Error on Ubuntu 21.04
|
🙋♂️ Help Wanted In-process
|
Hello Scaffold Team,
I am trying to do YARN installation of Scaffold ETH on Ubuntu 21.04. Request for help to address this issue.
yarn install v1.22.10
[1/4] Resolving packages...
[2/4] Fetching packages...
warning Pattern ["@apollo/client@latest"] is trying to unpack in the same destination "/home/semiott/.cache/yarn/v6/npm-@apollo-client-3.3.21-2862baa4e1ced8c5e89ebe6fc52877fc64a726aa-integrity/node_modules/@apollo/client" as pattern
["@apollo/client@^3.3.21"]. This could result in non-deterministic behavior, skipping.
info fsevents@2.3.2: The platform "linux" is incompatible with this module.
info "fsevents@2.3.2" is an optional dependency and failed compatibility check. Excluding it from installation.
info fsevents@2.1.3: The platform "linux" is incompatible with this module.
info "fsevents@2.1.3" is an optional dependency and failed compatibility check. Excluding it from installation.
info fsevents@1.2.13: The platform "linux" is incompatible with this module.
info "fsevents@1.2.13" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > @nomiclabs/hardhat-waffle@2.0.1" has incorrect peer dependency "@nomiclabs/hardhat-ethers@^2.0.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > hardhat-deploy@0.9.0" has unmet peer dependency "@ethersproject/hardware-wallets@^5.0.14".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-import@^2.22.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-jsx-a11y@^6.4.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-react@^7.21.5".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-react-hooks@^4 || ^3 || ^2.3.0 || ^1.7.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-plugin-prettier@3.4.0" has unmet peer dependency "prettier@>=1.13.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @testing-library/user-event@12.8.3" has incorrect peer dependency "@testing-library/dom@>=7.21.4".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/address@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/contracts@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/networks@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/providers@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/solidity@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > arb-ts@0.0.18" has incorrect peer dependency "ethers@~5.0.24".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/address@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/bignumber@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/constants@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/contracts@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/providers@^5.0.2".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/units@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has incorrect peer dependency "react@^16.9.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql@1.4.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql@1.4.2" has incorrect peer dependency "react@^16.8.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql@1.4.2" has incorrect peer dependency "react-dom@^16.8.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > react-qr-reader@2.2.1" has incorrect peer dependency "react@~16".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > react-qr-reader@2.2.1" has incorrect peer dependency "react-dom@~16".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb > eslint-config-airbnb-base@14.2.1" has unmet peer dependency "eslint-plugin-import@^2.22.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > @graphiql/toolkit@0.2.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql@1.0.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > graphql-language-service@3.1.4" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > web3modal > styled-components@5.3.0" has unmet peer dependency "react-is@>= 16.8.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql > graphql-language-service-interface@2.8.4" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql > graphql-language-service-parser@1.9.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > graphql-language-service > graphql-language-service-types@1.8.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > react-scripts > @typescript-eslint/eslint-plugin > tsutils@3.21.0" has unmet peer dependency "typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > walletlink > eth-block-tracker > @babel/plugin-transform-runtime@7.14.5" has unmet peer dependency "@babel/core@^7.0.0-0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > ethereum-waffle > @ethereum-waffle/compiler > typechain > ts-essentials@6.0.7" has unmet peer dependency "typescript@>=3.7.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql > graphql-language-service-interface > graphql-language-service-utils@2.5.3" has incorrect peer dependency
"graphql@>= v14.5.0 <= 15.5.0".
[4/4] Building fresh packages...
[9/21] ⢀ secp256k1
[15/21] ⢀ ursa-optional
[8/21] ⢀ keccak
[16/21] ⢀ keytar
**warning Error running install script for optional dependency: "/home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar: Command failed.
Exit code: 1**
Command: prebuild-install || node-gyp rebuild
Arguments:
Directory: /home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar
Output:
prebuild-install WARN install No prebuilt binaries found (target=14.0.0 runtime=node arch=x64 libc= platform=linux)
gyp info it worked if it ends with ok
gyp info using node-gyp@5.1.0
gyp info using node@14.0.0 | linux | x64
gyp info find Python using Python version 3.9.5 found at \"/usr/bin/python3\"
gyp info spawn /usr/bin/python3
gyp info spawn args [
gyp info spawn args '/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/semiott/.cache/node-gyp/14.0.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/home/semiott/.cache/node-gyp/14.0.0',
gyp info spawn args '-Dnode_gyp_dir=/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/home/semiott/.cache/node-gyp/14.0.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
**Package libsecret-1 was not found in the pkg-config search path.
Perhaps you should add the directory containing `libsecret-1.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libsecret-1' found
gyp: Call to 'pkg-config --cflags libsecret-1' returned exit status 1 while in binding.gyp. while trying to load binding.gyp
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:351:16)**
**gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)
gyp ERR! System Linux 5.11.0-31-generic
gyp ERR! command \"/home/semiott/.nvm/versions/node/v14.0.0/bin/node\" \"/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"rebuild\"
gyp ERR! cwd /home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar**
success Saved lockfile.
Done in 253.64s.
|
1.0
|
Yarn Toolkit Install Build Error on Ubuntu 21.04 - Hello Scaffold Team,
I am trying to do YARN installation of Scaffold ETH on Ubuntu 21.04. Request for help to address this issue.
yarn install v1.22.10
[1/4] Resolving packages...
[2/4] Fetching packages...
warning Pattern ["@apollo/client@latest"] is trying to unpack in the same destination "/home/semiott/.cache/yarn/v6/npm-@apollo-client-3.3.21-2862baa4e1ced8c5e89ebe6fc52877fc64a726aa-integrity/node_modules/@apollo/client" as pattern
["@apollo/client@^3.3.21"]. This could result in non-deterministic behavior, skipping.
info fsevents@2.3.2: The platform "linux" is incompatible with this module.
info "fsevents@2.3.2" is an optional dependency and failed compatibility check. Excluding it from installation.
info fsevents@2.1.3: The platform "linux" is incompatible with this module.
info "fsevents@2.1.3" is an optional dependency and failed compatibility check. Excluding it from installation.
info fsevents@1.2.13: The platform "linux" is incompatible with this module.
info "fsevents@1.2.13" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > @nomiclabs/hardhat-waffle@2.0.1" has incorrect peer dependency "@nomiclabs/hardhat-ethers@^2.0.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > hardhat-deploy@0.9.0" has unmet peer dependency "@ethersproject/hardware-wallets@^5.0.14".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-import@^2.22.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-jsx-a11y@^6.4.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-react@^7.21.5".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb@18.2.1" has unmet peer dependency "eslint-plugin-react-hooks@^4 || ^3 || ^2.3.0 || ^1.7.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-plugin-prettier@3.4.0" has unmet peer dependency "prettier@>=1.13.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @testing-library/user-event@12.8.3" has incorrect peer dependency "@testing-library/dom@>=7.21.4".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/address@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/contracts@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/networks@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/providers@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > @uniswap/sdk@3.0.3" has unmet peer dependency "@ethersproject/solidity@^5.0.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > arb-ts@0.0.18" has incorrect peer dependency "ethers@~5.0.24".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/address@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/bignumber@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/constants@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/contracts@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/providers@^5.0.2".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has unmet peer dependency "@ethersproject/units@^5.0.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > eth-hooks@1.1.2" has incorrect peer dependency "react@^16.9.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql@1.4.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql@1.4.2" has incorrect peer dependency "react@^16.8.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql@1.4.2" has incorrect peer dependency "react-dom@^16.8.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > react-qr-reader@2.2.1" has incorrect peer dependency "react@~16".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > react-qr-reader@2.2.1" has incorrect peer dependency "react-dom@~16".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > eslint-config-airbnb > eslint-config-airbnb-base@14.2.1" has unmet peer dependency "eslint-plugin-import@^2.22.1".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > @graphiql/toolkit@0.2.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql@1.0.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > graphql-language-service@3.1.4" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > web3modal > styled-components@5.3.0" has unmet peer dependency "react-is@>= 16.8.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql > graphql-language-service-interface@2.8.4" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql > graphql-language-service-parser@1.9.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > graphql-language-service > graphql-language-service-types@1.8.2" has incorrect peer dependency "graphql@>= v14.5.0 <= 15.5.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > react-scripts > @typescript-eslint/eslint-plugin > tsutils@3.21.0" has unmet peer dependency "typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > walletlink > eth-block-tracker > @babel/plugin-transform-runtime@7.14.5" has unmet peer dependency "@babel/core@^7.0.0-0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/hardhat > ethereum-waffle > @ethereum-waffle/compiler > typechain > ts-essentials@6.0.7" has unmet peer dependency "typescript@>=3.7.0".
warning "workspace-aggregator-2d691895-2743-472d-9cea-396bd7c83864 > @scaffold-eth/react-app > graphiql > codemirror-graphql > graphql-language-service-interface > graphql-language-service-utils@2.5.3" has incorrect peer dependency
"graphql@>= v14.5.0 <= 15.5.0".
[4/4] Building fresh packages...
[9/21] ⢀ secp256k1
[15/21] ⢀ ursa-optional
[8/21] ⢀ keccak
[16/21] ⢀ keytar
**warning Error running install script for optional dependency: "/home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar: Command failed.
Exit code: 1**
Command: prebuild-install || node-gyp rebuild
Arguments:
Directory: /home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar
Output:
prebuild-install WARN install No prebuilt binaries found (target=14.0.0 runtime=node arch=x64 libc= platform=linux)
gyp info it worked if it ends with ok
gyp info using node-gyp@5.1.0
gyp info using node@14.0.0 | linux | x64
gyp info find Python using Python version 3.9.5 found at \"/usr/bin/python3\"
gyp info spawn /usr/bin/python3
gyp info spawn args [
gyp info spawn args '/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/semiott/.cache/node-gyp/14.0.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/home/semiott/.cache/node-gyp/14.0.0',
gyp info spawn args '-Dnode_gyp_dir=/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/home/semiott/.cache/node-gyp/14.0.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
**Package libsecret-1 was not found in the pkg-config search path.
Perhaps you should add the directory containing `libsecret-1.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libsecret-1' found
gyp: Call to 'pkg-config --cflags libsecret-1' returned exit status 1 while in binding.gyp. while trying to load binding.gyp
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:351:16)**
**gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)
gyp ERR! System Linux 5.11.0-31-generic
gyp ERR! command \"/home/semiott/.nvm/versions/node/v14.0.0/bin/node\" \"/home/semiott/.nvm/versions/node/v14.0.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"rebuild\"
gyp ERR! cwd /home/semiott/Apps/eth-apps/PolgarNet/scaffold-eth/node_modules/keytar**
success Saved lockfile.
Done in 253.64s.
|
process
|
yarn toolkit install build error on ubuntu hello scaffold team i am trying to do yarn installation of scaffold eth on ubuntu request for help to address this issue yarn install resolving packages fetching packages warning pattern is trying to unpack in the same destination home semiott cache yarn npm apollo client integrity node modules apollo client as pattern this could result in non deterministic behavior skipping info fsevents the platform linux is incompatible with this module info fsevents is an optional dependency and failed compatibility check excluding it from installation info fsevents the platform linux is incompatible with this module info fsevents is an optional dependency and failed compatibility check excluding it from installation info fsevents the platform linux is incompatible with this module info fsevents is an optional dependency and failed compatibility check excluding it from installation linking dependencies warning workspace aggregator scaffold eth hardhat nomiclabs hardhat waffle has incorrect peer dependency nomiclabs hardhat ethers warning workspace aggregator scaffold eth hardhat hardhat deploy has unmet peer dependency ethersproject hardware wallets warning workspace aggregator scaffold eth hardhat eslint config airbnb has unmet peer dependency eslint plugin import warning workspace aggregator scaffold eth hardhat eslint config airbnb has unmet peer dependency eslint plugin jsx warning workspace aggregator scaffold eth hardhat eslint config airbnb has unmet peer dependency eslint plugin react warning workspace aggregator scaffold eth hardhat eslint config airbnb has unmet peer dependency eslint plugin react hooks warning workspace aggregator scaffold eth hardhat eslint plugin prettier has unmet peer dependency prettier warning workspace aggregator scaffold eth react app testing library user event has incorrect peer dependency testing library dom warning workspace aggregator scaffold eth react app uniswap sdk has unmet peer dependency
ethersproject address beta warning workspace aggregator scaffold eth react app uniswap sdk has unmet peer dependency ethersproject contracts beta warning workspace aggregator scaffold eth react app uniswap sdk has unmet peer dependency ethersproject networks beta warning workspace aggregator scaffold eth react app uniswap sdk has unmet peer dependency ethersproject providers beta warning workspace aggregator scaffold eth react app uniswap sdk has unmet peer dependency ethersproject solidity beta warning workspace aggregator scaffold eth react app arb ts has incorrect peer dependency ethers warning workspace aggregator scaffold eth react app eth hooks has unmet peer dependency ethersproject address warning workspace aggregator scaffold eth react app eth hooks has unmet peer dependency ethersproject bignumber warning workspace aggregator scaffold eth react app eth hooks has unmet peer dependency ethersproject constants warning workspace aggregator scaffold eth react app eth hooks has unmet peer dependency ethersproject contracts warning workspace aggregator scaffold eth react app eth hooks has unmet peer dependency ethersproject providers warning workspace aggregator scaffold eth react app eth hooks has unmet peer dependency ethersproject units warning workspace aggregator scaffold eth react app eth hooks has incorrect peer dependency react warning workspace aggregator scaffold eth react app graphiql has incorrect peer dependency graphql warning workspace aggregator scaffold eth react app graphiql has incorrect peer dependency react warning workspace aggregator scaffold eth react app graphiql has incorrect peer dependency react dom warning workspace aggregator scaffold eth react app react qr reader has incorrect peer dependency react warning workspace aggregator scaffold eth react app react qr reader has incorrect peer dependency react dom warning workspace aggregator scaffold eth hardhat eslint config airbnb eslint config airbnb base has unmet peer dependency eslint
plugin import warning workspace aggregator scaffold eth react app graphiql graphiql toolkit has incorrect peer dependency graphql warning workspace aggregator scaffold eth react app graphiql codemirror graphql has incorrect peer dependency graphql warning workspace aggregator scaffold eth react app graphiql graphql language service has incorrect peer dependency graphql warning workspace aggregator scaffold eth react app styled components has unmet peer dependency react is warning workspace aggregator scaffold eth react app graphiql codemirror graphql graphql language service interface has incorrect peer dependency graphql warning workspace aggregator scaffold eth react app graphiql codemirror graphql graphql language service parser has incorrect peer dependency graphql warning workspace aggregator scaffold eth react app graphiql graphql language service graphql language service types has incorrect peer dependency graphql warning workspace aggregator scaffold eth react app react scripts typescript eslint eslint plugin tsutils has unmet peer dependency typescript dev dev dev dev dev beta dev beta warning workspace aggregator scaffold eth react app walletlink eth block tracker babel plugin transform runtime has unmet peer dependency babel core warning workspace aggregator scaffold eth hardhat ethereum waffle ethereum waffle compiler typechain ts essentials has unmet peer dependency typescript warning workspace aggregator scaffold eth react app graphiql codemirror graphql graphql language service interface graphql language service utils has incorrect peer dependency graphql building fresh packages ⢀ ⢀ ursa optional ⢀ keccak ⢀ keytar warning error running install script for optional dependency home semiott apps eth apps polgarnet scaffold eth node modules keytar command failed exit code command prebuild install node gyp rebuild arguments directory home semiott apps eth apps polgarnet scaffold eth node modules keytar output prebuild install warn install no prebuilt
binaries found target runtime node arch libc platform linux gyp info it worked if it ends with ok gyp info using node gyp gyp info using node linux gyp info find python using python version found at usr bin gyp info spawn usr bin gyp info spawn args gyp info spawn args home semiott nvm versions node lib node modules npm node modules node gyp gyp gyp main py gyp info spawn args binding gyp gyp info spawn args f gyp info spawn args make gyp info spawn args i gyp info spawn args home semiott apps eth apps polgarnet scaffold eth node modules keytar build config gypi gyp info spawn args i gyp info spawn args home semiott nvm versions node lib node modules npm node modules node gyp addon gypi gyp info spawn args i gyp info spawn args home semiott cache node gyp include node common gypi gyp info spawn args dlibrary shared library gyp info spawn args dvisibility default gyp info spawn args dnode root dir home semiott cache node gyp gyp info spawn args dnode gyp dir home semiott nvm versions node lib node modules npm node modules node gyp gyp info spawn args dnode lib file home semiott cache node gyp target arch node lib gyp info spawn args dmodule root dir home semiott apps eth apps polgarnet scaffold eth node modules keytar gyp info spawn args dnode engine gyp info spawn args depth gyp info spawn args no parallel gyp info spawn args generator output gyp info spawn args build gyp info spawn args goutput dir gyp info spawn args package libsecret was not found in the pkg config search path perhaps you should add the directory containing libsecret pc to the pkg config path environment variable no package libsecret found gyp call to pkg config cflags libsecret returned exit status while in binding gyp while trying to load binding gyp gyp err configure error gyp err stack error gyp failed with exit code gyp err stack at childprocess oncpexit home semiott nvm versions node lib node modules npm node modules node gyp lib configure js gyp err stack at childprocess emit events js
gyp err stack at process childprocess handle onexit internal child process js gyp err system linux generic gyp err command home semiott nvm versions node bin node home semiott nvm versions node lib node modules npm node modules node gyp bin node gyp js rebuild gyp err cwd home semiott apps eth apps polgarnet scaffold eth node modules keytar success saved lockfile done in
| 1
|
189,075
| 6,793,702,701
|
IssuesEvent
|
2017-11-01 08:54:39
|
kel85uk/ewoms
|
https://api.github.com/repos/kel85uk/ewoms
|
closed
|
RSCONSTT keyword not recognized
|
help wanted High Priority question
|
Hi @nairr , I am running into trouble of the keyword RSCONSTT not being recognized in the example file C32... of the aquifer example. Is this something we need to be concerned with?
Also, with the AQUTAB, how am I able to access the non-dimensional parameters?
|
1.0
|
RSCONSTT keyword not recognized - Hi @nairr , I am running into trouble of the keyword RSCONSTT not being recognized in the example file C32... of the aquifer example. Is this something we need to be concerned with?
Also, with the AQUTAB, how am I able to access the non-dimensional parameters?
|
non_process
|
rsconstt keyword not recognized hi nairr i am running into trouble of the keyword rsconstt not being recognized in the example file of the aquifer example is this something we need to be concerned with also with the aqutab how am i able to access the non dimensional parameters
| 0
|
143,466
| 11,566,192,679
|
IssuesEvent
|
2020-02-20 12:00:40
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
Incorporate SOURCE_DATE_EPOCH into release.sh
|
area/test-and-release kind/cleanup kind/feature lifecycle/rotten
|
<!--
/area test-and-release
/kind cleanup
/kind dev
-->
## Expected Behavior
We should be timestamping release images based on when they were produced.
## Actual Behavior
`ko` and `bazel` use the Unix epoch, so images show up as decades old.
This is because they bias towards byte-for-byte reproducible builds, and a changing timestamp would add seconds-to-minutes to developer iteration cycles (seconds for reuploads, minutes when things are unnecessarily redeployed).
Both tools now respect the SOURCE_DATE_EPOCH environment variable that is being standardized by the reproduciblebuild.org folks.
Thanks to @jszroberto for adding this to `ko` [here](https://github.com/google/go-containerregistry/pull/146)!
|
1.0
|
Incorporate SOURCE_DATE_EPOCH into release.sh - <!--
/area test-and-release
/kind cleanup
/kind dev
-->
## Expected Behavior
We should be timestamping release images based on when they were produced.
## Actual Behavior
`ko` and `bazel` use the Unix epoch, so images show up as decades old.
This is because they bias towards byte-for-byte reproducible builds, and a changing timestamp would add seconds-to-minutes to developer iteration cycles (seconds for reuploads, minutes when things are unnecessarily redeployed).
Both tools now respect the SOURCE_DATE_EPOCH environment variable that is being standardized by the reproduciblebuild.org folks.
Thanks to @jszroberto for adding this to `ko` [here](https://github.com/google/go-containerregistry/pull/146)!
|
non_process
|
incorporate source date epoch into release sh area test and release kind cleanup kind dev expected behavior we should be timestamping release images based on when they were produced actual behavior ko and bazel use the unix epoch so images show up as decades old this is because they bias towards byte for byte reproducible builds and a changing timestamp would add seconds to minutes to developer iteration cycles seconds for reuploads minutes when things are unnecessarily redeployed both tools now respect the source date epoch environment variable that is being standardized by the reproduciblebuild org folks thanks to jszroberto for adding this to ko
| 0
|
7,293
| 10,439,967,794
|
IssuesEvent
|
2019-09-18 07:41:15
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
GRASS r.series produces wrong results when using rasters with float values
|
Bug Processing
|
When GRASS r.series is used with multiple rasters with float values, r.series does not work as expected. I tried “count” and “average” and the results are wrong (count gives wrong number, average gives a raster with 0 values). I tried the same set of rasters with QGIS 2.18.28 and the results are as expected.
QGIS-Version: 3.8.1-Zanzibar
Windows 10 (10.0)
|
1.0
|
GRASS r.series produces wrong results when using rasters with float values - When GRASS r.series is used with multiple rasters with float values, r.series does not work as expected. I tried “count” and “average” and the results are wrong (count gives wrong number, average gives a raster with 0 values). I tried the same set of rasters with QGIS 2.18.28 and the results are as expected.
QGIS-Version: 3.8.1-Zanzibar
Windows 10 (10.0)
|
process
|
grass r series produces wrong results when using rasters with float values when grass r series is used with multiple rasters with float values r series does not work as expected i tried “count” and “average” and the results are wrong count gives wrong number average gives a raster with values i tried the same set of rasters with qgis and the results are as expected qgis version zanzibar windows
| 1
|
5,254
| 8,042,627,426
|
IssuesEvent
|
2018-07-31 08:46:30
|
dzhw/zofar
|
https://api.github.com/repos/dzhw/zofar
|
closed
|
bug: intruction
|
category: technical.processes et: 1 prio: 1 status: testing type: bug
|
class for introduction and instruction are mixed up in the class intruction
introductions behave like instructions with this class.
|
1.0
|
bug: intruction - class for introduction and instruction are mixed up in the class intruction
introductions behave like instructions with this class.
|
process
|
bug intruction class for introduction and instruction are mixed up in the class intruction introductions behave like instructions with this class
| 1
|
9,006
| 12,121,648,697
|
IssuesEvent
|
2020-04-22 09:38:39
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Support copy/paste of objects in modeler
|
Feature Request Processing
|
Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [5479](https://issues.qgis.org/issues/5479)
Redmine category:processing/modeller
Assignee: Victor Olaya
---
I´d like to see support for copy/paste of objects in modeler. Sometimes you need multiple versions of the same tool, so being able to copy/paste would help.
---
Related issue(s): #24190 (duplicates)
Redmine related issue(s): [16280](https://issues.qgis.org/issues/16280)
---
|
1.0
|
Support copy/paste of objects in modeler - Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [5479](https://issues.qgis.org/issues/5479)
Redmine category:processing/modeller
Assignee: Victor Olaya
---
I´d like to see support for copy/paste of objects in modeler. Sometimes you need multiple versions of the same tool, so being able to copy/paste would help.
---
Related issue(s): #24190 (duplicates)
Redmine related issue(s): [16280](https://issues.qgis.org/issues/16280)
---
|
process
|
support copy paste of objects in modeler author name magnus nilsson magnus nilsson original redmine issue redmine category processing modeller assignee victor olaya i´d like to see support for copy paste of objects in modeler sometimes you need multiple versions of the same tool so being able to copy paste would help related issue s duplicates redmine related issue s
| 1
|
6,197
| 9,105,420,578
|
IssuesEvent
|
2019-02-20 20:45:11
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Process kill tree - No such process when killing firefox
|
area-System.Diagnostics.Process
|
Hey there,
we've started a new process and would like to use the new `Kill(true)` method for killing an entire process tree. However we encountered a special behaviour when trying to kill firefox or chrome.
After calling the `Kill(true)` method of a process object starting firefox or chrome we get an `Win32Exception - No such process` exception.
```
Unhandled Exception: System.AggregateException: Not all processes in process tree could be terminated. (No such process) (No such process) (No such process) ---> System.ComponentModel.Win32Exception: No such process
at System.Diagnostics.Process.Stop()
at System.Diagnostics.Process.KillTree()
--- End of inner exception stack trace ---
at System.Diagnostics.Process.Kill(Boolean entireProcessTree)
at MagicApplication.Daemon.ProcessManagement.ManagedProcess.StopAsync(CancellationToken cancellationToken) in /home/user/Repositories/MagicApplication/MagicApplication.Daemon/ProcessManagement/ManagedProcess.cs:line 249
at MagicApplication.Daemon.Daemon.StopAsync() in /home/user/Repositories/MagicApplication/MagicApplication.Daemon/Daemon.cs:line 34
at MagicApplication.Daemon.Program.Main(String[] args) in /home/user/Repositories/MagicApplication/MagicApplication.Daemon/Program.cs:line 39
at MagicApplication.Daemon.Program.<Main>(String[] args)
```
When understanding the source code right it seems that first the child process list is retrieved and afterwards the main process is killed.
https://github.com/dotnet/corefx/blob/81bd671efda5db20fae1eb381aeff2cf8ea727ac/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs#L99
But firefox automatically clean's up it's children so that the `Stop()` call on the entries of the child process list fails with a non zero exit code: `ESRCH`. This fires a `Win32Exception` and not as handled an `InvalidOperationException`
OS: Ubuntu 18.04
|
1.0
|
Process kill tree - No such process when killing firefox - Hey there,
we've started a new process and would like to use the new `Kill(true)` method for killing an entire process tree. However we encountered a special behaviour when trying to kill firefox or chrome.
After calling the `Kill(true)` method of a process object starting firefox or chrome we get an `Win32Exception - No such process` exception.
```
Unhandled Exception: System.AggregateException: Not all processes in process tree could be terminated. (No such process) (No such process) (No such process) ---> System.ComponentModel.Win32Exception: No such process
at System.Diagnostics.Process.Stop()
at System.Diagnostics.Process.KillTree()
--- End of inner exception stack trace ---
at System.Diagnostics.Process.Kill(Boolean entireProcessTree)
at MagicApplication.Daemon.ProcessManagement.ManagedProcess.StopAsync(CancellationToken cancellationToken) in /home/user/Repositories/MagicApplication/MagicApplication.Daemon/ProcessManagement/ManagedProcess.cs:line 249
at MagicApplication.Daemon.Daemon.StopAsync() in /home/user/Repositories/MagicApplication/MagicApplication.Daemon/Daemon.cs:line 34
at MagicApplication.Daemon.Program.Main(String[] args) in /home/user/Repositories/MagicApplication/MagicApplication.Daemon/Program.cs:line 39
at MagicApplication.Daemon.Program.<Main>(String[] args)
```
When understanding the source code right it seems that first the child process list is retrieved and afterwards the main process is killed.
https://github.com/dotnet/corefx/blob/81bd671efda5db20fae1eb381aeff2cf8ea727ac/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs#L99
But firefox automatically clean's up it's children so that the `Stop()` call on the entries of the child process list fails with a non zero exit code: `ESRCH`. This fires a `Win32Exception` and not as handled an `InvalidOperationException`
OS: Ubuntu 18.04
|
process
|
process kill tree no such process when killing firefox hey there we ve started a new process and would like to use the new kill true method for killing an entire process tree however we encountered a special behaviour when trying to kill firefox or chrome after calling the kill true method of a process object starting firefox or chrome we get an no such process exception unhandled exception system aggregateexception not all processes in process tree could be terminated no such process no such process no such process system componentmodel no such process at system diagnostics process stop at system diagnostics process killtree end of inner exception stack trace at system diagnostics process kill boolean entireprocesstree at magicapplication daemon processmanagement managedprocess stopasync cancellationtoken cancellationtoken in home user repositories magicapplication magicapplication daemon processmanagement managedprocess cs line at magicapplication daemon daemon stopasync in home user repositories magicapplication magicapplication daemon daemon cs line at magicapplication daemon program main string args in home user repositories magicapplication magicapplication daemon program cs line at magicapplication daemon program string args when understanding the source code right it seems that first the child process list is retrieved and afterwards the main process is killed but firefox automatically clean s up it s children so that the stop call on the entries of the child process list fails with a non zero exit code esrch this fires a and not as handled an invalidoperationexception os ubuntu
| 1
|
121,102
| 12,104,492,258
|
IssuesEvent
|
2020-04-20 20:18:56
|
perslab/CELLEX
|
https://api.github.com/repos/perslab/CELLEX
|
opened
|
Update parameter name in CELLEX wiki
|
documentation
|
the [CELLEX workflow wiki page](https://github.com/perslab/CELLEX/wiki/CELLEX-workflow) gives the following example of running `ESObject`:
`cellex.ESObject(df=data, annotation=metadata, normalize=False, verbose=True)`
however the name of the first argument should be 'data'
|
1.0
|
Update parameter name in CELLEX wiki - the [CELLEX workflow wiki page](https://github.com/perslab/CELLEX/wiki/CELLEX-workflow) gives the following example of running `ESObject`:
`cellex.ESObject(df=data, annotation=metadata, normalize=False, verbose=True)`
however the name of the first argument should be 'data'
|
non_process
|
update parameter name in cellex wiki the gives the following example of running esobject cellex esobject df data annotation metadata normalize false verbose true however the name of the first argument should be data
| 0
|
20,677
| 27,349,029,805
|
IssuesEvent
|
2023-02-27 08:09:34
|
Graylog2/graylog2-server
|
https://api.github.com/repos/Graylog2/graylog2-server
|
closed
|
Excessive WARN logging of pipeline rule output
|
processing bug triaged
|
We have recently noted that excessive WARN logging can negatively impact the customer experience, ref: https://github.com/Graylog2/graylog-plugin-enterprise/issues/3481
Today we observed WARN output for a pipeline rule that was emitting approximately ~400 messages per second, and for quite some time:

This was specifically noted in a Graylog Cloud installation, where customer does not otherwise have visibility this is happening.
In terms of pipeline rule outcome adding information to the message, this does not seem to have negatively impacted customer's data, so this message was more or less emitting into the void without consequence otherwise.
HS-1395765433
|
1.0
|
Excessive WARN logging of pipeline rule output - We have recently noted that excessive WARN logging can negatively impact the customer experience, ref: https://github.com/Graylog2/graylog-plugin-enterprise/issues/3481
Today we observed WARN output for a pipeline rule that was emitting approximately ~400 messages per second, and for quite some time:

This was specifically noted in a Graylog Cloud installation, where customer does not otherwise have visibility this is happening.
In terms of pipeline rule outcome adding information to the message, this does not seem to have negatively impacted customer's data, so this message was more or less emitting into the void without consequence otherwise.
HS-1395765433
|
process
|
excessive warn logging of pipeline rule output we have recently noted that excessive warn logging can negatively impact the customer experience ref today we observed warn output for a pipeline rule that was emitting approximately messages per second and for quite some time this was specifically noted in a graylog cloud installation where customer does not otherwise have visibility this is happening in terms of pipeline rule outcome adding information to the message this does not seem to have negatively impacted customer s data so this message was more or less emitting into the void without consequence otherwise hs
| 1
|
281,319
| 30,888,701,641
|
IssuesEvent
|
2023-08-04 01:42:33
|
nidhi7598/linux-4.1.15_CVE-2019-10220
|
https://api.github.com/repos/nidhi7598/linux-4.1.15_CVE-2019-10220
|
reopened
|
CVE-2017-7187 (High) detected in linuxlinux-4.4.302
|
Mend: dependency security vulnerability
|
## CVE-2017-7187 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.1.15_CVE-2019-10220/commit/6a0d304d962ca933d73f507ce02157ef2791851c">6a0d304d962ca933d73f507ce02157ef2791851c</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The sg_ioctl function in drivers/scsi/sg.c in the Linux kernel through 4.10.4 allows local users to cause a denial of service (stack-based buffer overflow) or possibly have unspecified other impact via a large command size in an SG_NEXT_CMD_LEN ioctl call, leading to out-of-bounds write access in the sg_write function.
<p>Publish Date: 2017-03-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-7187>CVE-2017-7187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7187">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7187</a></p>
<p>Release Date: 2017-03-20</p>
<p>Fix Resolution: v4.11-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-7187 (High) detected in linuxlinux-4.4.302 - ## CVE-2017-7187 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.1.15_CVE-2019-10220/commit/6a0d304d962ca933d73f507ce02157ef2791851c">6a0d304d962ca933d73f507ce02157ef2791851c</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The sg_ioctl function in drivers/scsi/sg.c in the Linux kernel through 4.10.4 allows local users to cause a denial of service (stack-based buffer overflow) or possibly have unspecified other impact via a large command size in an SG_NEXT_CMD_LEN ioctl call, leading to out-of-bounds write access in the sg_write function.
<p>Publish Date: 2017-03-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-7187>CVE-2017-7187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7187">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7187</a></p>
<p>Release Date: 2017-03-20</p>
<p>Fix Resolution: v4.11-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers scsi sg c vulnerability details the sg ioctl function in drivers scsi sg c in the linux kernel through allows local users to cause a denial of service stack based buffer overflow or possibly have unspecified other impact via a large command size in an sg next cmd len ioctl call leading to out of bounds write access in the sg write function publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
27,590
| 21,951,932,124
|
IssuesEvent
|
2022-05-24 08:41:48
|
zer0Kerbal/ProbiTronics
|
https://api.github.com/repos/zer0Kerbal/ProbiTronics
|
closed
|
Asset Updates
|
issue: texture issue: sound issue: config type: infrastructure
|
### Asset Updates
* [ ] create Assets/ folder
* [ ] convert from mesh to MODEL
* [ ] rename
* [ ] models to unique names
* [ ] textures to unique names
* [ ] update
* [ ] model pointers (.png et al to .dds)
* [ ] model texture pointers to new names
* [ ] relocate assets to Assets/
* [ ] eliminate
* [ ] duplicate textures
* [ ] duplicate models
* [ ] relocate part.cfg to Parts/
* [x] #34
|
1.0
|
Asset Updates - ### Asset Updates
* [ ] create Assets/ folder
* [ ] convert from mesh to MODEL
* [ ] rename
* [ ] models to unique names
* [ ] textures to unique names
* [ ] update
* [ ] model pointers (.png et al to .dds)
* [ ] model texture pointers to new names
* [ ] relocate assets to Assets/
* [ ] eliminate
* [ ] duplicate textures
* [ ] duplicate models
* [ ] relocate part.cfg to Parts/
* [x] #34
|
non_process
|
asset updates asset updates create assets folder convert from mesh to model rename models to unique names textures to unique names update model pointers png et al to dds model texture pointers to new names relocate assets to assets eliminate duplicate textures duplicate models relocate part cfg to parts
| 0
|
455,701
| 13,131,696,860
|
IssuesEvent
|
2020-08-06 17:29:41
|
kubernetes-sigs/cluster-api-provider-vsphere
|
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-vsphere
|
closed
|
waitForIPAddresses does not appear to be working for v0.7.0-beta.0
|
kind/bug priority/awaiting-more-evidence
|
/kind bug
Increased machine deployment to initiate creation of new vSphere machine. The vSphereMachineTemplate is configured with `dhcp4: true` and no `ipAddrs`. The vsphereVM was created and the VMware Guest immediately created in vSphere.
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: VSphereMachineTemplate
metadata:
name: ts-sharedplatform-west-rck-nonprod
namespace: default
spec:
template:
spec:
network:
devices:
- dhcp4: true
dhcp6: false
gateway4: 10.7.7.254
nameservers:
- 10.10.10.10
networkName: /rck/network/Distributed\ Switches/rck-nonprod-lin-clu-01/rck3236trunk
```
**What did you expect to happen:**
waitForIPAddresses should hold up the creation of the VM until the vspheremachine `ipAddrs` is set.
```log
I0806 15:13:19.891770 1 controller.go:272] controller-runtime/controller "msg"="Successfully Reconciled" "controller"="vspheremachine" "name"="ts-sharedplatform-west-rck-nonprod-rrsmc" "namespace"="default"
I0806 15:13:19.911529 1 vspheremachine_controller.go:579] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="status.ready not found" "vmGVK"="infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereVM" "vmName"="ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm" "vmNamespace"="default"
I0806 15:13:19.914615 1 vspheremachine_controller.go:355] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="waiting for ready state"
I0806 15:13:19.915849 1 controller.go:272] controller-runtime/controller "msg"="Successfully Reconciled" "controller"="vspheremachine" "name"="ts-sharedplatform-west-rck-nonprod-rrsmc" "namespace"="default"
I0806 15:13:20.565732 1 util.go:84] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm "msg"="using inventory path to find vm" "path"="/rck/vm/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm"
I0806 15:13:20.606828 1 clone.go:46] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="starting clone process"
I0806 15:13:20.606859 1 clone.go:50] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="applied bootstrap data to VM clone spec"
I0806 15:13:20.716227 1 clone.go:65] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="linked clone requested"
I0806 15:13:20.716256 1 clone.go:69] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="searching for current snapshot"
I0806 15:13:21.486484 1 clone.go:177] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="cloning machine" "cloneType"="fullClone" "name"="ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm" "namespace"="default"
I0806 15:13:21.505804 1 vspheremachine_controller.go:579] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="status.ready not found" "vmGVK"="infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereVM" "vmName"="ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm" "vmNamespace"="default"
I0806 15:13:21.505823 1 vspheremachine_controller.go:355] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="waiting for ready state"
```
**Anything else you would like to add:**
Management cluster was upgraded with `infrastructure-components.yaml` and not deployed new.
**Environment:**
- Cluster-api-provider-vsphere version: v0.7.0-beta.0
|
1.0
|
waitForIPAddresses does not appear to be working for v0.7.0-beta.0 - /kind bug
Increased machine deployment to initiate creation of new vSphere machine. The vSphereMachineTemplate is configured with `dhcp4: true` and no `ipAddrs`. The vsphereVM was created and the VMware Guest immediately created in vSphere.
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: VSphereMachineTemplate
metadata:
name: ts-sharedplatform-west-rck-nonprod
namespace: default
spec:
template:
spec:
network:
devices:
- dhcp4: true
dhcp6: false
gateway4: 10.7.7.254
nameservers:
- 10.10.10.10
networkName: /rck/network/Distributed\ Switches/rck-nonprod-lin-clu-01/rck3236trunk
```
**What did you expect to happen:**
waitForIPAddresses should hold up the creation of the VM until the vspheremachine `ipAddrs` is set.
```log
I0806 15:13:19.891770 1 controller.go:272] controller-runtime/controller "msg"="Successfully Reconciled" "controller"="vspheremachine" "name"="ts-sharedplatform-west-rck-nonprod-rrsmc" "namespace"="default"
I0806 15:13:19.911529 1 vspheremachine_controller.go:579] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="status.ready not found" "vmGVK"="infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereVM" "vmName"="ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm" "vmNamespace"="default"
I0806 15:13:19.914615 1 vspheremachine_controller.go:355] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="waiting for ready state"
I0806 15:13:19.915849 1 controller.go:272] controller-runtime/controller "msg"="Successfully Reconciled" "controller"="vspheremachine" "name"="ts-sharedplatform-west-rck-nonprod-rrsmc" "namespace"="default"
I0806 15:13:20.565732 1 util.go:84] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm "msg"="using inventory path to find vm" "path"="/rck/vm/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm"
I0806 15:13:20.606828 1 clone.go:46] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="starting clone process"
I0806 15:13:20.606859 1 clone.go:50] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="applied bootstrap data to VM clone spec"
I0806 15:13:20.716227 1 clone.go:65] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="linked clone requested"
I0806 15:13:20.716256 1 clone.go:69] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="searching for current snapshot"
I0806 15:13:21.486484 1 clone.go:177] capv-controller-manager/vspherevm-controller/default/ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm/vcenter "msg"="cloning machine" "cloneType"="fullClone" "name"="ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm" "namespace"="default"
I0806 15:13:21.505804 1 vspheremachine_controller.go:579] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="status.ready not found" "vmGVK"="infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereVM" "vmName"="ts-sharedplatform-west-rck-nonprod-md-0-7dcb5c6fc6-865lm" "vmNamespace"="default"
I0806 15:13:21.505823 1 vspheremachine_controller.go:355] capv-controller-manager/vspheremachine-controller/default/ts-sharedplatform-west-rck-nonprod-rrsmc "msg"="waiting for ready state"
```
**Anything else you would like to add:**
Management cluster was upgraded with `infrastructure-components.yaml` and not deployed new.
**Environment:**
- Cluster-api-provider-vsphere version: v0.7.0-beta.0
|
non_process
|
waitforipaddresses does not appear to be working for beta kind bug increased machine deployment to initiate creation of new vsphere machine the vspheremachinetemplate is configured with true and no ipaddrs the vspherevm was created and the vmware guest immediately created in vsphere yaml apiversion infrastructure cluster x io kind vspheremachinetemplate metadata name ts sharedplatform west rck nonprod namespace default spec template spec network devices true false nameservers networkname rck network distributed switches rck nonprod lin clu what did you expect to happen waitforipaddresses should hold up the creation of the vm until the vspheremachine ipaddrs is set log controller go controller runtime controller msg successfully reconciled controller vspheremachine name ts sharedplatform west rck nonprod rrsmc namespace default vspheremachine controller go capv controller manager vspheremachine controller default ts sharedplatform west rck nonprod rrsmc msg status ready not found vmgvk infrastructure cluster x io kind vspherevm vmname ts sharedplatform west rck nonprod md vmnamespace default vspheremachine controller go capv controller manager vspheremachine controller default ts sharedplatform west rck nonprod rrsmc msg waiting for ready state controller go controller runtime controller msg successfully reconciled controller vspheremachine name ts sharedplatform west rck nonprod rrsmc namespace default util go capv controller manager vspherevm controller default ts sharedplatform west rck nonprod md msg using inventory path to find vm path rck vm ts sharedplatform west rck nonprod md clone go capv controller manager vspherevm controller default ts sharedplatform west rck nonprod md vcenter msg starting clone process clone go capv controller manager vspherevm controller default ts sharedplatform west rck nonprod md vcenter msg applied bootstrap data to vm clone spec clone go capv controller manager vspherevm controller default ts sharedplatform west rck nonprod md 
vcenter msg linked clone requested clone go capv controller manager vspherevm controller default ts sharedplatform west rck nonprod md vcenter msg searching for current snapshot clone go capv controller manager vspherevm controller default ts sharedplatform west rck nonprod md vcenter msg cloning machine clonetype fullclone name ts sharedplatform west rck nonprod md namespace default vspheremachine controller go capv controller manager vspheremachine controller default ts sharedplatform west rck nonprod rrsmc msg status ready not found vmgvk infrastructure cluster x io kind vspherevm vmname ts sharedplatform west rck nonprod md vmnamespace default vspheremachine controller go capv controller manager vspheremachine controller default ts sharedplatform west rck nonprod rrsmc msg waiting for ready state anything else you would like to add management cluster was upgraded with infrastructure components yaml and not deployed new environment cluster api provider vsphere version beta
| 0
|
4,816
| 7,703,330,492
|
IssuesEvent
|
2018-05-21 07:59:35
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Function called twice when location keyword used
|
AREA: server SYSTEM: resource processing TYPE: bug
|
Consider test ``page.html`` with following content:
```
<script>
var getMyLocation = function() {
document.write("getMyLocation();");
return "1355 Market Street";
};
var updateLocation = function(location) {
location = getMyLocation();
return location;
};
updateLocation("399 Fremont Street");
</script>
```
When this page is opened directly in a browser, it outputs:
``getMyLocation();``.
When this page is opened via TestCafe, it outputs:
``getMyLocation();getMyLocation();``.
Function ``getMyLocation()`` is called twice. Depending on its nature it can turn the application under test into a completely unpredictable state.
|
1.0
|
Function called twice when location keyword used - Consider test ``page.html`` with following content:
```
<script>
var getMyLocation = function() {
document.write("getMyLocation();");
return "1355 Market Street";
};
var updateLocation = function(location) {
location = getMyLocation();
return location;
};
updateLocation("399 Fremont Street");
</script>
```
When this page is opened directly in a browser, it outputs:
``getMyLocation();``.
When this page is opened via TestCafe, it outputs:
``getMyLocation();getMyLocation();``.
Function ``getMyLocation()`` is called twice. Depending on its nature it can turn the application under test into a completely unpredictable state.
|
process
|
function called twice when location keyword used consider test page html with following content var getmylocation function document write getmylocation return market street var updatelocation function location location getmylocation return location updatelocation fremont street when this page is opened directly in a browser it outputs getmylocation when this page is opened via testcafe it outputs getmylocation getmylocation function getmylocation is called twice depending on it s nature it can turn the application under test into a completely unpredictable state
| 1
|
685,310
| 23,452,253,442
|
IssuesEvent
|
2022-08-16 04:55:10
|
unicef-drp/GeoSight
|
https://api.github.com/repos/unicef-drp/GeoSight
|
closed
|
Swipe tool or button to hide indicator layer
|
:clock2: 2 🦂 Medium Priority BATCH 2
|
When looking on a map, user may need to switch off the indicator layer to be able to easily see the basemap or context layers.
This could be achieved by a dedicated button on a map (1) or by adding a swipe tool (2) or both.
Problem:

- some layers, e.g. areas of control, are not easily visible. User may want to turn off / hide the indicator layer to easily see the basemap or context layers.
Solution:
1) Hide Indicator layer button
2) Swipe Tool

|
1.0
|
Swipe tool or button to hide indicator layer - When looking on a map, user may need to switch off the indicator layer to be able to easily see the basemap or context layers.
This could be achieved by a dedicated button on a map (1) or by adding a swipe tool (2) or both.
Problem:

- some layers, e.g. areas of control, are not easily visible. User may want to turn off / hide the indicator layer to easily see the basemap or context layers.
Solution:
1) Hide Indicator layer button
2) Swipe Tool

|
non_process
|
swipe tool or button to hide indicator layer when looking on a map user may need to switch off the indicator layer to be able to easily see the basemap or context layers this could be achieved by a dedicated button on a map or by adding a swipe tool or both problem some layers e g areas of control are not easily visible user may want to turn off hide the indicator layer to easily see the basemap or context layers solution hide indicator layer button swipe tool
| 0
|
802,349
| 28,933,404,074
|
IssuesEvent
|
2023-05-09 02:42:53
|
milvus-io/milvus
|
https://api.github.com/repos/milvus-io/milvus
|
closed
|
[Bug]: [Nightly]Nightly test has taken more time on average than before and sometimes failed for timeout
|
kind/bug priority/critical-urgent ci/e2e
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: 43a9e17
- Deployment mode(standalone or cluster):standalone
- MQ type(rocksmq, pulsar or kafka): rocksmq
- SDK version(e.g. pymilvus v2.0.0rc2):2.4.0.dev7
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
Nightly test has taken more time on average than before and sometimes failed for timeout.
Milvus not panic.
Latest:
https://jenkins.milvus.io:18080/blue/organizations/jenkins/Milvus%20Nightly%20CI/detail/master/340/pipeline/123
### Expected Behavior
work as before
### Steps To Reproduce
_No response_
### Milvus Log
[artifacts-milvus-standalone-nightly-340-pymilvus-e2e-logs.tar.gz](https://jenkins.milvus.io:18080/job/Milvus%20Nightly%20CI/job/master/340/artifact/artifacts-milvus-standalone-nightly-340-pymilvus-e2e-logs.tar.gz)
### Anything else?
_No response_
|
1.0
|
[Bug]: [Nightly]Nightly test has taken more time on average than before and sometimes failed for timeout - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: 43a9e17
- Deployment mode(standalone or cluster):standalone
- MQ type(rocksmq, pulsar or kafka): rocksmq
- SDK version(e.g. pymilvus v2.0.0rc2):2.4.0.dev7
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
Nightly test has taken more time on average than before and sometimes failed for timeout.
Milvus not panic.
Latest:
https://jenkins.milvus.io:18080/blue/organizations/jenkins/Milvus%20Nightly%20CI/detail/master/340/pipeline/123
### Expected Behavior
work as before
### Steps To Reproduce
_No response_
### Milvus Log
[artifacts-milvus-standalone-nightly-340-pymilvus-e2e-logs.tar.gz](https://jenkins.milvus.io:18080/job/Milvus%20Nightly%20CI/job/master/340/artifact/artifacts-milvus-standalone-nightly-340-pymilvus-e2e-logs.tar.gz)
### Anything else?
_No response_
|
non_process
|
nightly test has taken more time on average than before and sometimes failed for timeout is there an existing issue for this i have searched the existing issues environment markdown milvus version deployment mode standalone or cluster standalone mq type rocksmq pulsar or kafka rocksmq sdk version e g pymilvus os ubuntu or centos cpu memory gpu others current behavior nightly test has taken more time on average than before and sometimes failed for timeout milvus not panic latest expected behavior work as before steps to reproduce no response milvus log anything else no response
| 0
|
9,423
| 12,417,350,256
|
IssuesEvent
|
2020-05-22 20:29:21
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Add a distance analysis tool
|
Feature Request Processing
|
Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [9950](https://issues.qgis.org/issues/9950)
Redmine category:processing/core
Assignee: Victor Olaya
---
Among QGIS out-of-the-box tools we really miss a tool that can compute min (and max) distances between features of the same layer or different layers.
The distance matrix tool just works for points layers and does not create a line output representing the min distances.
The v.distance GRASS module works only to compute distances from points to other features; it is anyway not working in Processing because this module does not produce a new output but modifies the input (there is a ticket about this issue).
To compute min/max distances the user is left with the Spatialite/PostGIS solution, which is out of reach for most people.
PS
SL/PostGIS seems anyway the only solution when the number of features is relatively high, as both the distance matrix tool and native grass v.distance are really slow.
|
1.0
|
Add a distance analysis tool - Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [9950](https://issues.qgis.org/issues/9950)
Redmine category:processing/core
Assignee: Victor Olaya
---
Among QGIS out-of-the-box tools we really miss a tool that can compute min (and max) distances between features of the same layer or different layers.
The distance matrix tool just works for points layers and does not create a line output representing the min distances.
The v.distance GRASS module works only to compute distances from points to other features; it is anyway not working in Processing because this module does not produce a new output but modifies the input (there is a ticket about this issue).
To compute min/max distances the user is left with the Spatialite/PostGIS solution, which is out of reach for most people.
PS
SL/PostGIS seems anyway the only solution when the number of features is relatively high, as both the distance matrix tool and native grass v.distance are really slow.
|
process
|
add a distance analysis tool author name giovanni manghi gioman original redmine issue redmine category processing core assignee victor olaya among qgis out of the box tools we really miss a tool that can compute min and max distances between features of the same layer or different layers the distance matrix tool just works for points layers and does not create a line output representing the min distances the v distance grass module works only to compute distances from points to other features it is anyway not working in processing because this module does not produce a new output but modifies the input there is a ticket about this issue to compute min max distances the user is left with the spatialite postgis solution that is out of reach for most of the people ps sl postgis seems anyway the only solution when the number of features is relatively high as both the distance matrix tool and native grass v distance are really slow
| 1
|
494,095
| 14,245,120,390
|
IssuesEvent
|
2020-11-19 08:10:07
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.biccamera.com - site is not usable
|
browser-firefox engine-gecko os-mac priority-normal
|
<!-- @browser: Firefox 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:79.0) Gecko/20100101 Firefox/79.0 -->
<!-- @reported_with: unknown -->
**URL**: https://www.biccamera.com/bc/item/8636715/
**Browser / Version**: Firefox 79.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Safari 14.0.1 and Vivaldi 3.4
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
1. Go to https://www.biccamera.com/bc/item/8636715/
2. Just below the price information, click on the link that says "在庫のある店舗を探す".
3. Enjoy the cool (unintended) special effects! Keyboard shortcuts don't work to close the window/browser so you have to kill the browser in Activity Monitor. See the video here: https://imgur.com/tFdIGJB
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/e8222cbb-b3c4-4f40-b70f-c880dff75705.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.biccamera.com - site is not usable - <!-- @browser: Firefox 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:79.0) Gecko/20100101 Firefox/79.0 -->
<!-- @reported_with: unknown -->
**URL**: https://www.biccamera.com/bc/item/8636715/
**Browser / Version**: Firefox 79.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Safari 14.0.1 and Vivaldi 3.4
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
1. Go to https://www.biccamera.com/bc/item/8636715/
2. Just below the price information, click on the link that says "在庫のある店舗を探す".
3. Enjoy the cool (unintended) special effects! Keyboard shortcuts don't work to close the window/browser so you have to kill the browser in Activity Monitor. See the video here: https://imgur.com/tFdIGJB
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/e8222cbb-b3c4-4f40-b70f-c880dff75705.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox operating system mac os x tested another browser yes safari and vivaldi problem type site is not usable description page not loading correctly steps to reproduce go to just below the price information click on the link that says 在庫のある店舗を探す enjoy the cool unintended special effects keyboard shortcuts don t work to close the window browser so you have to kill the browser in activity monitor see the video here view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
80,077
| 7,739,059,839
|
IssuesEvent
|
2018-05-28 14:19:53
|
italia/spid
|
https://api.github.com/repos/italia/spid
|
closed
|
Controllo metadata - Comune di Robilante
|
metadata nuovo md test
|
Buongiorno,
per conto del Comune di Robilante, richiediamo la verifica dei metadati pubblicati all'url:
https://sportellodigitale.comune.robilante.cn.it/004185/SPID/metadata
Grazie e cordiali saluti
Federico Albesano
|
1.0
|
Metadata check - Comune di Robilante - Good morning,
on behalf of the Comune di Robilante, we request verification of the metadata published at the URL:
https://sportellodigitale.comune.robilante.cn.it/004185/SPID/metadata
Thank you and best regards
Federico Albesano
|
non_process
|
metadata check comune di robilante good morning on behalf of the comune di robilante we request verification of the metadata published at the url thank you and best regards federico albesano
| 0
|
16,225
| 20,760,407,996
|
IssuesEvent
|
2022-03-15 15:42:01
|
googleapis/python-datastream
|
https://api.github.com/repos/googleapis/python-datastream
|
closed
|
Release as stable
|
type: process api: datastream
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: March 3, 2022**
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as stable - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: March 3, 2022**
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as stable required days elapsed since last beta release with new api surface release on after march server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
5,233
| 8,033,252,119
|
IssuesEvent
|
2018-07-29 02:44:00
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
grabABI: improvements
|
apps-all status-inprocess type-enhancement
|
It should run with a filename as well as an address. If the user gives it a file name, it should insist on an address, and take either an ABI file or a .sol file.
It should be able to run completely without a running node. (As should makeClass.)
If it takes a .sol file and --generate is on, it should copy the source code of the function into a comment just before the c++ code.
|
1.0
|
grabABI: improvements - It should run with a filename as well as an address. If the user gives it a file name, it should insist on an address, and take either an ABI file or a .sol file.
It should be able to run completely without a running node. (As should makeClass.)
If it takes a .sol file and --generate is on, it should copy the source code of the function into a comment just before the c++ code.
|
process
|
grababi improvements it should run with a filename as well as an address if the user gives it a file name it should insist on an address and take either an abi file or a sol file it should be able to run completely without a running node as should makeclass if it takes a sol file and generate is on it should copy the source code of the function into a comment just before the c code
| 1
|
18,632
| 24,580,379,530
|
IssuesEvent
|
2022-10-13 15:11:46
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [Consent API] Consent pdf is not getting displayed in the participant manager
|
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Install the mobile app
2. Sign in / Sign up
3. Enroll to the study
4. Now, Go back to PM
5. Go to participants details screen and Verify
**AR:** Consent pdf is not getting displayed in the participant manager
**ER:** Consent pdf should get displayed in the participant manager

|
3.0
|
[PM] [Consent API] Consent pdf is not getting displayed in the participant manager - **Steps:**
1. Install the mobile app
2. Sign in / Sign up
3. Enroll to the study
4. Now, Go back to PM
5. Go to participants details screen and Verify
**AR:** Consent pdf is not getting displayed in the participant manager
**ER:** Consent pdf should get displayed in the participant manager

|
process
|
consent pdf is not getting displayed in the participant manager steps install the mobile app sign in sign up enroll to the study now go back to pm go to participants details screen and verify ar consent pdf is not getting displayed in the participant manager er consent pdf should get displayed in the participant manager
| 1
|
400,326
| 27,278,859,507
|
IssuesEvent
|
2023-02-23 08:29:56
|
mercedes-benz/sechub
|
https://api.github.com/repos/mercedes-benz/sechub
|
closed
|
Client documentation: Update docs+examples for "data section"
|
documentation client
|
## To do:
Update documentation
- getting started
- client docs
- examples
to reflect that the data section in the JSON configfile is now the standard way to go.
Also:
- document listJobs action
- overhaul client documentation
|
1.0
|
Client documentation: Update docs+examples for "data section" - ## To do:
Update documentation
- getting started
- client docs
- examples
to reflect that the data section in the JSON configfile is now the standard way to go.
Also:
- document listJobs action
- overhaul client documentation
|
non_process
|
client documentation update docs examples for data section to do update documentation getting started client docs examples to reflect that the data section in the json configfile is now the standard way to go also document listjobs action overhaul client documentation
| 0
|
11,418
| 14,244,484,540
|
IssuesEvent
|
2020-11-19 07:00:46
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Mark tables for hiding/deletion in Data Explorer
|
p1 team:data processing
|
### Description
Ability to mark tables in Data Explorer "Tables" view for hiding or deletion when logs are no longer being ingested from that source.
Link to feature: https://runpanther.productboard.com/feature-board/planning/features/6196424
### Related Services
Data Explorer
### Designs
TBD
### Acceptance Criteria
- in Data Explorer "Tables" view, enable the ability to select tables to perform the following actions:
- "hide" table, which would grey out the option and indicate that the table is unused
- "delete" table, which would eliminate the table from the list of tables entirely.
|
1.0
|
Mark tables for hiding/deletion in Data Explorer - ### Description
Ability to mark tables in Data Explorer "Tables" view for hiding or deletion when logs are no longer being ingested from that source.
Link to feature: https://runpanther.productboard.com/feature-board/planning/features/6196424
### Related Services
Data Explorer
### Designs
TBD
### Acceptance Criteria
- in Data Explorer "Tables" view, enable the ability to select tables to perform the following actions:
- "hide" table, which would grey out the option and indicate that the table is unused
- "delete" table, which would eliminate the table from the list of tables entirely.
|
process
|
mark tables for hiding deletion in data explorer description ability to mark tables in data explorer tables view for hiding or deletion when logs are no longer being ingested from that source link to feature related services data explorer designs tbd acceptance criteria in data explorer tables view enable the ability to select tables to perform the following actions hide table which would grey out the option and indicate that the table is unused delete table which would eliminate the table from the list of tables entirely
| 1
|
70,734
| 8,576,336,435
|
IssuesEvent
|
2018-11-12 20:03:27
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Hover label/breadcrumb is confusingly positioned for floating blocks
|
Needs Design Feedback UI Components [Type] Enhancement
|
## The issue
When a block is using float alignment (the Image block, for example), the block label/breadcrumb shown on hover is not displayed in the right spot. It is always shown on the far right, regardless of the width of the floated block.

## Related issues
- #6288
- #7492
- #7500
|
1.0
|
Hover label/breadcrumb is confusingly positioned for floating blocks - ## The issue
When a block is using float alignment (the Image block, for example), the block label/breadcrumb shown on hover is not displayed in the right spot. It is always shown on the far right, regardless of the width of the floated block.

## Related issues
- #6288
- #7492
- #7500
|
non_process
|
hover label breadcrumb is confusingly positioned for floating blocks the issue when a block is using float alignment the image block for example the block label breadcrumb shown on hover is not displayed in the right spot it is always shown on the far right regardless of the width of the floated block related issues
| 0
|
195,255
| 15,504,963,464
|
IssuesEvent
|
2021-03-11 14:51:21
|
MarlinFirmware/Marlin
|
https://api.github.com/repos/MarlinFirmware/Marlin
|
closed
|
[BUG] (Testing)
|
Needs: Discussion Needs: Documentation Needs: More Data Needs: Patch Needs: Testing Needs: Work
|
<!--
Please follow the instructions below. Failure to do so may result in your issue being closed.
### Before Reporting a Bug
1. Test with the `bugfix-2.0.x` branch to see whether the issue still exists.
2. Get troubleshooting help from the Marlin community to confirm it's a bug and not just a configuration error. Links at https://github.com/MarlinFirmware/Marlin/issues/new/choose
### Instructions
1. Fill out every section of the template below.
2. Always attach configuration files, regardless of whether you think they are involved.
3. Read and understand Marlin's Code of Conduct. By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/MarlinFirmware/Marlin/blob/master/.github/code_of_conduct.md
-->
### Bug Description
<!-- Describe the bug in this section. (You can remove this invisible comment.) -->
### Configuration Files
**Required:** Include a ZIP file containing `Configuration.h` and `Configuration_adv.h`.
If you've made any other modifications describe them in detail here.
### Steps to Reproduce
<!-- Describe the steps needed to reproduce the issue. (You can remove this invisible comment.) -->
1. [First Step]
2. [Second Step]
3. [and so on...]
**Expected behavior:**
<!-- Describe what you expected to happen here. (You can remove this invisible comment.) -->
**Actual behavior:**
<!-- Describe what actually happens here. (You can remove this invisible comment.) -->
#### Additional Information
* Provide pictures or links to videos that clearly demonstrate the issue.
* See [Contributing to Marlin](https://github.com/MarlinFirmware/Marlin/blob/2.0.x/.github/contributing.md) for additional guidelines.
|
1.0
|
[BUG] (Testing) - <!--
Please follow the instructions below. Failure to do so may result in your issue being closed.
### Before Reporting a Bug
1. Test with the `bugfix-2.0.x` branch to see whether the issue still exists.
2. Get troubleshooting help from the Marlin community to confirm it's a bug and not just a configuration error. Links at https://github.com/MarlinFirmware/Marlin/issues/new/choose
### Instructions
1. Fill out every section of the template below.
2. Always attach configuration files, regardless of whether you think they are involved.
3. Read and understand Marlin's Code of Conduct. By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/MarlinFirmware/Marlin/blob/master/.github/code_of_conduct.md
-->
### Bug Description
<!-- Describe the bug in this section. (You can remove this invisible comment.) -->
### Configuration Files
**Required:** Include a ZIP file containing `Configuration.h` and `Configuration_adv.h`.
If you've made any other modifications describe them in detail here.
### Steps to Reproduce
<!-- Describe the steps needed to reproduce the issue. (You can remove this invisible comment.) -->
1. [First Step]
2. [Second Step]
3. [and so on...]
**Expected behavior:**
<!-- Describe what you expected to happen here. (You can remove this invisible comment.) -->
**Actual behavior:**
<!-- Describe what actually happens here. (You can remove this invisible comment.) -->
#### Additional Information
* Provide pictures or links to videos that clearly demonstrate the issue.
* See [Contributing to Marlin](https://github.com/MarlinFirmware/Marlin/blob/2.0.x/.github/contributing.md) for additional guidelines.
|
non_process
|
testing please follow the instructions below failure to do so may result in your issue being closed before reporting a bug test with the bugfix x branch to see whether the issue still exists get troubleshooting help from the marlin community to confirm it s a bug and not just a configuration error links at instructions fill out every section of the template below always attach configuration files regardless of whether you think they are involved read and understand marlin s code of conduct by filing an issue you are expected to comply with it including treating everyone with respect bug description configuration files required include a zip file containing configuration h and configuration adv h if you ve made any other modifications describe them in detail here steps to reproduce expected behavior actual behavior additional information provide pictures or links to videos that clearly demonstrate the issue see for additional guidelines
| 0
|
798,981
| 28,300,496,899
|
IssuesEvent
|
2023-04-10 05:22:17
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
[Nightly CI Failures] Failures detected for google-cloud-redis-v1beta1
|
type: bug priority: p1 nightly failure
|
At 2023-04-09 08:55:43 UTC, detected failures in google-cloud-redis-v1beta1 for: yard
report_key_8c94c8444343a70104016196cb9dfdee
|
1.0
|
[Nightly CI Failures] Failures detected for google-cloud-redis-v1beta1 - At 2023-04-09 08:55:43 UTC, detected failures in google-cloud-redis-v1beta1 for: yard
report_key_8c94c8444343a70104016196cb9dfdee
|
non_process
|
failures detected for google cloud redis at utc detected failures in google cloud redis for yard report key
| 0
|
5,597
| 8,454,024,057
|
IssuesEvent
|
2018-10-20 21:28:18
|
rchain/bounties
|
https://api.github.com/repos/rchain/bounties
|
closed
|
Improved Invoice Submission Process and Protocol
|
invoice-process zz-Operations
|
### Benefit to RChain
Currently in our bounty system invoices are required but are time consuming to prepare and error prone. We propose a process and protocol with similar security properties, better accuracy and reliability than manual preparation, and improved ease of use.
### Budget and Objective
What is the purpose of an invoice? It’s fundamentally a simple agreement between the Contributor and our Cooperative acknowledging the correctness of certain data and that completion of payment completes the voted rewards process. Since all of the data that goes into an invoice is known already in the rewardsApp and ram_registry, it makes sense that we should use our technology to assemble the data to the contributor, and have a simple action such as “Confirm/Dispute” finalize the invoice.
The proposed system relies on a simple 3 step protocol between Contributor and InvoiceApp.
1. InvoiceApp offers the Contributor a partial invoice including all invoice_data but not signed.
2. Contributor agrees (or disagrees) and indicates that by “signing”, which will be represented by a field “signed_by_contributor”
3. InvoiceApp reports to Contributor that the invoice has been paid, by including a transactionId, and notifies Contributor.
The detailed implementation of this protocol are described below. In fact we propose two implementations, the one discussed here (meant to be implemented immediately) and another to follow, a rholang based upgrade which we plan to release at the same time as Mercury. Blockchain implementation will require both stable mainnet AND signing key, aka Self Sovereign ID or Cooperative voting ID or rchain wallet.
For the current implementation we will continue to rely on email authentication. Step 1 will be implemented as an email message containing a private link to the InvoiceApp which is private to the Contributor and specific to the pay_period. Step 2 will be implemented as a click (agree/disagree) on the secure link to InvoiceApp. Step 3 will be implemented by email.
For full details see updated [Usage and Code Overview](https://docs.google.com/document/d/1-lPFVxnfdFEwz56cfJVm6CpbwYDPYC0CUAESeYdxULA/) or the original [InvoiceApp Specification](https://docs.google.com/document/d/1l3Q3Uq8XvFBZTouwL9L7LV0lisxn79s2Cu4s33yMqD8/)
source: https://github.com/whereyouatwimm/rchain-invoice
_Estimated Budget of Task:_ **$[11,400]** 92 story points
September management of system and design improvements: 34 story points ($5100)
Notes: goal is an InvoiceApp WORKING PROTOTYPE which may be used for September invoices by any Contributor who does not want to use the current manual system.
_Estimated Timeline Required to Complete the Task:_ **[23 days Working Prototype]**
_How will we measure completion?_ **[Tested by team, Available for trial in Sep Voting, Agreed to by Finance]**
Team:
@kovmargo, @golovach-ivan, @azazime , @allancto, @help_wanted
### Legal
_Task Submitter shall not submit Tasks that will involve RHOC being transacted in any manner that (i) jeopardizes RHOC’s status as a software access token or other relevant and applicable description of the RHOC as an “asset”—not a security— or (2) violates, in any manner, applicable U.S. Securities laws._
|
1.0
|
Improved Invoice Submission Process and Protocol - ### Benefit to RChain
Currently in our bounty system invoices are required but are time consuming to prepare and error prone. We propose a process and protocol with similar security properties, better accuracy and reliability than manual preparation, and improved ease of use.
### Budget and Objective
What is the purpose of an invoice? It’s fundamentally a simple agreement between the Contributor and our Cooperative acknowledging the correctness of certain data and that completion of payment completes the voted rewards process. Since all of the data that goes into an invoice is known already in the rewardsApp and ram_registry, it makes sense that we should use our technology to assemble the data to the contributor, and have a simple action such as “Confirm/Dispute” finalize the invoice.
The proposed system relies on a simple 3 step protocol between Contributor and InvoiceApp.
1. InvoiceApp offers the Contributor a partial invoice including all invoice_data but not signed.
2. Contributor agrees (or disagrees) and indicates that by “signing”, which will be represented by a field “signed_by_contributor”
3. InvoiceApp reports to Contributor that the invoice has been paid, by including a transactionId, and notifies Contributor.
The detailed implementation of this protocol is described below. In fact we propose two implementations, the one discussed here (meant to be implemented immediately) and another to follow, a rholang based upgrade which we plan to release at the same time as Mercury. Blockchain implementation will require both stable mainnet AND signing key, aka Self Sovereign ID or Cooperative voting ID or rchain wallet.
For the current implementation we will continue to rely on email authentication. Step 1 will be implemented as an email message containing a private link to the InvoiceApp which is private to the Contributor and specific to the pay_period. Step 2 will be implemented as a click (agree/disagree) on the secure link to InvoiceApp. Step 3 will be implemented by email.
For full details see updated [Usage and Code Overview](https://docs.google.com/document/d/1-lPFVxnfdFEwz56cfJVm6CpbwYDPYC0CUAESeYdxULA/) or the original [InvoiceApp Specification](https://docs.google.com/document/d/1l3Q3Uq8XvFBZTouwL9L7LV0lisxn79s2Cu4s33yMqD8/)
source: https://github.com/whereyouatwimm/rchain-invoice
_Estimated Budget of Task:_ **$[11,400]** 92 story points
September management of system and design improvements: 34 story points ($5100)
Notes: goal is an InvoiceApp WORKING PROTOTYPE which may be used for September invoices by any Contributor who does not want to use the current manual system.
_Estimated Timeline Required to Complete the Task:_ **[23 days Working Prototype]**
_How will we measure completion?_ **[Tested by team, Available for trial in Sep Voting, Agreed to by Finance]**
Team:
@kovmargo, @golovach-ivan, @azazime , @allancto, @help_wanted
### Legal
_Task Submitter shall not submit Tasks that will involve RHOC being transacted in any manner that (i) jeopardizes RHOC’s status as a software access token or other relevant and applicable description of the RHOC as an “asset”—not a security— or (2) violates, in any manner, applicable U.S. Securities laws._
|
process
|
improved invoice submission process and protocol benefit to rchain currently in our bounty system invoices are required but are time consuming to prepare and error prone we propose a process and protocol with similar security properties better accuracy and reliability than manual preparation and improved ease of use budget and objective what is the purpose of an invoice it’s fundamentally a simple agreement between the contributor and our cooperative acknowledging the correctness of certain data and that completion of payment completes the voted rewards process since all of the data that goes into an invoice is known already in the rewardsapp and ram registry it makes sense that we should use our technology to assemble the data to the contributor and have a simple action such as “confirm dispute” finalize the invoice the proposed system relies on a simple step protocol between contributor and invoiceapp invoiceapp offers the contributor a partial invoice including all invoice data but not signed contributor agrees or disagrees and indicates that by “signing” which will be represented by a field “signed by contributor” invoiceapp reports to contributor that the invoice has been paid by including a transactionid and notifies contributor the detailed implementation of this protocol is described below in fact we propose two implementations the one discussed here meant to be implemented immediately and another to follow a rholang based upgrade which we plan to release at the same time as mercury blockchain implementation will require both stable mainnet and signing key aka self sovereign id or cooperative voting id or rchain wallet for the current implementation we will continue to rely on email authentication step will be implemented as an email message containing a private link to the invoiceapp which is private to the contributor and specific to the pay period step will be implemented as a click agree disagree on the secure link to invoiceapp step will be implemented by
email for full details see updated or the original source estimated budget of task story points september management of system and design improvements story points notes goal is an invoiceapp working prototype which may be used for september invoices by any contributor who does not want to use the current manual system estimated timeline required to complete the task how will we measure completion team kovmargo golovach ivan azazime allancto help wanted legal task submitter shall not submit tasks that will involve rhoc being transacted in any manner that i jeopardizes rhoc’s status as a software access token or other relevant and applicable description of the rhoc as an “asset”—not a security— or violates in any manner applicable u s securities laws
| 1
|
4,795
| 7,689,140,333
|
IssuesEvent
|
2018-05-17 11:47:07
|
gvwilson/h2tp
|
https://api.github.com/repos/gvwilson/h2tp
|
opened
|
Ch06 Juha Sorva
|
Ch06 Process
|
- Deciding what to teach (use authentic tasks): This point here is the problematic one: how to provide authenticity when the learners are novices? How to follow the "phonicsy" advice from the previous chapter and still be authentic and motivating? Parsons problems and MCQ (and worked examples, even) aren’t the most authentic things. Perhaps you could discuss this tension a bit more somewhere in the chapter? I expect it’s something that many teachers (novice and expert) struggle with. (Cf. what I wrote in the previous chapter about recent work in CLT and the principles we used in "Research-Based Design of the First Weeks of CS1".)
|
1.0
|
Ch06 Juha Sorva - - Deciding what to teach (use authentic tasks): This point here is the problematic one: how to provide authenticity when the learners are novices? How to follow the "phonicsy" advice from the previous chapter and still be authentic and motivating? Parsons problems and MCQ (and worked examples, even) aren’t the most authentic things. Perhaps you could discuss this tension a bit more somewhere in the chapter? I expect it’s something that many teachers (novice and expert) struggle with. (Cf. what I wrote in the previous chapter about recent work in CLT and the principles we used in "Research-Based Design of the First Weeks of CS1".)
|
process
|
juha sorva deciding what to teach use authentic tasks this point here is the problematic one how to provide authenticity when the learners are novices how to follow the phonicsy advice from the previous chapter and still be authentic and motivating parsons problems and mcq and worked examples even aren’t the most authentic things perhaps you could discuss this tension a bit more somewhere in the chapter i expect it’s something that many teachers novice and expert struggle with cf what i wrote in the previous chapter about recent work in clt and the principles we used in research based design of the first weeks of
| 1
|
10,515
| 13,285,603,270
|
IssuesEvent
|
2020-08-24 08:23:44
|
prisma/prisma-engines
|
https://api.github.com/repos/prisma/prisma-engines
|
opened
|
Native types: allow specifying unsigned integer types on MySQL
|
engines/data model parser process/candidate team/engines
|
Integer column types on MySQL [can be defined as UNSIGNED](https://dev.mysql.com/doc/refman/8.0/en/integer-types.html), changing the range of acceptable values. This should be exposed on native integer types in the schema on MySQL.
|
1.0
|
Native types: allow specifying unsigned integer types on MySQL - Integer column types on MySQL [can be defined as UNSIGNED](https://dev.mysql.com/doc/refman/8.0/en/integer-types.html), changing the range of acceptable values. This should be exposed on native integer types in the schema on MySQL.
|
process
|
native types allow specifying unsigned integer types on mysql integer column types on mysql changing the range of acceptable values this should be exposed on native integer types in the schema on mysql
| 1
|
136,093
| 12,700,308,284
|
IssuesEvent
|
2020-06-22 16:07:35
|
root-project/web
|
https://api.github.com/repos/root-project/web
|
closed
|
Better "Get ROOT" section
|
documentation enhancement requires decision
|
The current user experience when trying to get ROOT on their computers is more confusing than it could be, and there is no mention of a few distribution channels. The "install ROOT" button on the homepage as well as the "Download" button in the top bar bring you to the [download section](https://root-project.github.io/web/download/) of the website, where users are confronted with a choice between a generic ROOT v6.20 link (which brings you to a page where downloads are only one of the sub-options), building the dev version from sources, generic nightlies link, docker, or building from sources again.
The proposal is to substitute `/download` with a refurbished `/install` section. When users land there, they see "Current ROOT stable version: v6.20" and a list of possible installation methods, each linking to more detailed instructions:
* download a pre-compiled binary
* any ROOT version, including nightly builds
* several Linux flavours, MacOS and Windows
* use ROOT's distributions on CVMFS
* any ROOT version, including nightly builds
* centos7, slc6
* use ROOT on CERN lxplus (do we want this? is it officially supported or just happens to be a recent ROOT version now)
* run ROOT in a Docker container
* compile ROOT from source (for developers and users requiring custom sets of options)
**Supported by the community**
* packages for Gentoo, Fedora and Arch
* homebrew package for MacOS
* conda packages for Linux and MacOS
Maintainers of community packages should be credited, but we should link to external resources for installation instruction as well as user support to make it clear who to report issues to.
|
1.0
|
Better "Get ROOT" section - The current user experience when trying to get ROOT on their computers is more confusing than it could be, and there is no mention of a few distribution channels. The "install ROOT" button on the homepage as well as the "Download" button in the top bar bring you to the [download section](https://root-project.github.io/web/download/) of the website, where users are confronted with a choice between a generic ROOT v6.20 link (which brings you to a page where downloads are only one of the sub-options), building the dev version from sources, generic nightlies link, docker, or building from sources again.
The proposal is to substitute `/download` with a refurbished `/install` section. When users land there, they see "Current ROOT stable version: v6.20" and a list of possible installation methods, each linking to more detailed instructions:
* download a pre-compiled binary
* any ROOT version, including nightly builds
* several Linux flavours, MacOS and Windows
* use ROOT's distributions on CVMFS
* any ROOT version, including nightly builds
* centos7, slc6
* use ROOT on CERN lxplus (do we want this? is it officially supported or just happens to be a recent ROOT version now)
* run ROOT in a Docker container
* compile ROOT from source (for developers and users requiring custom sets of options)
**Supported by the community**
* packages for Gentoo, Fedora and Arch
* homebrew package for MacOS
* conda packages for Linux and MacOS
Maintainers of community packages should be credited, but we should link to external resources for installation instruction as well as user support to make it clear who to report issues to.
|
non_process
|
better get root section the current user experience when trying to get root on their computers is more confusing than it could be and there is no mention of a few distribution channels the install root button on the homepage as well as the download button in the top bar bring you to the of the website where users are confronted with a choice between a generic root link which brings you to a page where downloads are only one of the sub options building the dev version from sources generic nightlies link docker or building from sources again the proposal is to substitute download with a refurbished install section when users land there they see current root stable version and a list of possible installation methods each linking to more detailed instructions download a pre compiled binary any root version including nightly builds several linux flavours macos and windows use root s distributions on cvmfs any root version including nightly builds use root on cern lxplus do we want this is it officially supported or just happens to be a recent root version now run root in a docker container compile root from source for developers and users requiring custom sets of options supported by the community packages for gentoo fedora and arch homebrew package for macos conda packages for linux and macos maintainers of community packages should be credited but we should link to external resources for installation instruction as well as user support to make it clear who to report issues to
| 0
|
96,223
| 10,926,291,330
|
IssuesEvent
|
2019-11-22 14:25:56
|
roundcube/roundcubemail
|
https://api.github.com/repos/roundcube/roundcubemail
|
closed
|
Roundcube needs the ctype extension
|
C: Documentation C: Installer bug
|
The [install requirements](https://github.com/roundcube/roundcubemail/wiki/Install-Requirements) page and the installation process don't mention or check for the ctype extension as a requirement. Some mail messages will fail to be read. The log is:
```
PHP Fatal error: Uncaught Error: Call to undefined function Masterminds\HTML5\Parser\ctype_alpha()
```
php: 7.3.11
|
1.0
|
rouncube need ctype extension - The [install requirement](https://github.com/roundcube/roundcubemail/wiki/Install-Requirements) or installation process didn't mention/didn't check ctype extension as requirement. Some mail message will failed to read. The log is:
```
PHP Fatal error: Uncaught Error: Call to undefined function Masterminds\HTML5\Parser\ctype_alpha()
```
php: 7.3.11
|
non_process
|
rouncube need ctype extension the or installation process didn t mention didn t check ctype extension as requirement some mail message will failed to read the log is php fatal error uncaught error call to undefined function masterminds parser ctype alpha php
| 0
|
67,152
| 8,080,944,207
|
IssuesEvent
|
2018-08-08 00:28:37
|
techlahoma/user-groups
|
https://api.github.com/repos/techlahoma/user-groups
|
closed
|
Book StarSpace46 for Design Tech OKC | 2018-08-29
|
UG/Design Tech OKC scheduling
|
What: Design Tech OKC: Big Meetup
When: 08/29/2018 11:30 am
Where: StarSpace 46
Check meetup for RSVP count: https://www.meetup.com/Design-Tech-OKC/events/253247709/
cc @nexocentric @vianka-a
|
1.0
|
Book StarSpace46 for Design Tech OKC | 2018-08-29 - What: Design Tech OKC: Big Meetup
When: 08/29/2018 11:30 am
Where: StarSpace 46
Check meetup for RSVP count: https://www.meetup.com/Design-Tech-OKC/events/253247709/
cc @nexocentric @vianka-a
|
non_process
|
book for design tech okc what design tech okc big meetup when am where starspace check meetup for rsvp count cc nexocentric vianka a
| 0
|
40,240
| 8,755,304,838
|
IssuesEvent
|
2018-12-14 14:30:29
|
AlexBolot/PopulationSimulator
|
https://api.github.com/repos/AlexBolot/PopulationSimulator
|
reopened
|
Créer des classes Finder
|
new code structure
|
Une classe abstraite `Finder`
Une première classe fille `PersonFinder`
|
1.0
|
Créer des classes Finder - Une classe abstraite `Finder`
Une première classe fille `PersonFinder`
|
non_process
|
créer des classes finder une classe abstraite finder une première classe fille personfinder
| 0
|
8,339
| 11,497,799,889
|
IssuesEvent
|
2020-02-12 10:42:20
|
18F/tts-tech-portfolio
|
https://api.github.com/repos/18F/tts-tech-portfolio
|
closed
|
Update Tech Portfolio ProjectBoard.md doc
|
Jan2020-inperson epic: internal workflow/procedures workflow: process
|
## Background information
The TTS Tech Portfolio has initially laid out the workflow procedures in the document and now that the flow is being executed, there are some holes in understanding and documentation.
supporting docs
[Tech Portfolio workflow proposal - 2020-01-03](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit#
)
## User stories
As a TTS Tech Portfolio member, i want to clearly understand what I am doing when I am running a ceremony
As a TTS Tech Portfolio member, I have a certain idea of how things should go and I want to provide those ideas to all.
## Implementation
- [ ] clarify [priorities](https://github.com/18F/tts-tech-portfolio/issues/289)
- [ ] clarify [grooming](https://github.com/18F/tts-tech-portfolio/issues/314) sections in doc
- [ ] clarity around [assignment](https://github.com/18F/tts-tech-portfolio/issues/257)
- [ ] revisit size [labeling](https://github.com/18F/tts-tech-portfolio/issues/296)
- [ ] clarify when labeling [happens](https://gsa-tts.slack.com/archives/GP559GCLD/p1578085991010000)
- [ ] resolve conflict of moving things from New Issues and Icebox/backlog during the Planning ceremony
- [ ] [clean up epics](https://github.com/18F/tts-tech-portfolio/issues/284)
- [ ] do we need Entrance Criteria _and_ Exit Criteria for columns, or could we [consolidate to one](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit#bookmark=id.zcz3m725yy88)?
- [x] feedback label
Overcome by events
- [ ] clarify that epics should be ordered before ordering Backlog
- [ ] when do cards move from icebox to backlog? Per [comment](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit?disco=AAAAEG1nsUc)
- [ ] entrance criteria for Backlog Per [comment](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit?disco=AAAAEG1nsUc)
## Acceptance criteria
- Agree on the tasks that are no longer relevant
- Demo updates at Review
|
1.0
|
Update Tech Portfolio ProjectBoard.md doc - ## Background information
The TTS Tech Portfolio has initially laid out the workflow procedures in the document and now that the flow is being executed, there are some holes in understanding and documentation.
supporting docs
[Tech Portfolio workflow proposal - 2020-01-03](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit#
)
## User stories
As a TTS Tech Portfolio member, i want to clearly understand what I am doing when I am running a ceremony
As a TTS Tech Portfolio member, I have a certain idea of how things should go and I want to provide those ideas to all.
## Implementation
- [ ] clarify [priorities](https://github.com/18F/tts-tech-portfolio/issues/289)
- [ ] clarify [grooming](https://github.com/18F/tts-tech-portfolio/issues/314) sections in doc
- [ ] clarity around [assignment](https://github.com/18F/tts-tech-portfolio/issues/257)
- [ ] revisit size [labeling](https://github.com/18F/tts-tech-portfolio/issues/296)
- [ ] clarify when labeling [happens](https://gsa-tts.slack.com/archives/GP559GCLD/p1578085991010000)
- [ ] resolve conflict of moving things from New Issues and Icebox/backlog during the Planning ceremony
- [ ] [clean up epics](https://github.com/18F/tts-tech-portfolio/issues/284)
- [ ] do we need Entrance Criteria _and_ Exit Criteria for columns, or could we [consolidate to one](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit#bookmark=id.zcz3m725yy88)?
- [x] feedback label
Overcome by events
- [ ] clarify that epics should be ordered before ordering Backlog
- [ ] when do cards move from icebox to backlog? Per [comment](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit?disco=AAAAEG1nsUc)
- [ ] entrance criteria for Backlog Per [comment](https://docs.google.com/document/d/1GF4BR9X5lhOJq2j_sMDgzdqrl_WsgGadgnFlsstsIJk/edit?disco=AAAAEG1nsUc)
## Acceptance criteria
- Agree on the tasks that are no longer relevant
- Demo updates at Review
|
process
|
update tech portfolio projectboard md doc background information the tts tech portfolio has initially laid out the workflow procedures in the document and now that the flow is being executed there are some holes in understanding and documentation supporting docs user stories as a tts tech portfolio member i want to clearly understand what i am doing when i am running a ceremony as a tts tech portfolio member i have a certain idea of how things should go and i want to provide those ideas to all implementation clarify clarify sections in doc clarity around revisit size clarify when labeling resolve conflict of moving things from new issues and icebox backlog during the planning ceremony do we need entrance criteria and exit criteria for columns or could we feedback label overcome by events clarify that epics should be ordered before ordering backlog when do cards move from icebox to backlog per entrance criteria for backlog per acceptance criteria agree on the tasks that are no longer relevant demo updates at review
| 1
|
32,606
| 12,132,649,782
|
IssuesEvent
|
2020-04-23 07:40:39
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Additional Information
|
Pri2 assigned-to-author doc-enhancement security-center/svc triaged
|
Hi team
Your documentation doesnt go over types of scans you can run as part of this offering? How can you run them? Anyway of customising the scanning mechanisms? Also in regards to the agent itself, are there ways to tune its performance?
Many thanks
Sam
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: da20ded6-a387-fac7-795d-ebf02f0d35d3
* Version Independent ID: 67d85a3c-97a7-d997-705c-dbf954ac75b0
* Content: [Vulnerability assessment in Azure Security Center](https://docs.microsoft.com/en-us/azure/security-center/security-center-vulnerability-assessment-recommendations#feedback)
* Content Source: [articles/security-center/security-center-vulnerability-assessment-recommendations.md](https://github.com/Microsoft/azure-docs/blob/master/articles/security-center/security-center-vulnerability-assessment-recommendations.md)
* Service: **security-center**
* GitHub Login: @memildin
* Microsoft Alias: **memildin**
|
True
|
Additional Information - Hi team
Your documentation doesnt go over types of scans you can run as part of this offering? How can you run them? Anyway of customising the scanning mechanisms? Also in regards to the agent itself, are there ways to tune its performance?
Many thanks
Sam
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: da20ded6-a387-fac7-795d-ebf02f0d35d3
* Version Independent ID: 67d85a3c-97a7-d997-705c-dbf954ac75b0
* Content: [Vulnerability assessment in Azure Security Center](https://docs.microsoft.com/en-us/azure/security-center/security-center-vulnerability-assessment-recommendations#feedback)
* Content Source: [articles/security-center/security-center-vulnerability-assessment-recommendations.md](https://github.com/Microsoft/azure-docs/blob/master/articles/security-center/security-center-vulnerability-assessment-recommendations.md)
* Service: **security-center**
* GitHub Login: @memildin
* Microsoft Alias: **memildin**
|
non_process
|
additional information hi team your documentation doesnt go over types of scans you can run as part of this offering how can you run them anyway of customising the scanning mechanisms also in regards to the agent itself are there ways to tune its performance many thanks sam document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service security center github login memildin microsoft alias memildin
| 0
|
71,680
| 15,207,900,826
|
IssuesEvent
|
2021-02-17 01:17:25
|
billmcchesney1/hadoop
|
https://api.github.com/repos/billmcchesney1/hadoop
|
opened
|
WS-2018-0074 (Medium) detected in bl-0.9.4.tgz
|
security vulnerability
|
## WS-2018-0074 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-0.9.4.tgz</b></p></summary>
<p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p>
<p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.9.4.tgz">https://registry.npmjs.org/bl/-/bl-0.9.4.tgz</a></p>
<p>Path to dependency file: hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json</p>
<p>Path to vulnerable library: hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/leek/node_modules/request/node_modules/bl/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-1.13.14.tgz (Root Library)
- leek-0.0.18.tgz
- request-2.53.0.tgz
- :x: **bl-0.9.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of bl before 0.9.5 and 1.0.1 are vulnerable to memory exposure.
bl.append(number) in the affected bl versions passes a number to Buffer constructor, appending a chunk of uninitialized memory
<p>Publish Date: 2018-04-25
<p>URL: <a href=https://github.com/rvagg/bl/pull/22>WS-2018-0074</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rvagg/bl/pull/22">https://github.com/rvagg/bl/pull/22</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution: 0.9.5,1.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"bl","packageVersion":"0.9.4","packageFilePaths":["/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"ember-cli:1.13.14;leek:0.0.18;request:2.53.0;bl:0.9.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.9.5,1.0.1"}],"baseBranches":["trunk"],"vulnerabilityIdentifier":"WS-2018-0074","vulnerabilityDetails":"Versions of bl before 0.9.5 and 1.0.1 are vulnerable to memory exposure.\n\nbl.append(number) in the affected bl versions passes a number to Buffer constructor, appending a chunk of uninitialized memory","vulnerabilityUrl":"https://github.com/rvagg/bl/pull/22","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2018-0074 (Medium) detected in bl-0.9.4.tgz - ## WS-2018-0074 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-0.9.4.tgz</b></p></summary>
<p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p>
<p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.9.4.tgz">https://registry.npmjs.org/bl/-/bl-0.9.4.tgz</a></p>
<p>Path to dependency file: hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json</p>
<p>Path to vulnerable library: hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/leek/node_modules/request/node_modules/bl/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-1.13.14.tgz (Root Library)
- leek-0.0.18.tgz
- request-2.53.0.tgz
- :x: **bl-0.9.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of bl before 0.9.5 and 1.0.1 are vulnerable to memory exposure.
bl.append(number) in the affected bl versions passes a number to Buffer constructor, appending a chunk of uninitialized memory
<p>Publish Date: 2018-04-25
<p>URL: <a href=https://github.com/rvagg/bl/pull/22>WS-2018-0074</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rvagg/bl/pull/22">https://github.com/rvagg/bl/pull/22</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution: 0.9.5,1.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"bl","packageVersion":"0.9.4","packageFilePaths":["/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"ember-cli:1.13.14;leek:0.0.18;request:2.53.0;bl:0.9.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.9.5,1.0.1"}],"baseBranches":["trunk"],"vulnerabilityIdentifier":"WS-2018-0074","vulnerabilityDetails":"Versions of bl before 0.9.5 and 1.0.1 are vulnerable to memory exposure.\n\nbl.append(number) in the affected bl versions passes a number to Buffer constructor, appending a chunk of uninitialized memory","vulnerabilityUrl":"https://github.com/rvagg/bl/pull/22","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws medium detected in bl tgz ws medium severity vulnerability vulnerable library bl tgz buffer list collect buffers and access with a standard readable buffer interface streamable too library home page a href path to dependency file hadoop hadoop yarn project hadoop yarn hadoop yarn ui src main webapp package json path to vulnerable library hadoop hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules leek node modules request node modules bl package json dependency hierarchy ember cli tgz root library leek tgz request tgz x bl tgz vulnerable library found in head commit a href found in base branch trunk vulnerability details versions of bl before and are vulnerable to memory exposure bl append number in the affected bl versions passes a number to buffer constructor appending a chunk of uninitialized memory publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree ember cli leek request bl isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier ws vulnerabilitydetails versions of bl before and are vulnerable to memory exposure n nbl append number in the affected bl versions passes a number to buffer constructor appending a chunk of uninitialized memory vulnerabilityurl
| 0
|
632,436
| 20,196,922,251
|
IssuesEvent
|
2022-02-11 11:31:26
|
QCDIS/NaaVRE
|
https://api.github.com/repos/QCDIS/NaaVRE
|
closed
|
withParam value could not be parsed as a JSON list
|
bug Priority: Hight time: 1
|
**Describe the bug**
Executing the workflow from laserfarm the Fetch Laz Files-1 fails with:
'withParam value could not be parsed as a JSON list: {"laz_files": ["C_18HZ2.LAZ", "C_19HZ2.LAZ", "C_01GN2.LAZ", "C_50GZ2.LAZ"]}: json: cannot unmarshal object into Go value of type []v1alpha1.Item'
|
1.0
|
withParam value could not be parsed as a JSON list - **Describe the bug**
Executing the workflow from laserfarm the Fetch Laz Files-1 fails with:
'withParam value could not be parsed as a JSON list: {"laz_files": ["C_18HZ2.LAZ", "C_19HZ2.LAZ", "C_01GN2.LAZ", "C_50GZ2.LAZ"]}: json: cannot unmarshal object into Go value of type []v1alpha1.Item'
|
non_process
|
withparam value could not be parsed as a json list describe the bug executing the workflow from laserfarm the fetch laz files fails with withparam value could not be parsed as a json list laz files json cannot unmarshal object into go value of type item
| 0
|
218,946
| 24,419,366,835
|
IssuesEvent
|
2022-10-05 18:50:57
|
jgeraigery/experian-java---Sample-Scan
|
https://api.github.com/repos/jgeraigery/experian-java---Sample-Scan
|
closed
|
jackson-databind-2.12.6.1.jar: 2 vulnerabilities (highest severity is: 7.5) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.12.6.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: /sterxml/jackson/core/jackson-databind/2.12.6.1/jackson-databind-2.12.6.1.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java---Sample-Scan/commit/4a89fb617192079a7eeea61093fb9f469a760b60">4a89fb617192079a7eeea61093fb9f469a760b60</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-42004](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | jackson-databind-2.12.6.1.jar | Direct | com.fasterxml.jackson.core:jackson-databind:2.13.4 | ✅ |
| [CVE-2022-42003](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | jackson-databind-2.12.6.1.jar | Direct | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-42004</summary>
### Vulnerable Library - <b>jackson-databind-2.12.6.1.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: /sterxml/jackson/core/jackson-databind/2.12.6.1/jackson-databind-2.12.6.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.12.6.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java---Sample-Scan/commit/4a89fb617192079a7eeea61093fb9f469a760b60">4a89fb617192079a7eeea61093fb9f469a760b60</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004>CVE-2022-42004</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-42003</summary>
### Vulnerable Library - <b>jackson-databind-2.12.6.1.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: /sterxml/jackson/core/jackson-databind/2.12.6.1/jackson-databind-2.12.6.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.12.6.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java---Sample-Scan/commit/4a89fb617192079a7eeea61093fb9f469a760b60">4a89fb617192079a7eeea61093fb9f469a760b60</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
jackson-databind-2.12.6.1.jar: 2 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.12.6.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: /sterxml/jackson/core/jackson-databind/2.12.6.1/jackson-databind-2.12.6.1.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java---Sample-Scan/commit/4a89fb617192079a7eeea61093fb9f469a760b60">4a89fb617192079a7eeea61093fb9f469a760b60</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-42004](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | jackson-databind-2.12.6.1.jar | Direct | com.fasterxml.jackson.core:jackson-databind:2.13.4 | ✅ |
| [CVE-2022-42003](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | jackson-databind-2.12.6.1.jar | Direct | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-42004</summary>
### Vulnerable Library - <b>jackson-databind-2.12.6.1.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: /sterxml/jackson/core/jackson-databind/2.12.6.1/jackson-databind-2.12.6.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.12.6.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java---Sample-Scan/commit/4a89fb617192079a7eeea61093fb9f469a760b60">4a89fb617192079a7eeea61093fb9f469a760b60</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004>CVE-2022-42004</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-42003</summary>
### Vulnerable Library - <b>jackson-databind-2.12.6.1.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: /sterxml/jackson/core/jackson-databind/2.12.6.1/jackson-databind-2.12.6.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.12.6.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java---Sample-Scan/commit/4a89fb617192079a7eeea61093fb9f469a760b60">4a89fb617192079a7eeea61093fb9f469a760b60</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_process
|
jackson databind jar vulnerabilities highest severity is autoclosed vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file mavenworkspace bis services lib bis services base pom xml path to vulnerable library sterxml jackson core jackson databind jackson databind jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high jackson databind jar direct com fasterxml jackson core jackson databind high jackson databind jar direct n a details cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file mavenworkspace bis services lib bis services base pom xml path to vulnerable library sterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file
mavenworkspace bis services lib bis services base pom xml path to vulnerable library sterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href rescue worker helmet automatic remediation is available for this issue
| 0
|
512,405
| 14,896,088,294
|
IssuesEvent
|
2021-01-21 09:57:11
|
Snapmaker/Luban
|
https://api.github.com/repos/Snapmaker/Luban
|
closed
|
[BUG] Luban blanks out on connecting via wifi
|
Priority: High Type: Fix
|
When I try to connect Luban via wifi to my Snapmaker A250, first of all, it almost never finds the printer by searching, I have to enter the IP address manually. When I hit connect, the normal screen appears with the message to confirm on the printer. As soon as I confirm the connection, Luban blanks completely out, meaning, I only have a white screen and I have to close the window and reopen it.
I am using Luban 3.12.3 on a MacBook Pro with macOS 10.15.5
As I saw in the forum, this bug also is present on Windows: [forum post](https://forum.snapmaker.com/t/luban-3-12-3-problems/12667/5?u=rkreienbuehl)
|
1.0
|
[BUG] Luban blanks out on connecting via wifi - When I try to connect Luban via wifi to my Snapmaker A250, first of all, it almost never finds the printer by searching, I have to enter the IP address manually. When I hit connect, the normal screen appears with the message to confirm on the printer. As soon as I confirm the connection, Luban blanks completely out, meaning, I only have a white screen and I have to close the window and reopen it.
I am using Luban 3.12.3 on a MacBook Pro with macOS 10.15.5
As I saw in the forum, this bug also is present on Windows: [forum post](https://forum.snapmaker.com/t/luban-3-12-3-problems/12667/5?u=rkreienbuehl)
|
non_process
|
luban blanks out on connecting via wifi when i try to connect luban via wifi to my snapmaker first of all it almost never finds the printer by searching i have to enter the ip address manually when i hit connect the normal screen appears with the message to confirm on the printer as soon as i confirm the connection luban blanks completely out meaning i only have a white screen and i have to close the window and reopen it i am using luban on a macbook pro with macos as i saw in the forum this bug also is present on windows
| 0
|
21,401
| 29,269,328,855
|
IssuesEvent
|
2023-05-24 00:17:17
|
ethereum/EIPs
|
https://api.github.com/repos/ethereum/EIPs
|
closed
|
How do we handle ancient Solidity versions in old final EIPs?
|
w-stale question r-process r-eips
|
### Proposed Change
For example, in EIP-20, we have a very old Solidity version and the interfaces won't compile with recent Solidity versions.
We probably shouldn't rewrite old EIPs, but having our standards rot is also less than useful.
Some ideas:
- Publish a new EIP with the same text, and updated Solidity.
- Put a warning in the rendered Jekyll.
|
1.0
|
How do we handle ancient Solidity versions in old final EIPs? - ### Proposed Change
For example, in EIP-20, we have a very old Solidity version and the interfaces won't compile with recent Solidity versions.
We probably shouldn't rewrite old EIPs, but having our standards rot is also less than useful.
Some ideas:
- Publish a new EIP with the same text, and updated Solidity.
- Put a warning in the rendered Jekyll.
|
process
|
how do we handle ancient solidity versions in old final eips proposed change for example in eip we have a very old solidity version and the interfaces won t compile with recent solidity versions we probably shouldn t rewrite old eips but having our standards rot is also less than useful some ideas publish a new eip with the same text and updated solidity put a warning in the rendered jekyll
| 1
|
18,958
| 24,920,800,112
|
IssuesEvent
|
2022-10-30 23:11:39
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Witchita from "Love"
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Witchita
Type (film/tv show): TV Show
Film or show in which it appears: Love
Is the parent film/show streaming anywhere? Yes, Netflix
About when in the parent film/show does it appear?
Various points
Actual footage of the film/show can be seen (yes/no)?
Yes, in many episodes (S2E6 at the beginning).
Also, here is just a fake "trailer" from the show for the "new" season of Witchita: https://www.facebook.com/watch/?v=1295123593930927
|
1.0
|
Add Witchita from "Love" - Please add as much of the following info as you can:
Title: Witchita
Type (film/tv show): TV Show
Film or show in which it appears: Love
Is the parent film/show streaming anywhere? Yes, Netflix
About when in the parent film/show does it appear?
Various points
Actual footage of the film/show can be seen (yes/no)?
Yes, in many episodes (S2E6 at the beginning).
Also, here is just a fake "trailer" from the show for the "new" season of Witchita: https://www.facebook.com/watch/?v=1295123593930927
|
process
|
add witchita from love please add as much of the following info as you can title witchita type film tv show tv show film or show in which it appears love is the parent film show streaming anywhere yes netflix about when in the parent film show does it appear various points actual footage of the film show can be seen yes no yes in many episodes at the beginning also here is just a fake trailer from the show for the new season of witchita
| 1
|
14,680
| 17,797,223,044
|
IssuesEvent
|
2021-09-01 00:39:43
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Synchronize dashboards and rules to Grafana Cloud
|
enhancement P2 process
|
### Problem
We have Grafana dashboards, Loki rules, and Alertmanager rules that we currently automate to our Kubernetes cluster. After consolidating metrics and logs to Grafana Cloud, we need a way to synchronize those same artifacts to Grafana Cloud.
### Solution
- Create a workflow that does the following on GA tag:
- Use [cortex-rules-action](https://github.com/grafana/cortex-rules-action) to synchronize alert rules
- Use [cortex-rules-action](https://github.com/grafana/cortex-rules-action) to synchronize loki rules
- Use [Grafana REST API]( https://grafana.com/docs/grafana-cloud/how-do-i/find-and-use-dashboards/) to upload dashboards
### Alternatives
Manually copy and paste
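The dashboard-upload step could be sketched roughly like this (illustrative Python only — the real workflow would run in CI; the base URL and token are placeholders, and `POST /api/dashboards/db` is Grafana's documented dashboard endpoint):

```python
import json
import urllib.request


def build_dashboard_payload(dashboard: dict, overwrite: bool = True) -> bytes:
    """Wrap a dashboard JSON model in the body that Grafana's
    POST /api/dashboards/db endpoint expects."""
    return json.dumps({"dashboard": dashboard, "overwrite": overwrite}).encode()


def upload_dashboard(base_url: str, token: str, dashboard: dict) -> urllib.request.Request:
    """Build (not send) the authenticated request; the CI job would pass
    the result to urllib.request.urlopen for each dashboard file."""
    return urllib.request.Request(
        f"{base_url}/api/dashboards/db",
        data=build_dashboard_payload(dashboard),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Synchronizing the Loki and Alertmanager rules would still go through cortex-rules-action as listed above; this sketch covers only the dashboard half.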
|
1.0
|
Synchronize dashboards and rules to Grafana Cloud - ### Problem
We have Grafana dashboards, Loki rules, and Alertmanager rules that we currently automate to our Kubernetes cluster. After consolidating metrics and logs to Grafana Cloud, we need a way to synchronize those same artifacts to Grafana Cloud.
### Solution
- Create a workflow that does the following on GA tag:
- Use [cortex-rules-action](https://github.com/grafana/cortex-rules-action) to synchronize alert rules
- Use [cortex-rules-action](https://github.com/grafana/cortex-rules-action) to synchronize loki rules
- Use [Grafana REST API]( https://grafana.com/docs/grafana-cloud/how-do-i/find-and-use-dashboards/) to upload dashboards
### Alternatives
Manually copy and paste
|
process
|
synchronize dashboards and rules to grafana cloud problem we have grafana dashboards loki rules and alertmanager rules that we currently automate to our kubernetes cluster after consolidating metrics and logs to grafana cloud we need a way to synchronize those same artifacts to grafana cloud solution create a workflow that does the following on ga tag use to synchronize alert rules use to synchronize loki rules use to upload dashboards alternatives manually copy and paste
| 1
|
46,469
| 11,844,766,091
|
IssuesEvent
|
2020-03-24 06:45:52
|
jiwuming/jiwuming.github.io
|
https://api.github.com/repos/jiwuming/jiwuming.github.io
|
opened
|
Using the Xcode build command to package and fir-cli to auto-publish a project | 岐
|
/2018/08/05/xcodebuild/ gitalk
|
http://jiwuming.com/2018/08/05/xcodebuild/
Since the project I've been developing recently has multiple environments and uses enterprise distribution of the app to fir, building a package every time is a real hassle and requires constantly swapping certificates and provisioning profiles, so I decided to run the packaging steps with a shell script. This is a record of the process.
|
1.0
|
Using the Xcode build command to package and fir-cli to auto-publish a project | 岐 - http://jiwuming.com/2018/08/05/xcodebuild/
Since the project I've been developing recently has multiple environments and uses enterprise distribution of the app to fir, building a package every time is a real hassle and requires constantly swapping certificates and provisioning profiles, so I decided to run the packaging steps with a shell script. This is a record of the process.
|
non_process
|
using the xcode build command to package and fir cli to auto publish a project 岐 since the project being developed recently has multiple environments and uses enterprise distribution of the app to fir building a package each time is too much hassle and certificates and provisioning profiles must be swapped constantly so the packaging steps were moved to a shell script this is a record of the process
| 0
|
20,648
| 27,324,590,287
|
IssuesEvent
|
2023-02-25 00:04:27
|
cse442-at-ub/project_s23-team-infinity
|
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
|
reopened
|
Check the basics of React
|
Processing Task Sprint 1
|
1. Install the React environment.
2. Try the js syntax.
3. Learn the components of React.
4. Try to write a simple login page with React.
|
1.0
|
Check the basics of React - 1. Install the React environment.
2. Try the js syntax.
3. Learn the components of React.
4. Try to write a simple login page with React.
|
process
|
check the basics of react install the react environment try the js syntax learn the components of react try to write a simple login page with react
| 1
|
1,579
| 4,173,790,137
|
IssuesEvent
|
2016-06-21 11:56:45
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
modulation by host of viral RNA binding
|
binding multiorganism processes New term request PARL-UCL viruses
|
Another new term request for PMID: 25116364 to capture that human NUCKS1 alters the binding of viral Tat protein to the TAR (trans-activating response) RNA element:
modulation by host of viral RNA binding ; GO:NEW
is_a: modulation by host of viral molecular function ; GO:0044868
is_a: regulation of RNA binding ; GO:1905214
A process in which a host organism modulates the frequency, rate or extent of a viral protein binding to RNA.
PMID:25116364, GOC:PARL, GOC:bf
Will add in, if no objections by @dosumis or anyone else.
|
1.0
|
modulation by host of viral RNA binding -
Another new term request for PMID: 25116364 to capture that human NUCKS1 alters the binding of viral Tat protein to the TAR (trans-activating response) RNA element:
modulation by host of viral RNA binding ; GO:NEW
is_a: modulation by host of viral molecular function ; GO:0044868
is_a: regulation of RNA binding ; GO:1905214
A process in which a host organism modulates the frequency, rate or extent of a viral protein binding to RNA.
PMID:25116364, GOC:PARL, GOC:bf
Will add in, if no objections by @dosumis or anyone else.
|
process
|
modulation by host of viral rna binding another new term request for pmid to capture that human alters the binding of viral tat protein to the tar trans activating response rna element modulation by host of viral rna binding go new is a modulation by host of viral molecular function go is a regulation of rna binding go a process in which a host organism modulates the frequency rate or extent of a viral protein binding to rna pmid goc parl goc bf will add in if no objections by dosumis or anyone else
| 1
|
3,794
| 6,776,181,232
|
IssuesEvent
|
2017-10-27 16:50:03
|
mattermost/mattermost-developer-documentation
|
https://api.github.com/repos/mattermost/mattermost-developer-documentation
|
closed
|
Need better way to commit build
|
process
|
Right now you need to run `hugo` in the `/site` directory which will dump the built site into `/docs` which is pretty messy and will likely make rebasing a pain.
|
1.0
|
Need better way to commit build - Right now you need to run `hugo` in the `/site` directory which will dump the built site into `/docs` which is pretty messy and will likely make rebasing a pain.
|
process
|
need better way to commit build right now you need to run hugo in the site directory which will dump the built site into docs which is pretty messy and will likely make rebasing a pain
| 1
|
10,757
| 13,549,206,167
|
IssuesEvent
|
2020-09-17 07:51:29
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
Add `upcase` and `downcase` functions to remap syntax
|
domain: mapping domain: processing type: enhancement
|
We need to add an `upcase` function to the new remap syntax.
## Example
Given the following event:
```json
{
"message": "Hello world"
}
```
And this remap instruction set:
```
.message = upcase(.message)
```
It should produce an event like:
```json
{
"message": "HELLO WORLD"
}
```
## Requirements
- [x] Add an `upcase` function that operates on strings only.
- [x] Add an `downcase` function that operates on strings only.
- [ ] Throw an error if invalid value type is passed.
- [ ] Document these functions.
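A minimal Python sketch of the intended semantics (illustrative only — Vector's actual remap functions are implemented in Rust):

```python
def upcase(value):
    """Uppercase a string; reject other types, mirroring the
    requirement to throw on invalid value types."""
    if not isinstance(value, str):
        raise TypeError(f"upcase expects a string, got {type(value).__name__}")
    return value.upper()


def downcase(value):
    """Lowercase a string, with the same type check as upcase."""
    if not isinstance(value, str):
        raise TypeError(f"downcase expects a string, got {type(value).__name__}")
    return value.lower()


# Applying the instruction `.message = upcase(.message)` to the example event:
event = {"message": "Hello world"}
event["message"] = upcase(event["message"])  # -> "HELLO WORLD"
```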
|
1.0
|
Add `upcase` and `downcase` functions to remap syntax - We need to add an `upcase` function to the new remap syntax.
## Example
Given the following event:
```json
{
"message": "Hello world"
}
```
And this remap instruction set:
```
.message = upcase(.message)
```
It should produce an event like:
```json
{
"message": "HELLO WORLD"
}
```
## Requirements
- [x] Add an `upcase` function that operates on strings only.
- [x] Add an `downcase` function that operates on strings only.
- [ ] Throw an error if invalid value type is passed.
- [ ] Document these functions.
|
process
|
add upcase and downcase functions to remap syntax we need to add an upcase function to the new remap syntax example given the following event json message hello world and this remap instruction set message upcase message it should produce an event like json message hello world requirements add an upcase function that operates on strings only add an downcase function that operates on strings only throw an error if invalid value type is passed document these functions
| 1
|
11,478
| 14,344,778,875
|
IssuesEvent
|
2020-11-28 16:01:25
|
ontop/ontop
|
https://api.github.com/repos/ontop/ontop
|
opened
|
Support anonymous blank nodes in OBDA mappings
|
status: requested topic: mapping processing type: enhancement
|
Feature requested by J. Van Noten on the mailing list
> In the document on SOSA and SSN (https://www.w3.org/TR/vocab-ssn/#iphone_barometer-sosa), I find the following example fragment for the creation of an individual:
> ```
> <Observation/83985> a sosa:Observation ;
> sosa:hasFeatureOfInterest <apartment/134> ;
> sosa:hasResult [
> a qudt-1-1:QuantityValue ;
> qudt-1-1:unit qudt-unit-1-1:DegreeCelsius ;
> qudt-1-1:numericValue "22.4"^^xsd:double ] .
> ```
> In this example, the individual of QuantityValue is anonymous: no specific IRI required.
>
> I have a similar situation, where I want to take those values from a database.
> Theoretically, that would lead to the following target part of an OBDA mapping:
> ```
> :observation/{observationID} a sosa:Observation ;
> sosa:hasFeatureOfInterest :apartment/{apartmentID} ;
> sosa:hasResult [
> a qudt-1-1:QuantityValue ;
> qudt-1-1:unit qudt-unit-1-1:DegreeCelsius ;
> qudt-1-1:numericValue {observedTemperature}^^xsd:double ] .
> ```
>
> Unfortunately, this syntax does not seem to be allowed.
> The only solution I found is to split this in two mappings (again, I show only the target part):
> ```
> :observation/{observationID}> a sosa:Observation ;
> sosa:hasFeatureOfInterest <apartment/{apartmentID}> ;
> sosa:hasResult :value/{observedTemperature} .
>
> :value/{observedTemperature} a qudt-1-1:QuantityValue ;
> qudt-1-1:unit qudt-unit-1-1:DegreeCelsius ;
> qudt-1-1:numericValue {observedTemperature}^^xsd:double .
> ```
> or alternatively, both written in one target specification.
>
To make it easier to implement at the parser level, we could only support anonymous blank nodes as objects, not as subjects.
The parser could create a blank node template with a randomly generated prefix that would take as variables the columns appearing in the subject and in the properties and objects inside the anonymous blank node block.
Note that R2RML does not support this feature.
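The parser behavior sketched above — a blank-node template with a randomly generated prefix, taking as variables the columns referenced inside the anonymous block — could look like this (hypothetical illustration; Ontop's real parser is Java and the names here are made up):

```python
import re
import uuid


def blank_node_template(target_fragment: str) -> str:
    """Collect the {column} placeholders appearing inside an anonymous
    blank node block and build a blank-node template whose label has a
    randomly generated prefix plus one variable per referenced column."""
    columns = re.findall(r"\{(\w+)\}", target_fragment)
    prefix = f"_:b{uuid.uuid4().hex[:8]}"
    return prefix + "".join("_{" + c + "}" for c in sorted(set(columns)))
```

For the temperature example, the fragment `qudt-1-1:numericValue {observedTemperature}^^xsd:double` would yield a template over the single column `observedTemperature`.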
|
1.0
|
Support anonymous blank nodes in OBDA mappings - Feature requested by J. Van Noten on the mailing list
> In the document on SOSA and SSN (https://www.w3.org/TR/vocab-ssn/#iphone_barometer-sosa), I find the following example fragment for the creation of an individual:
> ```
> <Observation/83985> a sosa:Observation ;
> sosa:hasFeatureOfInterest <apartment/134> ;
> sosa:hasResult [
> a qudt-1-1:QuantityValue ;
> qudt-1-1:unit qudt-unit-1-1:DegreeCelsius ;
> qudt-1-1:numericValue "22.4"^^xsd:double ] .
> ```
> In this example, the individual of QuantityValue is anonymous: no specific IRI required.
>
> I have a similar situation, where I want to take those values from a database.
> Theoretically, that would lead to the following target part of an OBDA mapping:
> ```
> :observation/{observationID} a sosa:Observation ;
> sosa:hasFeatureOfInterest :apartment/{apartmentID} ;
> sosa:hasResult [
> a qudt-1-1:QuantityValue ;
> qudt-1-1:unit qudt-unit-1-1:DegreeCelsius ;
> qudt-1-1:numericValue {observedTemperature}^^xsd:double ] .
> ```
>
> Unfortunately, this syntax does not seem to be allowed.
> The only solution I found is to split this in two mappings (again, I show only the target part):
> ```
> :observation/{observationID}> a sosa:Observation ;
> sosa:hasFeatureOfInterest <apartment/{apartmentID}> ;
> sosa:hasResult :value/{observedTemperature} .
>
> :value/{observedTemperature} a qudt-1-1:QuantityValue ;
> qudt-1-1:unit qudt-unit-1-1:DegreeCelsius ;
> qudt-1-1:numericValue {observedTemperature}^^xsd:double .
> ```
> or alternatively, both written in one target specification.
>
To make it easier to implement at the parser level, we could only support anonymous blank nodes as objects, not as subjects.
The parser could create a blank node template with a randomly generated prefix that would take as variables the columns appearing in the subject and in the properties and objects inside the anonymous blank node block.
Note that R2RML does not support this feature.
|
process
|
support anonymous blank nodes in obda mappings feature requested by j van noten on the mailing list in the document on sosa and ssn i find the following example fragment for the creation of an individual a sosa observation sosa hasfeatureofinterest sosa hasresult a qudt quantityvalue qudt unit qudt unit degreecelsius qudt numericvalue xsd double in this example the individual of quantityvalue is anonymous no specific iri required i have a similar situation where i want to take those values from a database theoretically that would lead to the following target part of an obda mapping observation observationid a sosa observation sosa hasfeatureofinterest apartment apartmentid sosa hasresult a qudt quantityvalue qudt unit qudt unit degreecelsius qudt numericvalue observedtemperature xsd double unfortunately this syntax does not seem to be allowed the only solution i found is to split this in two mappings again i show only the target part observation observationid a sosa observation sosa hasfeatureofinterest sosa hasresult value observedtemperature value observedtemperature a qudt quantityvalue qudt unit qudt unit degreecelsius qudt numericvalue observedtemperature xsd double or alternatively both written in one target specification to make it easier to implement at the parser level we could only support anonymous blank nodes as objects not as subjects the parser could create a blank node template with a randomly generated prefix that would take as variables the columns appearing in the subject and in the properties and objects inside the anonymous blank node block note that does not support this feature
| 1
|
9,191
| 12,228,872,784
|
IssuesEvent
|
2020-05-03 21:19:18
|
chfor183/data_science_articles
|
https://api.github.com/repos/chfor183/data_science_articles
|
opened
|
Data Quality
|
Data Data Preprocessing Evaluation
|
## TL;DR
Bias
## Key Takeaways
- 1
- 2
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
## Articles/Ressources
https://towardsdatascience.com/assessing-the-quality-of-data-e5e996a1681b
https://towardsdatascience.com/bias-what-it-means-in-the-big-data-world-6e64893e92a1
|
1.0
|
Data Quality - ## TL;DR
Bias
## Key Takeaways
- 1
- 2
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
## Articles/Ressources
https://towardsdatascience.com/assessing-the-quality-of-data-e5e996a1681b
https://towardsdatascience.com/bias-what-it-means-in-the-big-data-world-6e64893e92a1
|
process
|
data quality tl dr bias key takeaways useful code snippets function test console log notice the blank line before this function articles ressources
| 1
|
3,443
| 6,538,264,758
|
IssuesEvent
|
2017-09-01 04:49:44
|
amaster507/ifbmt
|
https://api.github.com/repos/amaster507/ifbmt
|
opened
|
My Calling List Workflow - John Tinkle
|
idea process
|
I don't know what you have in mind concerning a _call list_, but here is what my "idea" would be.
Right now I am using a spreadsheet. Each state their own page, cities in alphabetical order. As I call, I mark the date and the response (CB-call back, VM- voicemail, SEC-secretary, SP- send packet). It works, decently, but I have to constantly go back and forth to map to make sure I'm not traveling back and forth across state unnecessarily.
Ideal, to me, would be to set a hub zip code, and have a list built with a set mile range. And have a drop down list for responses.
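As a rough illustration of the hub-and-radius idea (a sketch assuming each contact has already been geocoded from its zip code to latitude/longitude — the field names are made up):

```python
from math import asin, cos, radians, sin, sqrt


def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles via the haversine formula."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))


def call_list(contacts, hub, max_miles):
    """Keep contacts within max_miles of the hub, nearest first."""
    measured = [
        (miles_between(hub[0], hub[1], c["lat"], c["lon"]), c)
        for c in contacts
    ]
    return [c for d, c in sorted(measured, key=lambda p: p[0]) if d <= max_miles]
```

The response codes (CB, VM, SEC, SP) would then just be a dropdown field on each contact record.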
|
1.0
|
My Calling List Workflow - John Tinkle - I don't know what you have in mind concerning a _call list_, but here is what my "idea" would be.
Right now I am using a spreadsheet. Each state their own page, cities in alphabetical order. As I call, I mark the date and the response (CB-call back, VM- voicemail, SEC-secretary, SP- send packet). It works, decently, but I have to constantly go back and forth to map to make sure I'm not traveling back and forth across state unnecessarily.
Ideal, to me, would be to set a hub zip code, and have a list built with a set mile range. And have a drop down list for responses.
|
process
|
my calling list workflow john tinkle i don t know what you have in mind concerning a call list but here is what my idea would be right now i am using a spreadsheet each state their own page cities in alphabetical order as i call i mark the date and the response cb call back vm voicemail sec secretary sp send packet it works decently but i have to constantly go back and forth to map to make sure i m not traveling back and forth across state unnecessarily ideal to me would be to set a hub zip code and have a list built with a set mile range and have a drop down list for responses
| 1
|
204,346
| 23,239,510,972
|
IssuesEvent
|
2022-08-03 14:29:45
|
turkdevops/angular
|
https://api.github.com/repos/turkdevops/angular
|
closed
|
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: /integration/cli-hello-world-ivy-i18n/package.json</p>
<p>Path to vulnerable library: /integration/cli-hello-world-ivy-i18n/node_modules/sockjs/package.json,/integration/cli-hello-world-ivy-minimal/node_modules/sockjs/package.json,/integration/cli-hello-world/node_modules/sockjs/package.json,/integration/cli-hello-world-ivy-compat/node_modules/sockjs/package.json,/integration/cli-hello-world-lazy/node_modules/sockjs/package.json,/integration/cli-hello-world-lazy-rollup/node_modules/sockjs/package.json,/integration/ng_update_migrations/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- @angular-devkit/build-angular@file:../../node_modules/-0.900.0-rc.11.tgz (Root Library)
- webpack-dev-server-3.9.0.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/angular/commit/b01d51dcecf180551144cf9c7b013185d013d761">b01d51dcecf180551144cf9c7b013185d013d761</a></p>
<p>Found in base branch: <b>labs/router</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz - autoclosed - ## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: /integration/cli-hello-world-ivy-i18n/package.json</p>
<p>Path to vulnerable library: /integration/cli-hello-world-ivy-i18n/node_modules/sockjs/package.json,/integration/cli-hello-world-ivy-minimal/node_modules/sockjs/package.json,/integration/cli-hello-world/node_modules/sockjs/package.json,/integration/cli-hello-world-ivy-compat/node_modules/sockjs/package.json,/integration/cli-hello-world-lazy/node_modules/sockjs/package.json,/integration/cli-hello-world-lazy-rollup/node_modules/sockjs/package.json,/integration/ng_update_migrations/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- @angular-devkit/build-angular@file:../../node_modules/-0.900.0-rc.11.tgz (Root Library)
- webpack-dev-server-3.9.0.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/angular/commit/b01d51dcecf180551144cf9c7b013185d013d761">b01d51dcecf180551144cf9c7b013185d013d761</a></p>
<p>Found in base branch: <b>labs/router</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in sockjs tgz autoclosed cve medium severity vulnerability vulnerable library sockjs tgz sockjs node is a server counterpart of sockjs client a javascript library that provides a websocket like object in the browser sockjs gives you a coherent cross browser javascript api which creates a low latency full duplex cross domain communication library home page a href path to dependency file integration cli hello world ivy package json path to vulnerable library integration cli hello world ivy node modules sockjs package json integration cli hello world ivy minimal node modules sockjs package json integration cli hello world node modules sockjs package json integration cli hello world ivy compat node modules sockjs package json integration cli hello world lazy node modules sockjs package json integration cli hello world lazy rollup node modules sockjs package json integration ng update migrations node modules sockjs package json dependency hierarchy angular devkit build angular file node modules rc tgz root library webpack dev server tgz x sockjs tgz vulnerable library found in head commit a href found in base branch labs router vulnerability details incorrect handling of upgrade header with the value websocket leads in crashing of containers hosting sockjs apps this affects the package sockjs before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution sockjs step up your open source security game with mend
| 0
|
667,728
| 22,498,696,695
|
IssuesEvent
|
2022-06-23 09:49:12
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
nrf_qspi_nor: Inconsistent state of HOLD and WP for QSPI command execution causes hang on startup for some flash chips
|
bug priority: low platform: nRF area: QSPI
|
**Describe the bug**
In `drivers/flash/nrf_qspi_nor.c`, the function `qspi_send_cmd` by default sets the IO2 and IO3 pins on the QSPI bus to high:
```
nrf_qspi_cinstr_conf_t cinstr_cfg = {
.opcode = cmd->op_code,
.length = xfer_len,
.io2_level = true,
.io3_level = true,
.wipwait = false,
.wren = wren,
};
```
ref: https://github.com/zephyrproject-rtos/zephyr/blob/da6549c452bf0fdc98c16474b097ece87b14e000/drivers/flash/nrf_qspi_nor.c#L470
Later in this file, the function `qspi_device_uninit` calls the function `nrfx_qspi_mem_busy_check` to execute a status register read on the flash to see if the WIP bit is set. It then waits in an idle loop while the bit is set
qspi_device_uninit: https://github.com/zephyrproject-rtos/zephyr/blob/da6549c452bf0fdc98c16474b097ece87b14e000/drivers/flash/nrf_qspi_nor.c#L400
nrfx_qspi_mem_busy_check: https://github.com/NordicSemiconductor/nrfx/blob/55305292a2a8e4149869951451311452f4566e9a/drivers/src/nrfx_qspi.c#L478
The problem I have found is that in this `nrfx` function, the qspi configuration is initialised to have `io2_level` and `io3_level` as false through the `NRFX_QSPI_DEFAULT_CINSTR` macro which is defined as such:
```
/** @brief QSPI custom instruction helper with the default configuration. */
#define NRFX_QSPI_DEFAULT_CINSTR(opc, len) \
{ \
.opcode = (opc), \
.length = (len), \
.io2_level = false, \
.io3_level = false, \
.wipwait = false, \
.wren = false \
}
```
here: https://github.com/NordicSemiconductor/nrfx/blob/55305292a2a8e4149869951451311452f4566e9a/drivers/include/nrfx_qspi.h#L124
--
I have discovered this because I am working on a device which has a Puya Semiconductor [P25Q16H](https://files.seeedstudio.com/wiki/github_weiruanexample/Flash_P25Q16H-UXH-IR_Datasheet.pdf) which appears to have an undocumented issue where the HOLD pin behaviour is unusual - I am not exactly sure where this behaviour is standardised, but in most devices HOLD pin is supposed to be ignored for status commands I think. However, for this device it is not, so the MISO pin is held high when HOLD is low, causing the status register to always read as 0xFF. The LSB of this read is the Write In Progress (WIP) bit, and so the `qspi_device_uninit` hangs forever because the device never reads the WIP bit as 0.
My suggestions to fix the problem are (either would solve this):
1. Remove calls to `nrfx_qspi_mem_busy_check` and instead call `qspi_send_cmd` with the ReaD Status Register (RDSR) command
2. Change `nrfx` library to default both IO2 and IO3 lines to high
**Expected behavior**
QSPI peripheral should hold IO2 and IO3 high (ie, deasserted) during all QSPI operations that do not use those pins.
**Additional context**
I patched my local `nrfx` library to set IO2 and IO3 to high in `NRFX_QSPI_DEFAULT_CINSTR` and the P25Q16H flash chip started working fine. I ran the `samples/subsys/fs/littlefs` sample and everything worked great.
The board is a custom one ([seeedstudio_xiao_ble](https://www.seeedstudio.com/Seeed-XIAO-BLE-nRF52840-p-5201.html) with the nrf52840 and P25Q16H)
|
1.0
|
nrf_qspi_nor: Inconsistent state of HOLD and WP for QSPI command execution causes hang on startup for some flash chips - **Describe the bug**
In `drivers/flash/nrf_qspi_nor.c`, the function `qspi_send_cmd` by default sets the IO2 and IO3 pins on the QSPI bus to high:
```
nrf_qspi_cinstr_conf_t cinstr_cfg = {
.opcode = cmd->op_code,
.length = xfer_len,
.io2_level = true,
.io3_level = true,
.wipwait = false,
.wren = wren,
};
```
ref: https://github.com/zephyrproject-rtos/zephyr/blob/da6549c452bf0fdc98c16474b097ece87b14e000/drivers/flash/nrf_qspi_nor.c#L470
Later in this file, the function `qspi_device_uninit` calls the function `nrfx_qspi_mem_busy_check` to execute a status register read on the flash to see if the WIP bit is set. It then waits in an idle loop while the bit is set
qspi_device_uninit: https://github.com/zephyrproject-rtos/zephyr/blob/da6549c452bf0fdc98c16474b097ece87b14e000/drivers/flash/nrf_qspi_nor.c#L400
nrfx_qspi_mem_busy_check: https://github.com/NordicSemiconductor/nrfx/blob/55305292a2a8e4149869951451311452f4566e9a/drivers/src/nrfx_qspi.c#L478
The problem I have found is that in this `nrfx` function, the qspi configuration is initialised to have `io2_level` and `io3_level` as false through the `NRFX_QSPI_DEFAULT_CINSTR` macro which is defined as such:
```
/** @brief QSPI custom instruction helper with the default configuration. */
#define NRFX_QSPI_DEFAULT_CINSTR(opc, len) \
{ \
.opcode = (opc), \
.length = (len), \
.io2_level = false, \
.io3_level = false, \
.wipwait = false, \
.wren = false \
}
```
here: https://github.com/NordicSemiconductor/nrfx/blob/55305292a2a8e4149869951451311452f4566e9a/drivers/include/nrfx_qspi.h#L124
--
I have discovered this because I am working on a device which has a Puya Semiconductor [P25Q16H](https://files.seeedstudio.com/wiki/github_weiruanexample/Flash_P25Q16H-UXH-IR_Datasheet.pdf) which appears to have an undocumented issue where the HOLD pin behaviour is unusual - I am not exactly sure where this behaviour is standardised, but in most devices HOLD pin is supposed to be ignored for status commands I think. However, for this device it is not, so the MISO pin is held high when HOLD is low, causing the status register to always read as 0xFF. The LSB of this read is the Write In Progress (WIP) bit, and so the `qspi_device_uninit` hangs forever because the device never reads the WIP bit as 0.
My suggestions to fix the problem are (either would solve this):
1. Remove calls to `nrfx_qspi_mem_busy_check` and instead call `qspi_send_cmd` with the ReaD Status Register (RDSR) command
2. Change `nrfx` library to default both IO2 and IO3 lines to high
**Expected behavior**
QSPI peripheral should hold IO2 and IO3 high (ie, deasserted) during all QSPI operations that do not use those pins.
**Additional context**
I patched my local `nrfx` library to set IO2 and IO3 to high in `NRFX_QSPI_DEFAULT_CINSTR` and the P25Q16H flash chip started working fine. I ran the `samples/subsys/fs/littlefs` sample and everything worked great.
The board is a custom one ([seeedstudio_xiao_ble](https://www.seeedstudio.com/Seeed-XIAO-BLE-nRF52840-p-5201.html) with the nrf52840 and P25Q16H)
|
non_process
|
nrf qspi nor inconsistent state of hold and wp for qspi command execution causes hang on startup for some flash chips describe the bug in drivers flash nrf qspi nor c the function qspi send cmd by default sets the and pins on the qspi bus to high nrf qspi cinstr conf t cinstr cfg opcode cmd op code length xfer len level true level true wipwait false wren wren ref later in this file the function qspi device uninit calls the function nrfx qspi mem busy check to execute a status register read on the flash to see if the wip bit is set it then waits in an idle loop while the bit is set qspi device uninit nrfx qspi mem busy check the problem i have found is that in this nrfx function the qspi configuration is initialised to have level and level as false through the nrfx qspi default cinstr macro which is defined as such brief qspi custom instruction helper with the default configuration define nrfx qspi default cinstr opc len opcode opc length len level false level false wipwait false wren false here i have discovered this because i am working on a device which has a puya semiconductor which appears to have an undocumented issue where the hold pin behaviour is unusual i am not exactly sure where this behaviour is standardised but in most devices hold pin is supposed to be ignored for status commands i think however for this device it is not so the miso pin is held high when hold is low causing the status register to always read as the lsb of this read is the write in progress wip bit and so the qspi device uninit hangs forever because the device never reads the wip bit as my suggestions to fix the problem are either would solve this remove calls to nrfx qspi mem busy check and instead call qspi send cmd with the read status register rdsr command change nrfx library to default both and lines to high expected behavior qspi peripheral should hold and high ie deasserted during all qspi operations that do not use those pins additional context i patched my local nrfx library to set and to high in nrfx qspi default cinstr and the flash chip started working fine i ran the samples subsys fs littlefs sample and everything worked great the board is a custom one with the and
| 0
|
383,697
| 26,561,800,517
|
IssuesEvent
|
2023-01-20 16:27:39
|
GCTC-NTGC/gc-digital-talent
|
https://api.github.com/repos/GCTC-NTGC/gc-digital-talent
|
closed
|
Cookies - What they are and what's spawning them
|
documentation
|
## Acceptance Criteria
- [ ] List all the cookies on prod
- [ ] Document: Source, purpose, do we control it. For all cookies
- [ ] Add the table or document with this info to our Repo Readme
|
1.0
|
Cookies - What they are and what's spawning them - ## Acceptance Criteria
- [ ] List all the cookies on prod
- [ ] Document: Source, purpose, do we control it. For all cookies
- [ ] Add the table or document with this info to our Repo Readme
|
non_process
|
cookies what they are and what s spawning them acceptance criteria list all the cookies on prod document source purpose do we control it for all cookies add the table or document with this info to our repo readme
| 0
|
197,926
| 14,949,759,826
|
IssuesEvent
|
2021-01-26 12:02:38
|
hoprnet/hoprnet
|
https://api.github.com/repos/hoprnet/hoprnet
|
closed
|
Connection issues between 2 avado nodes.
|
bug manual test
|
## Expected Behavior
Two Avados behind same router should be able to `ping` one another.
## Current Behavior
```
2020-12-01T12:14:02.489Z hopr-core:crawler Contacted:
2020-12-01T12:14:02.489Z hopr-core:crawler - 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh
2020-12-01T12:14:02.489Z hopr-core:crawler crawl complete
2020-12-01T12:14:02.490Z hoprd { type: 'log', msg: 'Crawled network', ts: '2020-12-01T12:14:02.490Z' }
2020-12-01T12:14:26.047Z hoprd:admin Message from client ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:26.047Z hoprd {
type: 'log',
msg: 'admin > ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz ',
ts: '2020-12-01T12:14:26.047Z'
}
2020-12-01T12:14:26.053Z hopr-core:verbose:heartbeat heartbeat connection error Error while dialing 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz (initial)
2020-12-01T12:14:29.053Z hopr-core:verbose:heartbeat heartbeat timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:29.054Z hoprd {
type: 'log',
msg: 'Could not ping node. Error was: Timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz.',
ts: '2020-12-01T12:14:29.054Z'
}
2020-12-01T12:14:31.428Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.430Z hopr-core:verbose:transport attempting to dial directly /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA
2020-12-01T12:14:31.431Z hopr-core:transport Attempting to dial /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA directly
2020-12-01T12:14:31.431Z hopr-core:transport dialing {"family":"ipv4","host":"62.171.148.205","transport":"tcp","port":9091}
2020-12-01T12:14:31.432Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.432Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA
2020-12-01T12:14:31.432Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA directly
2020-12-01T12:14:31.432Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.433Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.433Z hopr-core:verbose:transport attempting to dial directly /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b
2020-12-01T12:14:31.434Z hopr-core:transport Attempting to dial /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b directly
2020-12-01T12:14:31.434Z hopr-core:transport dialing {"family":"ipv4","host":"188.134.78.217","transport":"tcp","port":9091}
2020-12-01T12:14:31.434Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.435Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b
2020-12-01T12:14:31.435Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b directly
2020-12-01T12:14:31.435Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.435Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.436Z hopr-core:verbose:transport attempting to dial directly /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo
2020-12-01T12:14:31.436Z hopr-core:transport Attempting to dial /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo directly
2020-12-01T12:14:31.436Z hopr-core:transport dialing {"family":"ipv4","host":"198.58.115.144","transport":"tcp","port":9091}
2020-12-01T12:14:31.436Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.437Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo
2020-12-01T12:14:31.437Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo directly
2020-12-01T12:14:31.437Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.437Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.438Z hopr-core:verbose:transport attempting to dial directly /ip6/2600:3c00::f03c:92ff:fe5c:e727/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo
2020-12-01T12:14:31.438Z hopr-core:transport Attempting to dial /ip6/2600:3c00::f03c:92ff:fe5c:e727/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo directly
2020-12-01T12:14:31.438Z hopr-core:transport dialing {"family":"ipv6","host":"2600:3c00::f03c:92ff:fe5c:e727","transport":"tcp","port":9091}
2020-12-01T12:14:31.439Z hopr-core:verbose:transport Error connecting: Error: connect EADDRNOTAVAIL 2600:3c00::f03c:92ff:fe5c:e727:9091 - Local (:::0)
at internalConnect (net.js:835:16)
at defaultTriggerAsyncIdScope (internal/async_hooks.js:301:12)
at net.js:926:9
at processTicksAndRejections (internal/process/task_queues.js:75:11) {
errno: 'EADDRNOTAVAIL',
code: 'EADDRNOTAVAIL',
syscall: 'connect',
address: '2600:3c00::f03c:92ff:fe5c:e727',
port: 9091
}
2020-12-01T12:14:31.441Z hopr-core:verbose:transport Dial directly unexpected error Error: connection error 2600:3c00::f03c:92ff:fe5c:e727:9091: connect EADDRNOTAVAIL 2600:3c00::f03c:92ff:fe5c:e727:9091 - Local (:::0)
2020-12-01T12:14:31.442Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.443Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA was successful. Continuing with the handshakes.
2020-12-01T12:14:31.444Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.445Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b was successful. Continuing with the handshakes.
2020-12-01T12:14:31.445Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.446Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo was successful. Continuing with the handshakes.
2020-12-01T12:14:31.448Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41418
2020-12-01T12:14:31.456Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:31.459Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41422
2020-12-01T12:14:31.460Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:31.477Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41426
2020-12-01T12:14:31.481Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:31.486Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:31.486Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:31.489Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:31.813Z hopr-core:transport connection opened {"family":"ipv4","host":"62.171.148.205","transport":"tcp","port":9091}
2020-12-01T12:14:31.814Z hopr-core:verbose:transport Establishing a direct connection to /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA was successful. Continuing with the handshakes.
2020-12-01T12:14:31.950Z hopr-core:transport connection opened {"family":"ipv4","host":"198.58.115.144","transport":"tcp","port":9091}
2020-12-01T12:14:31.952Z hopr-core:verbose:transport Establishing a direct connection to /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo was successful. Continuing with the handshakes.
2020-12-01T12:14:32.859Z hopr-core:transport connection opened {"family":"ipv4","host":"188.134.78.217","transport":"tcp","port":9091}
2020-12-01T12:14:32.860Z hopr-core:verbose:transport Establishing a direct connection to /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b was successful. Continuing with the handshakes.
2020-12-01T12:14:33.626Z hopr-core:verbose:transport outbound direct connection /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b upgraded
2020-12-01T12:14:34.020Z hopr-core:verbose:heartbeat heartbeat connection error AggregateError while dialing 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz (subsequent) true
2020-12-01T12:14:34.852Z hopr-core:verbose:transport outbound direct connection /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo upgraded
2020-12-01T12:14:37.251Z hopr-core:verbose:transport outbound direct connection /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA upgraded
2020-12-01T12:14:38.996Z hoprd {
type: 'connected',
msg: '16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh,16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b,16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo,16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA',
ts: '2020-12-01T12:14:38.996Z'
}
2020-12-01T12:14:38.999Z hoprd {
type: 'log',
msg: 'Process stats: mem 88056k (max: 117.890625k) cputime: 4564334',
ts: '2020-12-01T12:14:38.999Z'
}
2020-12-01T12:14:40.995Z hoprd:admin Message from client ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:40.995Z hoprd {
type: 'log',
msg: 'admin > ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz',
ts: '2020-12-01T12:14:40.995Z'
}
2020-12-01T12:14:41.000Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:41.002Z hopr-core:verbose:transport attempting to dial directly /ip4/82.217.43.184/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.002Z hopr-core:transport Attempting to dial /ip4/82.217.43.184/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz directly
2020-12-01T12:14:41.003Z hopr-core:transport dialing {"family":"ipv4","host":"82.217.43.184","transport":"tcp","port":9091}
2020-12-01T12:14:41.003Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:41.004Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.005Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz directly
2020-12-01T12:14:41.005Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:41.006Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:41.007Z hopr-core:verbose:transport attempting to dial directly /ip4/172.33.0.4/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.008Z hopr-core:transport Attempting to dial /ip4/172.33.0.4/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz directly
2020-12-01T12:14:41.008Z hopr-core:transport dialing {"family":"ipv4","host":"172.33.0.4","transport":"tcp","port":9091}
2020-12-01T12:14:41.009Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:41.011Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz was successful. Continuing with the handshakes.
2020-12-01T12:14:41.023Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41608
2020-12-01T12:14:41.024Z hopr-core:verbose:transport Error connecting: Error: connect ECONNREFUSED 82.217.43.184:9091
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1056:14) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '82.217.43.184',
port: 9091
}
2020-12-01T12:14:41.025Z hopr-core:verbose:transport dialing with relay /ip4/82.217.43.184/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.033Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:41.036Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:44.000Z hopr-core:transport connection aborted {"family":"ipv4","host":"172.33.0.4","transport":"tcp","port":9091}
2020-12-01T12:14:44.001Z hopr-core:verbose:heartbeat heartbeat timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:44.006Z hopr-core:verbose:transport dialing with relay /ip4/172.33.0.4/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:44.008Z hoprd {
type: 'log',
msg: 'Could not ping node. Error was: Timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz.',
ts: '2020-12-01T12:14:44.008Z'
}
2020-12-01T12:14:50.404Z hoprd:admin Message from client ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:50.405Z hoprd {
type: 'log',
msg: 'admin > ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz',
ts: '2020-12-01T12:14:50.404Z'
}
2020-12-01T12:14:50.922Z hopr-core:heartbeat Checking nodes older than 1606824787921
2020-12-01T12:14:50.923Z hopr-core:heartbeat ping 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh
2020-12-01T12:14:51.537Z hopr-core:heartbeat ping success to 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh
2020-12-01T12:14:51.537Z hopr-core:network-peers current nodes:
2020-12-01T12:14:51.537Z hopr-core:network-peers id: 16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b, q: 0.2
2020-12-01T12:14:51.538Z hopr-core:network-peers id: 16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA, q: 0.2
2020-12-01T12:14:51.538Z hopr-core:network-peers id: 16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo, q: 0.2
2020-12-01T12:14:51.538Z hopr-core:network-peers id: 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh, q: 1
2020-12-01T12:14:51.655Z hopr-core:transport received answer OK
2020-12-01T12:14:51.684Z hopr-core:transport RelayConnection: after stream switch sink operation 1
2020-12-01T12:14:52.225Z hopr-core:transport relayed connection established
2020-12-01T12:14:52.684Z hopr-core:transport ending WebRTC upgrade due error: undefined
fallback to relayed connection
2020-12-01T12:14:53.410Z hopr-core:verbose:heartbeat heartbeat timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:53.411Z hoprd {
type: 'log',
msg: 'Could not ping node. Error was: Timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz.',
ts: '2020-12-01T12:14:53.411Z'
}
2020-12-01T12:14:57.827Z hopr-core:verbose:heartbeat aborted but no error
2020-12-01T12:14:57.827Z hopr-core:verbose:heartbeat aborted
```
## Steps to Reproduce
Running 2 Avado nodes in same network, one trying to ping the other.
## Context (Environment)
Both Avados running on version 1.56.2 /ipfs/Qmcu4trV9qYRSA7fvoQD7HMF12ccFsfBZXCUw1a3WBFzkb
|
1.0
|
Connection issues between 2 avado nodes. - ## Expected Behavior
Two Avados behind same router should be able to `ping` one another.
## Current Behavior
```
2020-12-01T12:14:02.489Z hopr-core:crawler Contacted:
2020-12-01T12:14:02.489Z hopr-core:crawler - 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh
2020-12-01T12:14:02.489Z hopr-core:crawler crawl complete
2020-12-01T12:14:02.490Z hoprd { type: 'log', msg: 'Crawled network', ts: '2020-12-01T12:14:02.490Z' }
2020-12-01T12:14:26.047Z hoprd:admin Message from client ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:26.047Z hoprd {
type: 'log',
msg: 'admin > ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz ',
ts: '2020-12-01T12:14:26.047Z'
}
2020-12-01T12:14:26.053Z hopr-core:verbose:heartbeat heartbeat connection error Error while dialing 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz (initial)
2020-12-01T12:14:29.053Z hopr-core:verbose:heartbeat heartbeat timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:29.054Z hoprd {
type: 'log',
msg: 'Could not ping node. Error was: Timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz.',
ts: '2020-12-01T12:14:29.054Z'
}
2020-12-01T12:14:31.428Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.430Z hopr-core:verbose:transport attempting to dial directly /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA
2020-12-01T12:14:31.431Z hopr-core:transport Attempting to dial /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA directly
2020-12-01T12:14:31.431Z hopr-core:transport dialing {"family":"ipv4","host":"62.171.148.205","transport":"tcp","port":9091}
2020-12-01T12:14:31.432Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.432Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA
2020-12-01T12:14:31.432Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA directly
2020-12-01T12:14:31.432Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.433Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.433Z hopr-core:verbose:transport attempting to dial directly /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b
2020-12-01T12:14:31.434Z hopr-core:transport Attempting to dial /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b directly
2020-12-01T12:14:31.434Z hopr-core:transport dialing {"family":"ipv4","host":"188.134.78.217","transport":"tcp","port":9091}
2020-12-01T12:14:31.434Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.435Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b
2020-12-01T12:14:31.435Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b directly
2020-12-01T12:14:31.435Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.435Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.436Z hopr-core:verbose:transport attempting to dial directly /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo
2020-12-01T12:14:31.436Z hopr-core:transport Attempting to dial /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo directly
2020-12-01T12:14:31.436Z hopr-core:transport dialing {"family":"ipv4","host":"198.58.115.144","transport":"tcp","port":9091}
2020-12-01T12:14:31.436Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.437Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo
2020-12-01T12:14:31.437Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo directly
2020-12-01T12:14:31.437Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.437Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:31.438Z hopr-core:verbose:transport attempting to dial directly /ip6/2600:3c00::f03c:92ff:fe5c:e727/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo
2020-12-01T12:14:31.438Z hopr-core:transport Attempting to dial /ip6/2600:3c00::f03c:92ff:fe5c:e727/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo directly
2020-12-01T12:14:31.438Z hopr-core:transport dialing {"family":"ipv6","host":"2600:3c00::f03c:92ff:fe5c:e727","transport":"tcp","port":9091}
2020-12-01T12:14:31.439Z hopr-core:verbose:transport Error connecting: Error: connect EADDRNOTAVAIL 2600:3c00::f03c:92ff:fe5c:e727:9091 - Local (:::0)
at internalConnect (net.js:835:16)
at defaultTriggerAsyncIdScope (internal/async_hooks.js:301:12)
at net.js:926:9
at processTicksAndRejections (internal/process/task_queues.js:75:11) {
errno: 'EADDRNOTAVAIL',
code: 'EADDRNOTAVAIL',
syscall: 'connect',
address: '2600:3c00::f03c:92ff:fe5c:e727',
port: 9091
}
2020-12-01T12:14:31.441Z hopr-core:verbose:transport Dial directly unexpected error Error: connection error 2600:3c00::f03c:92ff:fe5c:e727:9091: connect EADDRNOTAVAIL 2600:3c00::f03c:92ff:fe5c:e727:9091 - Local (:::0)
2020-12-01T12:14:31.442Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.443Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA was successful. Continuing with the handshakes.
2020-12-01T12:14:31.444Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.445Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b was successful. Continuing with the handshakes.
2020-12-01T12:14:31.445Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:31.446Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo was successful. Continuing with the handshakes.
2020-12-01T12:14:31.448Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41418
2020-12-01T12:14:31.456Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:31.459Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41422
2020-12-01T12:14:31.460Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:31.477Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41426
2020-12-01T12:14:31.481Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:31.486Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:31.486Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:31.489Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:31.813Z hopr-core:transport connection opened {"family":"ipv4","host":"62.171.148.205","transport":"tcp","port":9091}
2020-12-01T12:14:31.814Z hopr-core:verbose:transport Establishing a direct connection to /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA was successful. Continuing with the handshakes.
2020-12-01T12:14:31.950Z hopr-core:transport connection opened {"family":"ipv4","host":"198.58.115.144","transport":"tcp","port":9091}
2020-12-01T12:14:31.952Z hopr-core:verbose:transport Establishing a direct connection to /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo was successful. Continuing with the handshakes.
2020-12-01T12:14:32.859Z hopr-core:transport connection opened {"family":"ipv4","host":"188.134.78.217","transport":"tcp","port":9091}
2020-12-01T12:14:32.860Z hopr-core:verbose:transport Establishing a direct connection to /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b was successful. Continuing with the handshakes.
2020-12-01T12:14:33.626Z hopr-core:verbose:transport outbound direct connection /ip4/188.134.78.217/tcp/9091/p2p/16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b upgraded
2020-12-01T12:14:34.020Z hopr-core:verbose:heartbeat heartbeat connection error AggregateError while dialing 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz (subsequent) true
2020-12-01T12:14:34.852Z hopr-core:verbose:transport outbound direct connection /ip4/198.58.115.144/tcp/9091/p2p/16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo upgraded
2020-12-01T12:14:37.251Z hopr-core:verbose:transport outbound direct connection /ip4/62.171.148.205/tcp/9091/p2p/16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA upgraded
2020-12-01T12:14:38.996Z hoprd {
type: 'connected',
msg: '16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh,16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b,16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo,16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA',
ts: '2020-12-01T12:14:38.996Z'
}
2020-12-01T12:14:38.999Z hoprd {
type: 'log',
msg: 'Process stats: mem 88056k (max: 117.890625k) cputime: 4564334',
ts: '2020-12-01T12:14:38.999Z'
}
2020-12-01T12:14:40.995Z hoprd:admin Message from client ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:40.995Z hoprd {
type: 'log',
msg: 'admin > ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz',
ts: '2020-12-01T12:14:40.995Z'
}
2020-12-01T12:14:41.000Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:41.002Z hopr-core:verbose:transport attempting to dial directly /ip4/82.217.43.184/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.002Z hopr-core:transport Attempting to dial /ip4/82.217.43.184/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz directly
2020-12-01T12:14:41.003Z hopr-core:transport dialing {"family":"ipv4","host":"82.217.43.184","transport":"tcp","port":9091}
2020-12-01T12:14:41.003Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:41.004Z hopr-core:verbose:transport attempting to dial directly /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.005Z hopr-core:transport Attempting to dial /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz directly
2020-12-01T12:14:41.005Z hopr-core:transport dialing {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:41.006Z hopr-core:verbose:transport filtering multiaddrs
2020-12-01T12:14:41.007Z hopr-core:verbose:transport attempting to dial directly /ip4/172.33.0.4/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.008Z hopr-core:transport Attempting to dial /ip4/172.33.0.4/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz directly
2020-12-01T12:14:41.008Z hopr-core:transport dialing {"family":"ipv4","host":"172.33.0.4","transport":"tcp","port":9091}
2020-12-01T12:14:41.009Z hopr-core:transport connection opened {"family":"ipv4","host":"127.0.0.1","transport":"tcp","port":9091}
2020-12-01T12:14:41.011Z hopr-core:verbose:transport Establishing a direct connection to /ip4/127.0.0.1/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz was successful. Continuing with the handshakes.
2020-12-01T12:14:41.023Z hopr-core:transport:listener new inbound connection /ip4/127.0.0.1/tcp/41608
2020-12-01T12:14:41.024Z hopr-core:verbose:transport Error connecting: Error: connect ECONNREFUSED 82.217.43.184:9091
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1056:14) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '82.217.43.184',
port: 9091
}
2020-12-01T12:14:41.025Z hopr-core:verbose:transport dialing with relay /ip4/82.217.43.184/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:41.033Z hopr-core:verbose:transport Dial directly unexpected error Error: Dialed to the wrong peer: IDs do not match!
2020-12-01T12:14:41.036Z hopr-core:transport:listener:error inbound connection failed Error: Value is null
at Object.readLP (/app/node_modules/it-pb-rpc/src/index.js:37:27)
at processTicksAndRejections (internal/process/task_queues.js:85:5)
at exchange (/app/node_modules/libp2p-secio/src/handshake/exchange.js:18:15)
at handshake (/app/node_modules/libp2p-secio/src/handshake/index.js:11:3)
at Object.secure (/app/node_modules/libp2p-secio/src/index.js:22:3)
at Upgrader._encryptInbound (/app/node_modules/libp2p/src/upgrader.js:358:12)
at Upgrader.upgradeInbound (/app/node_modules/libp2p/src/upgrader.js:96:11)
at Listener.onTCPConnection (/app/node_modules/@hoprnet/hopr-core/src/network/transport/listener.ts:238:14) {
code: 'ERR_ENCRYPTION_FAILED'
}
2020-12-01T12:14:44.000Z hopr-core:transport connection aborted {"family":"ipv4","host":"172.33.0.4","transport":"tcp","port":9091}
2020-12-01T12:14:44.001Z hopr-core:verbose:heartbeat heartbeat timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:44.006Z hopr-core:verbose:transport dialing with relay /ip4/172.33.0.4/tcp/9091/p2p/16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:44.008Z hoprd {
type: 'log',
msg: 'Could not ping node. Error was: Timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz.',
ts: '2020-12-01T12:14:44.008Z'
}
2020-12-01T12:14:50.404Z hoprd:admin Message from client ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:50.405Z hoprd {
type: 'log',
msg: 'admin > ping 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz',
ts: '2020-12-01T12:14:50.404Z'
}
2020-12-01T12:14:50.922Z hopr-core:heartbeat Checking nodes older than 1606824787921
2020-12-01T12:14:50.923Z hopr-core:heartbeat ping 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh
2020-12-01T12:14:51.537Z hopr-core:heartbeat ping success to 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh
2020-12-01T12:14:51.537Z hopr-core:network-peers current nodes:
2020-12-01T12:14:51.537Z hopr-core:network-peers id: 16Uiu2HAkzBR5ZgektYP4WYo2BupXy3MsNBKXkowJpdGtRzQX1T3b, q: 0.2
2020-12-01T12:14:51.538Z hopr-core:network-peers id: 16Uiu2HAkxCGmioEp9YY59ESi26LfvgxzgZYjgQPbc5FwaXUjWHPA, q: 0.2
2020-12-01T12:14:51.538Z hopr-core:network-peers id: 16Uiu2HAkvhdwTsXinjpnwECP1Pnb7iXZ8u2K3u1MvSvn31WCDCvo, q: 0.2
2020-12-01T12:14:51.538Z hopr-core:network-peers id: 16Uiu2HAmTTqMMoxU4m2f6PoHmgTc9QyGTsPR7t844715rmp2M2oh, q: 1
2020-12-01T12:14:51.655Z hopr-core:transport received answer OK
2020-12-01T12:14:51.684Z hopr-core:transport RelayConnection: after stream switch sink operation 1
2020-12-01T12:14:52.225Z hopr-core:transport relayed connection established
2020-12-01T12:14:52.684Z hopr-core:transport ending WebRTC upgrade due error: undefined
fallback to relayed connection
2020-12-01T12:14:53.410Z hopr-core:verbose:heartbeat heartbeat timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz
2020-12-01T12:14:53.411Z hoprd {
type: 'log',
msg: 'Could not ping node. Error was: Timeout while querying 16Uiu2HAkxnXLwUrLQPgahUJbn6KigFECCKFxXgpH2NJp75KGKywz.',
ts: '2020-12-01T12:14:53.411Z'
}
2020-12-01T12:14:57.827Z hopr-core:verbose:heartbeat aborted but no error
2020-12-01T12:14:57.827Z hopr-core:verbose:heartbeat aborted
```
## Steps to Reproduce
Running 2 Avado nodes in the same network, one trying to ping the other.
## Context (Environment)
Both Avados running on version 1.56.2 /ipfs/Qmcu4trV9qYRSA7fvoQD7HMF12ccFsfBZXCUw1a3WBFzkb
|
non_process
|
connection issues between avado nodes expected behavior two avados behind same router should be able to ping one another current behavior hopr core crawler contacted hopr core crawler hopr core crawler crawl complete hoprd type log msg crawled network ts hoprd admin message from client ping hoprd type log msg admin ping ts hopr core verbose heartbeat heartbeat connection error error while dialing initial hopr core verbose heartbeat heartbeat timeout while querying hoprd type log msg could not ping node error was timeout while querying ts hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport 
attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport error connecting error connect eaddrnotavail local at internalconnect net js at defaulttriggerasyncidscope internal async hooks js at net js at processticksandrejections internal process task queues js errno eaddrnotavail code eaddrnotavail syscall connect address port hopr core verbose transport dial directly unexpected error error connection error connect eaddrnotavail local hopr core transport connection opened family host transport tcp port hopr core verbose transport establishing a direct connection to tcp was successful continuing with the handshakes hopr core transport connection opened family host transport tcp port hopr core verbose transport establishing a direct connection to tcp was successful continuing with the handshakes hopr core transport connection opened family host transport tcp port hopr core verbose transport establishing a direct connection to tcp was successful continuing with the handshakes hopr core transport listener new inbound connection tcp hopr core verbose transport dial directly unexpected error error dialed to the wrong peer ids do not match hopr core transport listener new inbound connection tcp hopr core transport listener error inbound connection failed error value is null at object readlp app node modules it pb rpc src index js at processticksandrejections internal process task queues js at exchange app node modules secio src handshake exchange js at handshake app node modules secio src handshake index js at object secure app node modules secio src index js at upgrader encryptinbound app node modules src upgrader js at upgrader upgradeinbound app node modules src upgrader js at listener ontcpconnection app node modules hoprnet hopr core src network transport listener ts code err encryption failed hopr core transport listener new inbound connection tcp hopr core verbose transport dial directly unexpected error 
error dialed to the wrong peer ids do not match hopr core verbose transport dial directly unexpected error error dialed to the wrong peer ids do not match hopr core transport listener error inbound connection failed error value is null at object readlp app node modules it pb rpc src index js at processticksandrejections internal process task queues js at exchange app node modules secio src handshake exchange js at handshake app node modules secio src handshake index js at object secure app node modules secio src index js at upgrader encryptinbound app node modules src upgrader js at upgrader upgradeinbound app node modules src upgrader js at listener ontcpconnection app node modules hoprnet hopr core src network transport listener ts code err encryption failed hopr core transport listener error inbound connection failed error value is null at object readlp app node modules it pb rpc src index js at processticksandrejections internal process task queues js at exchange app node modules secio src handshake exchange js at handshake app node modules secio src handshake index js at object secure app node modules secio src index js at upgrader encryptinbound app node modules src upgrader js at upgrader upgradeinbound app node modules src upgrader js at listener ontcpconnection app node modules hoprnet hopr core src network transport listener ts code err encryption failed hopr core transport connection opened family host transport tcp port hopr core verbose transport establishing a direct connection to tcp was successful continuing with the handshakes hopr core transport connection opened family host transport tcp port hopr core verbose transport establishing a direct connection to tcp was successful continuing with the handshakes hopr core transport connection opened family host transport tcp port hopr core verbose transport establishing a direct connection to tcp was successful continuing with the handshakes hopr core verbose transport outbound direct connection tcp 
upgraded hopr core verbose heartbeat heartbeat connection error aggregateerror while dialing subsequent true hopr core verbose transport outbound direct connection tcp upgraded hopr core verbose transport outbound direct connection tcp upgraded hoprd type connected msg ts hoprd type log msg process stats mem max cputime ts hoprd admin message from client ping hoprd type log msg admin ping ts hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core verbose transport filtering multiaddrs hopr core verbose transport attempting to dial directly tcp hopr core transport attempting to dial tcp directly hopr core transport dialing family host transport tcp port hopr core transport connection opened family host transport tcp port hopr core verbose transport establishing a direct connection to tcp was successful continuing with the handshakes hopr core transport listener new inbound connection tcp hopr core verbose transport error connecting error connect econnrefused at tcpconnectwrap afterconnect net js errno econnrefused code econnrefused syscall connect address port hopr core verbose transport dialing with relay tcp hopr core verbose transport dial directly unexpected error error dialed to the wrong peer ids do not match hopr core transport listener error inbound connection failed error value is null at object readlp app node modules it pb rpc src index js at processticksandrejections internal process task queues js at exchange app node modules secio src handshake exchange js at handshake app node modules secio src handshake index js at object secure app node modules secio src index js 
at upgrader encryptinbound app node modules src upgrader js at upgrader upgradeinbound app node modules src upgrader js at listener ontcpconnection app node modules hoprnet hopr core src network transport listener ts code err encryption failed hopr core transport connection aborted family host transport tcp port hopr core verbose heartbeat heartbeat timeout while querying hopr core verbose transport dialing with relay tcp hoprd type log msg could not ping node error was timeout while querying ts hoprd admin message from client ping hoprd type log msg admin ping ts hopr core heartbeat checking nodes older than hopr core heartbeat ping hopr core heartbeat ping success to hopr core network peers current nodes hopr core network peers id q hopr core network peers id q hopr core network peers id q hopr core network peers id q hopr core transport received answer ok hopr core transport relayconnection after stream switch sink operation hopr core transport relayed connection established hopr core transport ending webrtc upgrade due error undefined fallback to relayed connection hopr core verbose heartbeat heartbeat timeout while querying hoprd type log msg could not ping node error was timeout while querying ts hopr core verbose heartbeat aborted but no error hopr core verbose heartbeat aborted steps to reproduce running avado nodes in same network one trying to ping the other context environment both avados running on version ipfs
| 0
|
22,161
| 30,705,285,610
|
IssuesEvent
|
2023-07-27 05:25:21
|
quark-engine/PETWorks-framework
|
https://api.github.com/repos/quark-engine/PETWorks-framework
|
closed
|
Fix the API for t-closeness computation.
|
issue-processing-state-06
|
The API for t-closeness is incorrectly implemented.
Executing the example code for t-closeness would produce an error.
|
1.0
|
Fix the API for t-closeness computation. - The API for t-closeness is incorrectly implemented.
Executing the example code for t-closeness would produce an error.
|
process
|
fix the api for t closeness computation the api for t closeness is incorrectly implemented executing the example code for t closeness would produce an error
| 1
|
784,252
| 27,562,932,585
|
IssuesEvent
|
2023-03-08 00:04:01
|
bmhco/shared
|
https://api.github.com/repos/bmhco/shared
|
opened
|
Trampoline?
|
enhancement question priority-3 T2h research discuss
|
Our son _loves_ jumping on trampolines; ❤️
which kid doesn't? 🤷‍♂️
Nobody we know owns a trampoline. In more outdoorsy countries like USA, AU or SA trampolines are _very_ common.
The only place he _was_ able to jump on a _real_ trampoline (not a mini/tiny one) was [animapark.pt](https://animapark.pt) ... 🎉
but they have raised the minimum age requirement so he won't be able to go back for a ***`while`*** ... ⏳ 😢
There isn't a place we can go - like in the UK/US there are _many_ indoor play venues.
The nearest play venue we can visit that has trampolines is PlayCenter in Avintes, Vila Nova de Gaia: https://playcenter.pt
<img width="1265" alt="image" src="https://user-images.githubusercontent.com/194400/223578175-1006d225-af19-4b33-8776-ec745d5711a0.png">
Probably not going to do many **`120km`** round-trips just for our son to bounce on a trampoline ... 💭
<img width="1203" alt="image" src="https://user-images.githubusercontent.com/194400/223578298-74934e7e-24d4-466e-ab38-d42eae8594ab.png">
As a child I used to _love_ jumping on trampolines; I could do it for _hours_. It's _very_ good exercise.
see: https://www.healthline.com/health/exercise-fitness/trampoline-exercises
If money was no object (i.e. I was _allowed_ to buy anything I want) I'd get a **`Springfree`** trampoline:
https://www.springfreetrampoline.com/trampoline-bundles/
<img width="1279" alt="image" src="https://user-images.githubusercontent.com/194400/223570641-ebbd07d7-338f-4be6-b9fa-629b73133587.png">
There are obvs _much_ cheaper options e.g:
https://www.decathlon.pt/p/trampolim-hexagonal-240-com-rede-de-protecao/_/R-p-301602
<img width="1297" alt="image" src="https://user-images.githubusercontent.com/194400/223581175-ebcb4d21-be2e-4b08-ada2-62f1ec4ce412.png">
In fact these _much_ cheaper trampolines are exactly what AnimaPark has:

They are "OK" for _supervised_ use but I wouldn't trust the safety of one for regular _unsupervised_ use. 💭
I know from first-hand experience that trampolines can be _very_ dangerous when used improperly. 🙃
Opening this issue just to capture the thoughts / discussion. 💬
Not going to buy one _yet_; don't worry. 👌
It's just been on my mind since we went to Decathlon a few weeks ago and enjoyed some family bounce time ...

<img width="1116" alt="image" src="https://user-images.githubusercontent.com/194400/223583275-b61bba2d-4c60-4f17-8468-86a66f1c6f8b.png">
Need to decide what our medium-term plan is first. 💭
Fairly certain _other_ children would _love_ to visit @home/BMH if there was a trampoline ... 🎉
But I wouldn't risk _anyone's_ safety with a "cheap" trampoline knowing what I know ... 💭
|
1.0
|
Trampoline? - Our son _loves_ jumping on trampolines; ❤️
which kid doesn't? 🤷‍♂️
Nobody we know owns a trampoline. In more outdoorsy countries like USA, AU or SA trampolines are _very_ common.
The only place he _was_ able to jump on a _real_ trampoline (not a mini/tiny one) was [animapark.pt](https://animapark.pt) ... 🎉
but they have raised the minimum age requirement so he won't be able to go back for a ***`while`*** ... ⏳ 😢
There isn't a place we can go - like in the UK/US there are _many_ indoor play venues.
The nearest play venue we can visit that has trampolines is PlayCenter in Avintes, Vila Nova de Gaia: https://playcenter.pt
<img width="1265" alt="image" src="https://user-images.githubusercontent.com/194400/223578175-1006d225-af19-4b33-8776-ec745d5711a0.png">
Probably not going to do many **`120km`** round-trips just for our son to bounce on a trampoline ... 💭
<img width="1203" alt="image" src="https://user-images.githubusercontent.com/194400/223578298-74934e7e-24d4-466e-ab38-d42eae8594ab.png">
As a child I used to _love_ jumping on trampolines; I could do it for _hours_. It's _very_ good exercise.
see: https://www.healthline.com/health/exercise-fitness/trampoline-exercises
If money was no object (i.e. I was _allowed_ to buy anything I want) I'd get a **`Springfree`** trampoline:
https://www.springfreetrampoline.com/trampoline-bundles/
<img width="1279" alt="image" src="https://user-images.githubusercontent.com/194400/223570641-ebbd07d7-338f-4be6-b9fa-629b73133587.png">
There are obvs _much_ cheaper options e.g:
https://www.decathlon.pt/p/trampolim-hexagonal-240-com-rede-de-protecao/_/R-p-301602
<img width="1297" alt="image" src="https://user-images.githubusercontent.com/194400/223581175-ebcb4d21-be2e-4b08-ada2-62f1ec4ce412.png">
In fact these _much_ cheaper trampolines are exactly what AnimaPark has:

They are "OK" for _supervised_ use but I wouldn't trust the safety of one for regular _unsupervised_ use. 💭
I know from first-hand experience that trampolines can be _very_ dangerous when used improperly. 🙃
Opening this issue just to capture the thoughts / discussion. 💬
Not going to buy one _yet_; don't worry. 👌
It's just been on my mind since we went to Decathlon a few weeks ago and enjoyed some family bounce time ...

<img width="1116" alt="image" src="https://user-images.githubusercontent.com/194400/223583275-b61bba2d-4c60-4f17-8468-86a66f1c6f8b.png">
Need to decide what our medium-term plan is first. 💭
Fairly certain _other_ children would _love_ to visit @home/BMH if there was a trampoline ... 🎉
But I wouldn't risk _anyone's_ safety with a "cheap" trampoline knowing what I know ... 💭
|
non_process
|
trampoline our son loves jumping on trampolines ❤️ which kid doesn t 🤷♂️ nobody we know owns a trampoline in more outdoorsy countries like usa au or sa trampolines are very common the only place he was able to jump on a real trampoline not a mini tiny one was 🎉 but they have raised the minimum age requirement so he won t be able to go back for a while ⏳ 😢 there isn t a place we can go like in the uk us there are many indoor play venues the nearest play venue we can visit that has trampolines is playcenter in avintes vila nova de gaia img width alt image src probably not going to do many round trips just for our son to bounce on a trampoline 💭 img width alt image src as a child i used to love jumping on trampolines i could do it for hours it s very good exercise see if money was no object i e i was allowed to buy anything i want i d get a springfree trampoline img width alt image src there are obvs much cheaper options e g img width alt image src in fact these much cheaper trampolines are exactly what animapark has they are ok for supervised use but i wouldn t trust the safety of one for regular unsupervised use 💭 i know from first hand experience that trampolines can be very dangerous when used improperly 🙃 opening this issue just to capture the thoughts discussion 💬 not going to buy one yet don t worry 👌 it s just been on my mind since we went to decathlon a few weeks ago and enjoyed some family bounce time img width alt image src need to decide what our medium term plan is first 💭 fairly certain other children would love visit home bmh if there was a trampoline 🎉 but i wouldn t risk anyone s safety with a cheap trampoline knowing what i know 💭
| 0
|
1,127
| 3,603,991,974
|
IssuesEvent
|
2016-02-03 21:09:28
|
clulab/reach
|
https://api.github.com/repos/clulab/reach
|
closed
|
Display trait not specific to Bio processing: move to Processors
|
processors
|
We discussed this briefly before the evaluation but the discussion was postponed. I don't see any reason that the Display Trait (with field displayLabel) is specific to BioNLP processing. Generalizing it by moving it to Processors seems like a useful optimization.
|
1.0
|
Display trait not specific to Bio processing: move to Processors - We discussed this briefly before the evaluation but the discussion was postponed. I don't see any reason that the Display Trait (with field displayLabel) is specific to BioNLP processing. Generalizing it by moving it to Processors seems like a useful optimization.
|
process
|
display trait not specific to bio processing move to processors we discussed this briefly before the evaluation but the discussion was postponed i don t see any reason that the display trait with field displaylabel is specific to bionlp processing generalizing it by moving it to processors seems like a useful optimization
| 1
|
38,748
| 12,599,167,677
|
IssuesEvent
|
2020-06-11 05:13:27
|
heholek/sheetjs
|
https://api.github.com/repos/heholek/sheetjs
|
opened
|
CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/sheetjs/demos/angular2/node_modules/sockjs/examples/echo/index.html</p>
<p>Path to vulnerable library: /sheetjs/demos/angular2/node_modules/sockjs/examples/echo/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/heholek/sheetjs/commit/36d1c7c9812d499ecf2921399c1a626defa4b938">36d1c7c9812d499ecf2921399c1a626defa4b938</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: 1.9.0b1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/sheetjs/demos/angular2/node_modules/sockjs/examples/echo/index.html</p>
<p>Path to vulnerable library: /sheetjs/demos/angular2/node_modules/sockjs/examples/echo/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/heholek/sheetjs/commit/36d1c7c9812d499ecf2921399c1a626defa4b938">36d1c7c9812d499ecf2921399c1a626defa4b938</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: 1.9.0b1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm sheetjs demos node modules sockjs examples echo index html path to vulnerable library sheetjs demos node modules sockjs examples echo index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
63,049
| 14,656,662,500
|
IssuesEvent
|
2020-12-28 13:55:38
|
fu1771695yongxie/uni-app
|
https://api.github.com/repos/fu1771695yongxie/uni-app
|
opened
|
WS-2019-0026 (Medium) detected in marked-0.3.19.js
|
security vulnerability
|
## WS-2019-0026 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: uni-app/packages/vue-cli-plugin-uni/packages/vue-template-compiler/node_modules/vue/packages/vue-loader/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: uni-app/packages/vue-cli-plugin-uni/packages/vue-template-compiler/node_modules/vue/packages/vue-loader/node_modules/marked/www/../lib/marked.js,uni-app/packages/vue-cli-plugin-uni/packages/vue-loader/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/uni-app/commit/49d3dd1020e7b9b0a47700866658384836bf7529">49d3dd1020e7b9b0a47700866658384836bf7529</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions 0.3.7 and earlier of marked unescape only lowercase while owsers support both lowercase and uppercase x in hexadecimal form of HTML character entity
<p>Publish Date: 2017-12-23
<p>URL: <a href=https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9>WS-2019-0026</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9">https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9</a></p>
<p>Release Date: 2019-03-17</p>
<p>Fix Resolution: 0.3.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0026 (Medium) detected in marked-0.3.19.js - ## WS-2019-0026 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: uni-app/packages/vue-cli-plugin-uni/packages/vue-template-compiler/node_modules/vue/packages/vue-loader/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: uni-app/packages/vue-cli-plugin-uni/packages/vue-template-compiler/node_modules/vue/packages/vue-loader/node_modules/marked/www/../lib/marked.js,uni-app/packages/vue-cli-plugin-uni/packages/vue-loader/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/uni-app/commit/49d3dd1020e7b9b0a47700866658384836bf7529">49d3dd1020e7b9b0a47700866658384836bf7529</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions 0.3.7 and earlier of marked unescape only lowercase while owsers support both lowercase and uppercase x in hexadecimal form of HTML character entity
<p>Publish Date: 2017-12-23
<p>URL: <a href=https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9>WS-2019-0026</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9">https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9</a></p>
<p>Release Date: 2019-03-17</p>
<p>Fix Resolution: 0.3.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in marked js ws medium severity vulnerability vulnerable library marked js a markdown parser built for speed library home page a href path to dependency file uni app packages vue cli plugin uni packages vue template compiler node modules vue packages vue loader node modules marked www demo html path to vulnerable library uni app packages vue cli plugin uni packages vue template compiler node modules vue packages vue loader node modules marked www lib marked js uni app packages vue cli plugin uni packages vue loader node modules marked www lib marked js dependency hierarchy x marked js vulnerable library found in head commit a href found in base branch master vulnerability details versions and earlier of marked unescape only lowercase while owsers support both lowercase and uppercase x in hexadecimal form of html character entity publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
3,111
| 13,107,645,474
|
IssuesEvent
|
2020-08-04 15:34:04
|
submariner-io/releases
|
https://api.github.com/repos/submariner-io/releases
|
closed
|
Script to validate release YAML
|
automation
|
We need a script that validates the release yaml:
Verify it's proper YAML
Version should be a valid version in SemVer compatible format.
Name should be specified
Release notes exist at least for bugs or features.
Components are all present and have a commit hash/tag.
|
1.0
|
Script to validate release YAML - We need a script that validates the release yaml:
Verify it's proper YAML
Version should be a valid version in SemVer compatible format.
Name should be specified
Release notes exist at least for bugs or features.
Components are all present and have a commit hash/tag.
|
non_process
|
script to validate release yaml we need a script that validates the release yaml verify it s proper yaml version should be a valid version in semver compatible format name should be specified release notes exist at least for bugs or features components are all present and have a commit hash tag
| 0
|
1,880
| 2,576,574,491
|
IssuesEvent
|
2015-02-12 11:09:16
|
Financial-Times/o-overlay
|
https://api.github.com/repos/Financial-Times/o-overlay
|
opened
|
Alignment of arrow - compact overlay
|
design
|
Arrow at the top of the overlay box allowed to be aligned. i.e. a certain distance from the corners rather than automatically centred.

|
1.0
|
Alignment of arrow - compact overlay - Arrow at the top of the overlay box allowed to be aligned. i.e. a certain distance from the corners rather than automatically centred.

|
non_process
|
alignment of arrow compact overlay arrow at the top of the overlay box allowed to be aligned i e a certain distance from the corners rather than automatically centred
| 0
|
5,742
| 8,580,910,360
|
IssuesEvent
|
2018-11-13 13:23:12
|
easy-software-ufal/annotations_repos
|
https://api.github.com/repos/easy-software-ufal/annotations_repos
|
opened
|
icsharpcode/RefactoringEssentials ConvertMethodGroupToLambdaCodeRefactoringProvider invalid when method has ConditionalAttribute
|
ADA C# test wrong processing
|
Issue: `https://github.com/icsharpcode/RefactoringEssentials/issues/115`
PR: `https://github.com/icsharpcode/RefactoringEssentials/commit/911dc99a5fcdd2c07822693d1c4ffe61ea5eb045`
|
1.0
|
icsharpcode/RefactoringEssentials ConvertMethodGroupToLambdaCodeRefactoringProvider invalid when method has ConditionalAttribute - Issue: `https://github.com/icsharpcode/RefactoringEssentials/issues/115`
PR: `https://github.com/icsharpcode/RefactoringEssentials/commit/911dc99a5fcdd2c07822693d1c4ffe61ea5eb045`
|
process
|
icsharpcode refactoringessentials convertmethodgrouptolambdacoderefactoringprovider invalid when method has conditionalattribute issue pr
| 1
|
14,829
| 18,167,899,656
|
IssuesEvent
|
2021-09-27 16:26:03
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
closed
|
Move monthly team meetings from Monday to Wednesday
|
:label: team-process type: task
|
### Description
We should move our team meetings off of either Mondays or Fridays, and onto Tue/Wed/Thu. We should pick one of these days for our monthly team meetings.
### Value / benefit
By avoiding Mon/Fri, we'll be less likely to lose people because of holidays or vacation, and we'll all more easily be able to have flexible weekend working schedules :-)
### Implementation details
We should plan around some other important meetings as well:
Other meetings to consider:
- Weekly sprint planning meeting: Tuesdays
- Monthly steering council meetings: Tuesdays
- Jupyter Community meeting: Tuesdays
- Turing Way Collaboration Cafes: Wednesdays
- Berkeley tech strategy meeting: Thursdays
Another thing to consider is how this meeting fits in with our weekly "sprint planning" meeting. It feels natural to me that the monthly meeting comes towards the end of a sprint, rather than at the beginning of a new one. This more naturally creates space for reflection and discussion rather than thinking about tactical issues. We should decide if we want to follow the same pattern in moving the meeting time.
### Tasks to complete
- [x] Arrive at a time for meeting planning to work
- [x] Update Google Calendars for future meeting changes
- [ ] Prep a PR to update our team meeting practices
- [ ] On **Wednesday the 29th** begin new process for team meetings
### Decision
We'll follow the plan in this comment: https://github.com/2i2c-org/team-compass/issues/237#issuecomment-914666552
### Updates
_No response_
|
1.0
|
Move monthly team meetings from Monday to Wednesday - ### Description
We should move our team meetings off of either Mondays or Fridays, and onto Tue/Wed/Thu. We should pick one of these days for our monthly team meetings.
### Value / benefit
By avoiding Mon/Fri, we'll be less likely to lose people because of holidays or vacation, and we'll all more easily be able to have flexible weekend working schedules :-)
### Implementation details
We should plan around some other important meetings as well:
Other meetings to consider:
- Weekly sprint planning meeting: Tuesdays
- Monthly steering council meetings: Tuesdays
- Jupyter Community meeting: Tuesdays
- Turing Way Collaboration Cafes: Wednesdays
- Berkeley tech strategy meeting: Thursdays
Another thing to consider is how this meeting fits in with our weekly "sprint planning" meeting. It feels natural to me that the monthly meeting comes towards the end of a sprint, rather than at the beginning of a new one. This more naturally creates space for reflection and discussion rather than thinking about tactical issues. We should decide if we want to follow the same pattern in moving the meeting time.
### Tasks to complete
- [x] Arrive at a time for meeting planning to work
- [x] Update Google Calendars for future meeting changes
- [ ] Prep a PR to update our team meeting practices
- [ ] On **Wednesday the 29th** begin new process for team meetings
### Decision
We'll follow the plan in this comment: https://github.com/2i2c-org/team-compass/issues/237#issuecomment-914666552
### Updates
_No response_
|
process
|
move monthly team meetings from monday to wednesday description we should move our team meetings off of either mondays or fridays and onto tue wed thu we should pick one of these days for our monthly team meetings value benefit by avoiding mon fri we ll be less likely to lose people because of holidays or vacation and we ll all more easily be able to have flexible weekend working schedules implementation details we should plan around some other important meetings as well other meetings to consider weekly sprint planning meeting tuesdays monthly steering council meetings tuesdays jupyter community meeting tuesdays turing way collaboration cafes wednesdays berkeley tech strategy meeting thursdays another thing to consider is how this meeting fits in with our weekly sprint planning meeting it feels natural to me that the monthly meeting comes towards the end of a sprint rather than at the beginning of a new one this more naturally creates space for reflection and discussion rather than thinking about tactical issues we should decide if we want to follow the same pattern in moving the meeting time tasks to complete arrive at a time for meeting planning to work update google calendars for future meeting changes prep a pr to update our team meeting practices on wednesday the begin new process for team meetings decision we ll follow the plan in this comment updates no response
| 1
|
66,658
| 8,956,764,806
|
IssuesEvent
|
2019-01-26 20:19:01
|
Microsoft/Recommenders
|
https://api.github.com/repos/Microsoft/Recommenders
|
closed
|
ModuleNotFoundError: No module named 'azure.datalake'
|
documentation setup
|
### *What* is affected by this bug?
Installation of conda_bare.yaml environment succeeds, but papermill doesn't work with the environment as it is created.
### In *which* platform does it happen?
ProductName: Mac OS X
ProductVersion: 10.14.2
BuildVersion: 18C54
### *How* do we replicate the issue?
```
$ ./scripts/generate_conda_file.sh
$ conda env create -n bare -f conda_bare.yaml --quiet
$ conda activate bare
$ python
Python 3.6.7 | packaged by conda-forge | (default, Nov 20 2018, 18:37:09)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import papermill
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/__init__.py", line 8, in <module>
from .api import display, record, read_notebook, read_notebooks
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/api.py", line 17, in <module>
from .iorw import load_notebook_node, list_notebook_files
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/iorw.py", line 20, in <module>
from .adl import ADL
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/adl.py", line 2, in <module>
from azure.datalake.store import core, lib
ModuleNotFoundError: No module named 'azure.datalake'
>>>
```
### Expected behavior (i.e. solution)
This appears to be caused by pinning scikit-learn==0.19.1. If I replace that with scikit-learn>=0.19.1, then papermill imports fine.
|
1.0
|
ModuleNotFoundError: No module named 'azure.datalake' - ### *What* is affected by this bug?
Installation of conda_bare.yaml environment succeeds, but papermill doesn't work with the environment as it is created.
### In *which* platform does it happen?
ProductName: Mac OS X
ProductVersion: 10.14.2
BuildVersion: 18C54
### *How* do we replicate the issue?
```
$ ./scripts/generate_conda_file.sh
$ conda env create -n bare -f conda_bare.yaml --quiet
$ conda activate bare
$ python
Python 3.6.7 | packaged by conda-forge | (default, Nov 20 2018, 18:37:09)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import papermill
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/__init__.py", line 8, in <module>
from .api import display, record, read_notebook, read_notebooks
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/api.py", line 17, in <module>
from .iorw import load_notebook_node, list_notebook_files
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/iorw.py", line 20, in <module>
from .adl import ADL
File "/Users/danielsc/.azureml/envs/simple/lib/python3.6/site-packages/papermill/adl.py", line 2, in <module>
from azure.datalake.store import core, lib
ModuleNotFoundError: No module named 'azure.datalake'
>>>
```
### Expected behavior (i.e. solution)
This appears to be caused by pinning scikit-learn==0.19.1. If I replace that with scikit-learn>=0.19.1, then papermill imports fine.
|
non_process
|
modulenotfounderror no module named azure datalake what is affected by this bug installation of conda bare yaml environment succeeds but papermill doesn t work with the environment as it is created in which platform does it happen productname mac os x productversion buildversion how do we replicate the issue scripts generate conda file sh conda env create n bare f conda bare yaml quiet conda activate bare python python packaged by conda forge default nov on darwin type help copyright credits or license for more information import papermill traceback most recent call last file line in file users danielsc azureml envs simple lib site packages papermill init py line in from api import display record read notebook read notebooks file users danielsc azureml envs simple lib site packages papermill api py line in from iorw import load notebook node list notebook files file users danielsc azureml envs simple lib site packages papermill iorw py line in from adl import adl file users danielsc azureml envs simple lib site packages papermill adl py line in from azure datalake store import core lib modulenotfounderror no module named azure datalake expected behavior i e solution this appears to be caused by pinning scikit learn if i replace that with scikit learn then papermill imports fine
| 0
|
222,384
| 7,431,812,161
|
IssuesEvent
|
2018-03-25 18:20:38
|
bounswe/bounswe2018group3
|
https://api.github.com/repos/bounswe/bounswe2018group3
|
closed
|
Meeting Time
|
help wanted priority : high type : planning type : question
|
Me and Anıl will organize these two weeks' meetings. Please inform us about your free time.
|
1.0
|
Meeting Time - Me and Anıl will organize these two weeks' meetings. Please inform us about your free time.
|
non_process
|
meeting time me and anıl will organize these two weeks meetings please inform us about your free time
| 0
|
5,144
| 7,923,812,529
|
IssuesEvent
|
2018-07-05 15:03:44
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
get_data: Make the "Image Collection" a process
|
feedback required process graphs processes work in progress
|
According to https://open-eo.github.io/openeo-api/processgraphs/index.html we currently we have a separate object type definition called "image collection", which basically loads a product from /data. Why do we need an individual type for this? Can't this be simply a process, like everything else?
This would mean we can replace the Image collection object:
```
{
"product_id": <string>
}
```
with a process like this:
```
{
"process_id": "data"
"args": { "product_id": "Sentinel2-L1C" }
}
```
The process name could also be load_imagery, get_data, product, ...
At first this looks more complicated, but overall it would *simplify* the overall process_graph definition, as we would use only one object type (Process). In the ende a value in the argument set would be only:
`<Value> := <string|number|array|boolean|null|Process>`
and not:
`<Value> := <string|number|array|boolean|null|Process|ImageCollection>`
This makes it more general and would allow easier extension, e.g. to load other data types than imagery (time series?)
|
2.0
|
get_data: Make the "Image Collection" a process - According to https://open-eo.github.io/openeo-api/processgraphs/index.html we currently we have a separate object type definition called "image collection", which basically loads a product from /data. Why do we need an individual type for this? Can't this be simply a process, like everything else?
This would mean we can replace the Image collection object:
```
{
"product_id": <string>
}
```
with a process like this:
```
{
"process_id": "data"
"args": { "product_id": "Sentinel2-L1C" }
}
```
The process name could also be load_imagery, get_data, product, ...
At first this looks more complicated, but overall it would *simplify* the overall process_graph definition, as we would use only one object type (Process). In the ende a value in the argument set would be only:
`<Value> := <string|number|array|boolean|null|Process>`
and not:
`<Value> := <string|number|array|boolean|null|Process|ImageCollection>`
This makes it more general and would allow easier extension, e.g. to load other data types than imagery (time series?)
|
process
|
get data make the image collection a process according to we currently we have a separate object type definition called image collection which basically loads a product from data why do we need an individual type for this can t this be simply a process like everything else this would mean we can replace the image collection object product id with a process like this process id data args product id the process name could also be load imagery get data product at first this looks more complicated but overall it would simplify the overall process graph definition as we would use only one object type process in the ende a value in the argument set would be only and not this makes it more general and would allow easier extension e g to load other data types than imagery time series
| 1
|
13,326
| 15,788,222,848
|
IssuesEvent
|
2021-04-01 20:26:58
|
klarEDA/klar-EDA
|
https://api.github.com/repos/klarEDA/klar-EDA
|
closed
|
Implement different normalization techniques in csv data preprocessor
|
data-preprocessing enhancement gssoc21
|
### Description
> The implementation can take one or multiple methods. After the implementations of the method(s), the following things are
achievable :
> - Mean Normalisation of features
> - Standardization of features
### Assumptions
> For standardization of features, it is assumed that the data is in Gaussian Distribution
### Input
> 1. DataFrame
### Output
> Processed data according to the method. After Standardization or Normalisation.
|
1.0
|
Implement different normalization techniques in csv data preprocessor - ### Description
> The implementation can take one or multiple methods. After the implementations of the method(s), the following things are
achievable :
> - Mean Normalisation of features
> - Standardization of features
### Assumptions
> For standardization of features, it is assumed that the data is in Gaussian Distribution
### Input
> 1. DataFrame
### Output
> Processed data according to the method. After Standardization or Normalisation.
|
process
|
implement different normalization techniques in csv data preprocessor description the implementation can take one or multiple methods after the implementations of the method s the following things are achievable mean normalisation of features standardization of features assumptions for standardization of features it is assumed that the data is in gaussian distribution input dataframe output processed data according to the method after standardization or normalisation
| 1
|
9,688
| 12,687,897,601
|
IssuesEvent
|
2020-06-20 19:00:04
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
opened
|
We need to remove hostsfile.mine.nu/hosts0.txt
|
whitelisting process
|
Hi all,
We are using a list: https://hostsfile.mine.nu/hosts0.txt
In this list are many false positives that I have reported, with no response.
For example: https://github.com/Ultimate-Hosts-Blacklist/whitelist/issues/132
>
>
> Unfortunately, no response after two months.
Here are what I have reported and when that are STILL false positive on his (and so OUR ) list!
platform.twitter.com Mar 17 - breaks some tweets embedded in articles in Google News app on Android
3p.ampproject.net Apr 3 - breaks AMP-powered Google News articles
goldmoney.com Apr 7 - site that sells gold
www.goldmoney.com Apr 7 - site that sells gold
mene.com March 25 - site that sells jewelry
rawgit.com April 27 served files from raw.githubusercontent.com
getpocket.com www.getpocket.com api.getpocket.com - May 1 - extension to save articles to read later
yastatic.net - May 18 - hosting static assets for cloud email client Yandex.Mail
ea.com - May 19 - gaming community
www.icq.com icq.com - May 26- chat client
res.cloudinary.com - May 29 - hosting static images in the cloud
a-000*.a-msedge.net - June 2 - CNAME for www.MSN.com
no-ip.org - June 3 - dynamic domain provider
ign.com - gaming site
|
1.0
|
We need to remove hostsfile.mine.nu/hosts0.txt - Hi all,
We are using a list: https://hostsfile.mine.nu/hosts0.txt
In this list are many false positives that I have reported, with no response.
For example: https://github.com/Ultimate-Hosts-Blacklist/whitelist/issues/132
>
>
> Unfortunately, no response after two months.
Here are what I have reported and when that are STILL false positive on his (and so OUR ) list!
platform.twitter.com Mar 17 - breaks some tweets embedded in articles in Google News app on Android
3p.ampproject.net Apr 3 - breaks AMP-powered Google News articles
goldmoney.com Apr 7 - site that sells gold
www.goldmoney.com Apr 7 - site that sells gold
mene.com March 25 - site that sells jewelry
rawgit.com April 27 served files from raw.githubusercontent.com
getpocket.com www.getpocket.com api.getpocket.com - May 1 - extension to save articles to read later
yastatic.net - May 18 - hosting static assets for cloud email client Yandex.Mail
ea.com - May 19 - gaming community
www.icq.com icq.com - May 26- chat client
res.cloudinary.com - May 29 - hosting static images in the cloud
a-000*.a-msedge.net - June 2 - CNAME for www.MSN.com
no-ip.org - June 3 - dynamic domain provider
ign.com - gaming site
|
process
|
we need to remove hostsfile mine nu txt hi all we are using a list in this list are many false positives that i have reported with no response for example unfortunately no response after two months here are what i have reported and when that are still false positive on his and so our list platform twitter com mar breaks some tweets embedded in articles in google news app on android ampproject net apr breaks amp powered google news articles goldmoney com apr site that sells gold apr site that sells gold mene com march site that sells jewelry rawgit com april served files from raw githubusercontent com getpocket com api getpocket com may extension to save articles to read later yastatic net may hosting static assets for cloud email client yandex mail ea com may gaming community icq com may chat client res cloudinary com may hosting static images in the cloud a a msedge net june cname for no ip org june dynamic domain provider ign com gaming site
| 1
|
157,092
| 12,346,154,742
|
IssuesEvent
|
2020-05-15 10:13:46
|
kyma-project/kyma
|
https://api.github.com/repos/kyma-project/kyma
|
closed
|
Switch fake function used in external solution test to real function
|
area/serverless test-missing
|
<!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
<!-- Provide a clear and concise description of the feature. -->
At the moment external solution test, that verifies if Kyma works end-to-end is not using Function. Instead of a function, it is using simple service. To test the full end-to-end the service should be replaced by function.
**Reasons**
<!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. -->
Verify if Kyma works end-to-end.
**Attachments**
<!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
Epic: https://github.com/kyma-project/kyma/issues/8256
|
1.0
|
Switch fake function used in external solution test to real function - <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
<!-- Provide a clear and concise description of the feature. -->
At the moment external solution test, that verifies if Kyma works end-to-end is not using Function. Instead of a function, it is using simple service. To test the full end-to-end the service should be replaced by function.
**Reasons**
<!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. -->
Verify if Kyma works end-to-end.
**Attachments**
<!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
Epic: https://github.com/kyma-project/kyma/issues/8256
|
non_process
|
switch fake function used in external solution test to real function thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description at the moment external solution test that verifies if kyma works end to end is not using function instead of a function it is using simple service to test the full end to end the service should be replaced by function reasons verify if kyma works end to end attachments epic
| 0
|
387,644
| 26,730,218,496
|
IssuesEvent
|
2023-01-30 03:17:56
|
JavierTovar19/git_web_practice_branch
|
https://api.github.com/repos/JavierTovar19/git_web_practice_branch
|
closed
|
Un commit que no sigue la convención de código o arreglo a realizar
|
documentation
|
La convención del mensaje del último commit no es la esperada:
`FIX2: mescla de ramas`
Recuerde que debe tener el siguiente formato: `<Identificador de la corrección>: <Comentario>`
Para realizar la corrección del mensaje de commit ejecute los comandos `git commit --amend` y `git push -f`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.
|
1.0
|
Un commit que no sigue la convención de código o arreglo a realizar - La convención del mensaje del último commit no es la esperada:
`FIX2: mescla de ramas`
Recuerde que debe tener el siguiente formato: `<Identificador de la corrección>: <Comentario>`
Para realizar la corrección del mensaje de commit ejecute los comandos `git commit --amend` y `git push -f`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.
|
non_process
|
un commit que no sigue la convención de código o arreglo a realizar la convención del mensaje del último commit no es la esperada mescla de ramas recuerde que debe tener el siguiente formato para realizar la corrección del mensaje de commit ejecute los comandos git commit amend y git push f este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado
| 0
|
9,909
| 8,237,290,591
|
IssuesEvent
|
2018-09-10 02:00:12
|
APSIMInitiative/ApsimX
|
https://api.github.com/repos/APSIMInitiative/ApsimX
|
closed
|
Chicory documentation
|
bug interface/infrastructure
|
Recent changes to simple leaf appear to have broken the chicory documentation (and possibly others too).
|
1.0
|
Chicory documentation - Recent changes to simple leaf appear to have broken the chicory documentation (and possibly others too).
|
non_process
|
chicory documentation recent changes to simple leaf appear to have broken the chicory documentation and possibly others too
| 0
|