Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
13,702
| 16,457,249,058
|
IssuesEvent
|
2021-05-21 14:07:36
|
cncf/tag-security
|
https://api.github.com/repos/cncf/tag-security
|
closed
|
write up more detail on how we organize security reviewer teams
|
assessment-process inactive
|
@JustinCappos @justincormack @lumjjb and I met and discussed how to organize teams for the security audits.
Here are notes which may only make sense to the people who were present -- capturing them here in advance of PR with better explanation

- Group of 10 people in the security assessment team
- Build a broader community
- Each do 2, which will then be 5 assessments total, then retrospective
Good idea to have 3 when possible
Lead do an initial through review, ask dumb questions, make the doc clear
Timeline
- 48 hours “dumb question phase” — ask a bunch of clarifying questions to make the document clear about
- 5 days, where review team forms an opinion
- Presentation (could be separate meeting)
- Summary info
- Like an executive summary of page of an audit
- Slide too
|
1.0
|
write up more detail on how we organize security reviewer teams - @JustinCappos @justincormack @lumjjb and I met and discussed how to organize teams for the security audits.
Here are notes which may only make sense to the people who were present -- capturing them here in advance of PR with better explanation

- Group of 10 people in the security assessment team
- Build a broader community
- Each do 2, which will then be 5 assessments total, then retrospective
Good idea to have 3 when possible
Lead do an initial through review, ask dumb questions, make the doc clear
Timeline
- 48 hours “dumb question phase” — ask a bunch of clarifying questions to make the document clear about
- 5 days, where review team forms an opinion
- Presentation (could be separate meeting)
- Summary info
- Like an executive summary of page of an audit
- Slide too
|
process
|
write up more detail on how we organize security reviewer teams justincappos justincormack lumjjb and i met and discussed how to organize teams for the security audits here are notes which may only make sense to the people who were present capturing them here in advance of pr with better explanation group of people in the security assessment team build a broader community each do which will then be assessments total then retrospective good idea to have when possible lead do an initial through review ask dumb questions make the doc clear timeline hours “dumb question phase” — ask a bunch of clarifying questions to make the document clear about days where review team forms an opinion presentation could be separate meeting summary info like an executive summary of page of an audit slide too
| 1
|
42,853
| 23,018,626,497
|
IssuesEvent
|
2022-07-22 01:09:04
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
The Enumerable.Chunk can leak memory (.NET 7)
|
area-System.Linq tenet-performance in-pr
|
Hi! I noticed that the [`System.Linq.Enumerable.Chunk`](https://docs.microsoft.com/en-us/dotnet/api/system.linq.enumerable.chunk) operator is [currently](https://github.com/dotnet/runtime/blob/0d96247e828d153e738d0a2067ecbe330d7d58ab/src/libraries/System.Linq/src/System/Linq/Chunk.cs) (.NET 7) implemented in a way that could potentially result in delaying the garbage collection of some objects.
The issue emerges in case the processing of each element contained in each `TSource[]` chunk, involves allocating a large amount of memory. Although the element is removed from the chunk, it is still referenced by the underlying `List<TSource>`, and so it is not eligible for garbage collection until the whole chunk has been fully processed. Below is a minimal demonstration of this behavior:
```C#
public static void Main()
{
var source = Enumerable
.Range(1, 15)
.Select(n => new Item() { Id = n });
var chunkified = source.Chunk(10);
foreach (var chunk in chunkified) ProcessChunk(chunk);
Console.WriteLine($"After foreach");
}
private class Item
{
private byte[] _bytes;
public int Id { get; init; }
public void Load() => _bytes = new byte[5_000_000];
}
static void ProcessChunk(Item[] chunk)
{
Console.WriteLine($"Processing chunk of {chunk.Length} items");
for (int i = 0; i < chunk.Length; i++)
{
var item = chunk[i];
chunk[i] = null;
item.Load();
Console.WriteLine(@$"After processing item #{item
.Id,-2} chunk[{i}], Memory: {GC.GetTotalMemory(true):#,0} bytes");
}
}
```
Output with the .NET 6 implementation:
```none
Processing chunk of 10 items
After processing item #1 chunk[0], Memory: 5,103,968 bytes
After processing item #2 chunk[1], Memory: 5,113,624 bytes
After processing item #3 chunk[2], Memory: 5,113,592 bytes
After processing item #4 chunk[3], Memory: 5,113,560 bytes
After processing item #5 chunk[4], Memory: 5,113,528 bytes
After processing item #6 chunk[5], Memory: 5,113,496 bytes
After processing item #7 chunk[6], Memory: 5,113,464 bytes
After processing item #8 chunk[7], Memory: 5,113,432 bytes
After processing item #9 chunk[8], Memory: 5,113,400 bytes
After processing item #10 chunk[9], Memory: 5,113,368 bytes
Processing chunk of 5 items
After processing item #11 chunk[0], Memory: 5,113,392 bytes
After processing item #12 chunk[1], Memory: 5,113,504 bytes
After processing item #13 chunk[2], Memory: 5,113,472 bytes
After processing item #14 chunk[3], Memory: 5,113,440 bytes
After processing item #15 chunk[4], Memory: 5,113,408 bytes
After foreach
```
Output with the current (.NET 7) implementation:
```none
Processing chunk of 10 items
After processing item #1 chunk[0], Memory: 5,104,248 bytes
After processing item #2 chunk[1], Memory: 10,122,064 bytes
After processing item #3 chunk[2], Memory: 15,117,680 bytes
After processing item #4 chunk[3], Memory: 20,122,176 bytes
After processing item #5 chunk[4], Memory: 25,122,200 bytes
After processing item #6 chunk[5], Memory: 30,122,224 bytes
After processing item #7 chunk[6], Memory: 35,122,248 bytes
After processing item #8 chunk[7], Memory: 40,122,272 bytes
After processing item #9 chunk[8], Memory: 45,122,296 bytes
After processing item #10 chunk[9], Memory: 50,122,320 bytes
Processing chunk of 5 items
After processing item #11 chunk[0], Memory: 5,120,088 bytes
After processing item #12 chunk[1], Memory: 10,113,816 bytes
After processing item #13 chunk[2], Memory: 15,113,840 bytes
After processing item #14 chunk[3], Memory: 20,113,864 bytes
After processing item #15 chunk[4], Memory: 25,113,888 bytes
After foreach
```
[Online demo](https://dotnetfiddle.net/pz3cfy).
The conditions that can trigger this temporary memory leak are quite unusual, but nevertheless I am reporting it because the fix should be relatively easy.
|
True
|
The Enumerable.Chunk can leak memory (.NET 7) - Hi! I noticed that the [`System.Linq.Enumerable.Chunk`](https://docs.microsoft.com/en-us/dotnet/api/system.linq.enumerable.chunk) operator is [currently](https://github.com/dotnet/runtime/blob/0d96247e828d153e738d0a2067ecbe330d7d58ab/src/libraries/System.Linq/src/System/Linq/Chunk.cs) (.NET 7) implemented in a way that could potentially result in delaying the garbage collection of some objects.
The issue emerges in case the processing of each element contained in each `TSource[]` chunk, involves allocating a large amount of memory. Although the element is removed from the chunk, it is still referenced by the underlying `List<TSource>`, and so it is not eligible for garbage collection until the whole chunk has been fully processed. Below is a minimal demonstration of this behavior:
```C#
public static void Main()
{
var source = Enumerable
.Range(1, 15)
.Select(n => new Item() { Id = n });
var chunkified = source.Chunk(10);
foreach (var chunk in chunkified) ProcessChunk(chunk);
Console.WriteLine($"After foreach");
}
private class Item
{
private byte[] _bytes;
public int Id { get; init; }
public void Load() => _bytes = new byte[5_000_000];
}
static void ProcessChunk(Item[] chunk)
{
Console.WriteLine($"Processing chunk of {chunk.Length} items");
for (int i = 0; i < chunk.Length; i++)
{
var item = chunk[i];
chunk[i] = null;
item.Load();
Console.WriteLine(@$"After processing item #{item
.Id,-2} chunk[{i}], Memory: {GC.GetTotalMemory(true):#,0} bytes");
}
}
```
Output with the .NET 6 implementation:
```none
Processing chunk of 10 items
After processing item #1 chunk[0], Memory: 5,103,968 bytes
After processing item #2 chunk[1], Memory: 5,113,624 bytes
After processing item #3 chunk[2], Memory: 5,113,592 bytes
After processing item #4 chunk[3], Memory: 5,113,560 bytes
After processing item #5 chunk[4], Memory: 5,113,528 bytes
After processing item #6 chunk[5], Memory: 5,113,496 bytes
After processing item #7 chunk[6], Memory: 5,113,464 bytes
After processing item #8 chunk[7], Memory: 5,113,432 bytes
After processing item #9 chunk[8], Memory: 5,113,400 bytes
After processing item #10 chunk[9], Memory: 5,113,368 bytes
Processing chunk of 5 items
After processing item #11 chunk[0], Memory: 5,113,392 bytes
After processing item #12 chunk[1], Memory: 5,113,504 bytes
After processing item #13 chunk[2], Memory: 5,113,472 bytes
After processing item #14 chunk[3], Memory: 5,113,440 bytes
After processing item #15 chunk[4], Memory: 5,113,408 bytes
After foreach
```
Output with the current (.NET 7) implementation:
```none
Processing chunk of 10 items
After processing item #1 chunk[0], Memory: 5,104,248 bytes
After processing item #2 chunk[1], Memory: 10,122,064 bytes
After processing item #3 chunk[2], Memory: 15,117,680 bytes
After processing item #4 chunk[3], Memory: 20,122,176 bytes
After processing item #5 chunk[4], Memory: 25,122,200 bytes
After processing item #6 chunk[5], Memory: 30,122,224 bytes
After processing item #7 chunk[6], Memory: 35,122,248 bytes
After processing item #8 chunk[7], Memory: 40,122,272 bytes
After processing item #9 chunk[8], Memory: 45,122,296 bytes
After processing item #10 chunk[9], Memory: 50,122,320 bytes
Processing chunk of 5 items
After processing item #11 chunk[0], Memory: 5,120,088 bytes
After processing item #12 chunk[1], Memory: 10,113,816 bytes
After processing item #13 chunk[2], Memory: 15,113,840 bytes
After processing item #14 chunk[3], Memory: 20,113,864 bytes
After processing item #15 chunk[4], Memory: 25,113,888 bytes
After foreach
```
[Online demo](https://dotnetfiddle.net/pz3cfy).
The conditions that can trigger this temporary memory leak are quite unusual, but nevertheless I am reporting it because the fix should be relatively easy.
|
non_process
|
the enumerable chunk can leak memory net hi i noticed that the operator is net implemented in a way that could potentially result in delaying the garbage collection of some objects the issue emerges in case the processing of each element contained in each tsource chunk involves allocating a large amount of memory although the element is removed from the chunk it is still referenced by the underlying list and so it is not eligible for garbage collection until the whole chunk has been fully processed below is a minimal demonstration of this behavior c public static void main var source enumerable range select n new item id n var chunkified source chunk foreach var chunk in chunkified processchunk chunk console writeline after foreach private class item private byte bytes public int id get init public void load bytes new byte static void processchunk item chunk console writeline processing chunk of chunk length items for int i i chunk length i var item chunk chunk null item load console writeline after processing item item id chunk memory gc gettotalmemory true bytes output with the net implementation none processing chunk of items after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes processing chunk of items after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after foreach output with the current net implementation none processing chunk of items after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes processing chunk of items after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after processing item chunk memory bytes after foreach the conditions that can trigger this temporary memory leak are quite unusual but nevertheless i am reporting it because the fix should be relatively easy
| 0
|
64,352
| 15,874,744,416
|
IssuesEvent
|
2021-04-09 05:44:27
|
open-telemetry/opentelemetry-python
|
https://api.github.com/repos/open-telemetry/opentelemetry-python
|
closed
|
Delete old/renamed opentelemetry-ext-* packages on PyPI
|
backlog build & infra
|
We renamed instrumentation and exporter PyPI packages from `opentelemetry-ext-*` to `opentelemetry-instrumentation-*` and `opentelemetry-exporter-*`. Now there are bunch of deprecated packages stuck at version `0.11b0` or earlier that need to be deleted from PyPI.
However, deleting them outright can break user builds if they are depending on these old packages. There are a few options to mitigate this and some statistics on package downloads we can follow to determine how safe it is.
|
1.0
|
Delete old/renamed opentelemetry-ext-* packages on PyPI - We renamed instrumentation and exporter PyPI packages from `opentelemetry-ext-*` to `opentelemetry-instrumentation-*` and `opentelemetry-exporter-*`. Now there are bunch of deprecated packages stuck at version `0.11b0` or earlier that need to be deleted from PyPI.
However, deleting them outright can break user builds if they are depending on these old packages. There are a few options to mitigate this and some statistics on package downloads we can follow to determine how safe it is.
|
non_process
|
delete old renamed opentelemetry ext packages on pypi we renamed instrumentation and exporter pypi packages from opentelemetry ext to opentelemetry instrumentation and opentelemetry exporter now there are bunch of deprecated packages stuck at version or earlier that need to be deleted from pypi however deleting them outright can break user builds if they are depending on these old packages there are a few options to mitigate this and some statistics on package downloads we can follow to determine how safe it is
| 0
|
37,257
| 18,245,438,004
|
IssuesEvent
|
2021-10-01 17:44:06
|
firebase/firebase-ios-sdk
|
https://api.github.com/repos/firebase/firebase-ios-sdk
|
closed
|
Catalyst 15.0 with Swift Package Manager: No such module 'FirebasePerformance'
|
api: performance Catalyst
|
<!-- DO NOT DELETE
validate_template=true
template_path=.github/ISSUE_TEMPLATE/bug_report.md
-->
### [REQUIRED] Step 1: Describe your environment
* Xcode version: Version 13.0 beta 5 (13A5212g)
* Firebase SDK version: 8.8.0 (also tried 8.6.1)
* Installation method: `Swift Package Manager`
* Firebase Component: Performance
* M1 Mac
* Catalyst 15.0 (macOS Monterey) target
### [REQUIRED] Step 2: Describe the problem
With the above environment, building after placing an `import FirebasePerformance` results in an error `No such module 'FirebasePerformance'`.
Switching the target to be an iOS App on M1 and building, the module is found. Appears to be specific to Catalyst.
#### Steps to reproduce:
To build Catalyst 15.0 apps, Xcode 13.0 beta 5 is required.
Set project support to iPhone, iPad, Mac.
iOS deployment target to 15.0. macOS deployment target to 12.0.
Use Swift Package Manager to include FirebaseCrashlytics and FirebasePerformance 8.8.0.
Place `import FirebasePerformance` in a Swift file.
Set target to `Mac`.
Build
Observe `No such module 'FirebasePerformance'`.
#### Relevant Code:
```
import FirebasePerformance
```
|
True
|
Catalyst 15.0 with Swift Package Manager: No such module 'FirebasePerformance' - <!-- DO NOT DELETE
validate_template=true
template_path=.github/ISSUE_TEMPLATE/bug_report.md
-->
### [REQUIRED] Step 1: Describe your environment
* Xcode version: Version 13.0 beta 5 (13A5212g)
* Firebase SDK version: 8.8.0 (also tried 8.6.1)
* Installation method: `Swift Package Manager`
* Firebase Component: Performance
* M1 Mac
* Catalyst 15.0 (macOS Monterey) target
### [REQUIRED] Step 2: Describe the problem
With the above environment, building after placing an `import FirebasePerformance` results in an error `No such module 'FirebasePerformance'`.
Switching the target to be an iOS App on M1 and building, the module is found. Appears to be specific to Catalyst.
#### Steps to reproduce:
To build Catalyst 15.0 apps, Xcode 13.0 beta 5 is required.
Set project support to iPhone, iPad, Mac.
iOS deployment target to 15.0. macOS deployment target to 12.0.
Use Swift Package Manager to include FirebaseCrashlytics and FirebasePerformance 8.8.0.
Place `import FirebasePerformance` in a Swift file.
Set target to `Mac`.
Build
Observe `No such module 'FirebasePerformance'`.
#### Relevant Code:
```
import FirebasePerformance
```
|
non_process
|
catalyst with swift package manager no such module firebaseperformance do not delete validate template true template path github issue template bug report md step describe your environment xcode version version beta firebase sdk version also tried installation method swift package manager firebase component performance mac catalyst macos monterey target step describe the problem with the above environment building after placing an import firebaseperformance results in an error no such module firebaseperformance switching the target to be an ios app on and building the module is found appears to be specific to catalyst steps to reproduce to build catalyst apps xcode beta is required set project support to iphone ipad mac ios deployment target to macos deployment target to use swift package manager to include firebasecrashlytics and firebaseperformance place import firebaseperformance in a swift file set target to mac build observe no such module firebaseperformance relevant code import firebaseperformance
| 0
|
20,832
| 27,592,380,962
|
IssuesEvent
|
2023-03-09 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Thu, 9 Mar 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Intermediate and Future Frame Prediction of Geostationary Satellite Imagery With Warp and Refine Network
- **Authors:** Minseok Seo, Yeji Choi, Hyungon Ry, Heesun Park, Hyungkun Bae, Hyesook Lee, Wanseok Seo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04405
- **Pdf link:** https://arxiv.org/pdf/2303.04405
- **Abstract**
Geostationary satellite imagery has applications in climate and weather forecasting, planning natural energy resources, and predicting extreme weather events. For precise and accurate prediction, higher spatial and temporal resolution of geostationary satellite imagery is important. Although recent geostationary satellite resolution has improved, the long-term analysis of climate applications is limited to using multiple satellites from the past to the present due to the different resolutions. To solve this problem, we proposed warp and refine network (WR-Net). WR-Net is divided into an optical flow warp component and a warp image refinement component. We used the TV-L1 algorithm instead of deep learning-based approaches to extract the optical flow warp component. The deep-learning-based model is trained on the human-centric view of the RGB channel and does not work on geostationary satellites, which is gray-scale one-channel imagery. The refinement network refines the warped image through a multi-temporal fusion layer. We evaluated WR-Net by interpolation of temporal resolution at 4 min intervals to 2 min intervals in large-scale GK2A geostationary meteorological satellite imagery. Furthermore, we applied WR-Net to the future frame prediction task and showed that the explicit use of optical flow can help future frame prediction.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### DiM: Distilling Dataset into Generative Model
- **Authors:** Kai Wang, Jianyang Gu, Daquan Zhou, Zheng Zhu, Wei Jiang, Yang You
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04707
- **Pdf link:** https://arxiv.org/pdf/2303.04707
- **Abstract**
Dataset distillation reduces the network training cost by synthesizing small and informative datasets from large-scale ones. Despite the success of the recent dataset distillation algorithms, three drawbacks still limit their wider application: i). the synthetic images perform poorly on large architectures; ii). they need to be re-optimized when the distillation ratio changes; iii). the limited diversity restricts the performance when the distillation ratio is large. In this paper, we propose a novel distillation scheme to \textbf{D}istill information of large train sets \textbf{i}nto generative \textbf{M}odels, named DiM. Specifically, DiM learns to use a generative model to store the information of the target dataset. During the distillation phase, we minimize the differences in logits predicted by a models pool between real and generated images. At the deployment stage, the generative model synthesizes various training samples from random noises on the fly. Due to the simple yet effective designs, the trained DiM can be directly applied to different distillation ratios and large architectures without extra cost. We validate the proposed DiM across 4 datasets and achieve state-of-the-art results on all of them. To the best of our knowledge, we are the first to achieve higher accuracy on complex architectures than simple ones, such as 75.1\% with ResNet-18 and 72.6\% with ConvNet-3 on ten images per class of CIFAR-10. Besides, DiM outperforms previous methods with 10\% $\sim$ 22\% when images per class are 1 and 10 on the SVHN dataset.
## Keyword: ISP
### Neural Vector Fields: Implicit Representation by Explicit Learning
- **Authors:** Xianghui Yang, Guosheng Lin, Zhenghao Chen, Luping Zhou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2303.04341
- **Pdf link:** https://arxiv.org/pdf/2303.04341
- **Abstract**
Deep neural networks (DNNs) are widely applied for nowadays 3D surface reconstruction tasks and such methods can be further divided into two categories, which respectively warp templates explicitly by moving vertices or represent 3D surfaces implicitly as signed or unsigned distance functions. Taking advantage of both advanced explicit learning process and powerful representation ability of implicit functions, we propose a novel 3D representation method, Neural Vector Fields (NVF). It not only adopts the explicit learning process to manipulate meshes directly, but also leverages the implicit representation of unsigned distance functions (UDFs) to break the barriers in resolution and topology. Specifically, our method first predicts the displacements from queries towards the surface and models the shapes as \textit{Vector Fields}. Rather than relying on network differentiation to obtain direction fields as most existing UDF-based methods, the produced vector fields encode the distance and direction fields both and mitigate the ambiguity at "ridge" points, such that the calculation of direction fields is straightforward and differentiation-free. The differentiation-free characteristic enables us to further learn a shape codebook via Vector Quantization, which encodes the cross-object priors, accelerates the training procedure, and boosts model generalization on cross-category reconstruction. The extensive experiments on surface reconstruction benchmarks indicate that our method outperforms those state-of-the-art methods in different evaluation scenarios including watertight vs non-watertight shapes, category-specific vs category-agnostic reconstruction, category-unseen reconstruction, and cross-domain reconstruction. Our code will be publicly released.
### SEMv2: Table Separation Line Detection Based on Conditional Convolution
- **Authors:** Zhenrong Zhang, Pengfei Hu, Jiefeng Ma, Jun Du, Jianshu Zhang, Huihui Zhu, Baocai Yin, Bing Yin, Cong Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04384
- **Pdf link:** https://arxiv.org/pdf/2303.04384
- **Abstract**
Table structure recognition is an indispensable element for enabling machines to comprehend tables. Its primary purpose is to identify the internal structure of a table. Nevertheless, due to the complexity and diversity of their structure and style, it is highly challenging to parse the tabular data into a structured format that machines can comprehend. In this work, we adhere to the principle of the split-and-merge based methods and propose an accurate table structure recognizer, termed SEMv2 (SEM: Split, Embed and Merge). Unlike the previous works in the ``split'' stage, we aim to address the table separation line instance-level discrimination problem and introduce a table separation line detection strategy based on conditional convolution. Specifically, we design the ``split'' in a top-down manner that detects the table separation line instance first and then dynamically predicts the table separation line mask for each instance. The final table separation line shape can be accurately obtained by processing the table separation line mask in a row-wise/column-wise manner. To comprehensively evaluate the SEMv2, we also present a more challenging dataset for table structure recognition, dubbed iFLYTAB, which encompasses multiple style tables in various scenarios such as photos, scanned documents, etc. Extensive experiments on publicly available datasets (e.g. SciTSR, PubTabNet and iFLYTAB) demonstrate the efficacy of our proposed approach. The code and iFLYTAB dataset will be made publicly available upon acceptance of this paper.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Scene Matters: Model-based Deep Video Compression
- **Authors:** Lv Tang, Xinfeng Zhang, Gai Zhang, Xiaoqi Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2303.04557
- **Pdf link:** https://arxiv.org/pdf/2303.04557
- **Abstract**
Video compression has always been a popular research area, where many traditional and deep video compression methods have been proposed. These methods typically rely on signal prediction theory to enhance compression performance by designing high efficient intra and inter prediction strategies and compressing video frames one by one. In this paper, we propose a novel model-based video compression (MVC) framework that regards scenes as the fundamental units for video sequences. Our proposed MVC directly models the intensity variation of the entire video sequence in one scene, seeking non-redundant representations instead of reducing redundancy through spatio-temporal predictions. To achieve this, we employ implicit neural representation (INR) as our basic modeling architecture. To improve the efficiency of video modeling, we first propose context-related spatial positional embedding (CRSPE) and frequency domain supervision (FDS) in spatial context enhancement. For temporal correlation capturing, we design the scene flow constrain mechanism (SFCM) and temporal contrastive loss (TCL). Extensive experimental results demonstrate that our method achieves up to a 20\% bitrate reduction compared to the latest video coding standard H.266 and is more efficient in decoding than existing video coding strategies.
## Keyword: RAW
### Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples
- **Authors:** Jinwei Wang, Hao Wu, Haihua Wang, Jiawei Zhang, Xiangyang Luo, Bin Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04502
- **Pdf link:** https://arxiv.org/pdf/2303.04502
- **Abstract**
The vulnerability of Deep Neural Networks (DNNs) to adversarial examples has been confirmed. Existing adversarial defenses primarily aim at preventing adversarial examples from attacking DNNs successfully, rather than preventing their generation. If the generation of adversarial examples is unregulated, images within reach are no longer secure and pose a threat to non-robust DNNs. Although gradient obfuscation attempts to address this issue, it has been shown to be circumventable. Therefore, we propose a novel adversarial defense mechanism, which is referred to as immune defense and is the example-based pre-defense. This mechanism applies carefully designed quasi-imperceptible perturbations to the raw images to prevent the generation of adversarial examples for the raw images, and thereby protecting both images and DNNs. These perturbed images are referred to as Immune Examples (IEs). In the white-box immune defense, we provide a gradient-based and an optimization-based approach, respectively. Additionally, the more complex black-box immune defense is taken into consideration. We propose Masked Gradient Sign Descent (MGSD) to reduce approximation error and stabilize the update to improve the transferability of IEs and thereby ensure their effectiveness against black-box adversarial attacks. The experimental results demonstrate that the optimization-based approach has superior performance and better visual quality in white-box immune defense. In contrast, the gradient-based approach has stronger transferability and the proposed MGSD significantly improve the transferability of baselines.
### DiM: Distilling Dataset into Generative Model
- **Authors:** Kai Wang, Jianyang Gu, Daquan Zhou, Zheng Zhu, Wei Jiang, Yang You
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04707
- **Pdf link:** https://arxiv.org/pdf/2303.04707
- **Abstract**
Dataset distillation reduces the network training cost by synthesizing small and informative datasets from large-scale ones. Despite the success of the recent dataset distillation algorithms, three drawbacks still limit their wider application: i). the synthetic images perform poorly on large architectures; ii). they need to be re-optimized when the distillation ratio changes; iii). the limited diversity restricts the performance when the distillation ratio is large. In this paper, we propose a novel distillation scheme to \textbf{D}istill information of large train sets \textbf{i}nto generative \textbf{M}odels, named DiM. Specifically, DiM learns to use a generative model to store the information of the target dataset. During the distillation phase, we minimize the differences in logits predicted by a models pool between real and generated images. At the deployment stage, the generative model synthesizes various training samples from random noises on the fly. Due to the simple yet effective designs, the trained DiM can be directly applied to different distillation ratios and large architectures without extra cost. We validate the proposed DiM across 4 datasets and achieve state-of-the-art results on all of them. To the best of our knowledge, we are the first to achieve higher accuracy on complex architectures than simple ones, such as 75.1\% with ResNet-18 and 72.6\% with ConvNet-3 on ten images per class of CIFAR-10. Besides, DiM outperforms previous methods with 10\% $\sim$ 22\% when images per class are 1 and 10 on the SVHN dataset.
## Keyword: raw image
### Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples
- **Authors:** Jinwei Wang, Hao Wu, Haihua Wang, Jiawei Zhang, Xiangyang Luo, Bin Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04502
- **Pdf link:** https://arxiv.org/pdf/2303.04502
- **Abstract**
The vulnerability of Deep Neural Networks (DNNs) to adversarial examples has been confirmed. Existing adversarial defenses primarily aim at preventing adversarial examples from attacking DNNs successfully, rather than preventing their generation. If the generation of adversarial examples is unregulated, images within reach are no longer secure and pose a threat to non-robust DNNs. Although gradient obfuscation attempts to address this issue, it has been shown to be circumventable. Therefore, we propose a novel adversarial defense mechanism, which is referred to as immune defense and is the example-based pre-defense. This mechanism applies carefully designed quasi-imperceptible perturbations to the raw images to prevent the generation of adversarial examples for the raw images, and thereby protecting both images and DNNs. These perturbed images are referred to as Immune Examples (IEs). In the white-box immune defense, we provide a gradient-based and an optimization-based approach, respectively. Additionally, the more complex black-box immune defense is taken into consideration. We propose Masked Gradient Sign Descent (MGSD) to reduce approximation error and stabilize the update to improve the transferability of IEs and thereby ensure their effectiveness against black-box adversarial attacks. The experimental results demonstrate that the optimization-based approach has superior performance and better visual quality in white-box immune defense. In contrast, the gradient-based approach has stronger transferability and the proposed MGSD significantly improve the transferability of baselines.
|
2.0
|
New submissions for Thu, 9 Mar 23 - ## Keyword: events
### Intermediate and Future Frame Prediction of Geostationary Satellite Imagery With Warp and Refine Network
- **Authors:** Minseok Seo, Yeji Choi, Hyungon Ry, Heesun Park, Hyungkun Bae, Hyesook Lee, Wanseok Seo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04405
- **Pdf link:** https://arxiv.org/pdf/2303.04405
- **Abstract**
Geostationary satellite imagery has applications in climate and weather forecasting, planning natural energy resources, and predicting extreme weather events. For precise and accurate prediction, higher spatial and temporal resolution of geostationary satellite imagery is important. Although recent geostationary satellite resolution has improved, the long-term analysis of climate applications is limited to using multiple satellites from the past to the present due to the different resolutions. To solve this problem, we proposed warp and refine network (WR-Net). WR-Net is divided into an optical flow warp component and a warp image refinement component. We used the TV-L1 algorithm instead of deep learning-based approaches to extract the optical flow warp component. The deep-learning-based model is trained on the human-centric view of the RGB channel and does not work on geostationary satellites, which is gray-scale one-channel imagery. The refinement network refines the warped image through a multi-temporal fusion layer. We evaluated WR-Net by interpolation of temporal resolution at 4 min intervals to 2 min intervals in large-scale GK2A geostationary meteorological satellite imagery. Furthermore, we applied WR-Net to the future frame prediction task and showed that the explicit use of optical flow can help future frame prediction.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### DiM: Distilling Dataset into Generative Model
- **Authors:** Kai Wang, Jianyang Gu, Daquan Zhou, Zheng Zhu, Wei Jiang, Yang You
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04707
- **Pdf link:** https://arxiv.org/pdf/2303.04707
- **Abstract**
Dataset distillation reduces the network training cost by synthesizing small and informative datasets from large-scale ones. Despite the success of the recent dataset distillation algorithms, three drawbacks still limit their wider application: i). the synthetic images perform poorly on large architectures; ii). they need to be re-optimized when the distillation ratio changes; iii). the limited diversity restricts the performance when the distillation ratio is large. In this paper, we propose a novel distillation scheme to \textbf{D}istill information of large train sets \textbf{i}nto generative \textbf{M}odels, named DiM. Specifically, DiM learns to use a generative model to store the information of the target dataset. During the distillation phase, we minimize the differences in logits predicted by a models pool between real and generated images. At the deployment stage, the generative model synthesizes various training samples from random noises on the fly. Due to the simple yet effective designs, the trained DiM can be directly applied to different distillation ratios and large architectures without extra cost. We validate the proposed DiM across 4 datasets and achieve state-of-the-art results on all of them. To the best of our knowledge, we are the first to achieve higher accuracy on complex architectures than simple ones, such as 75.1\% with ResNet-18 and 72.6\% with ConvNet-3 on ten images per class of CIFAR-10. Besides, DiM outperforms previous methods with 10\% $\sim$ 22\% when images per class are 1 and 10 on the SVHN dataset.
## Keyword: ISP
### Neural Vector Fields: Implicit Representation by Explicit Learning
- **Authors:** Xianghui Yang, Guosheng Lin, Zhenghao Chen, Luping Zhou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2303.04341
- **Pdf link:** https://arxiv.org/pdf/2303.04341
- **Abstract**
Deep neural networks (DNNs) are widely applied for nowadays 3D surface reconstruction tasks and such methods can be further divided into two categories, which respectively warp templates explicitly by moving vertices or represent 3D surfaces implicitly as signed or unsigned distance functions. Taking advantage of both advanced explicit learning process and powerful representation ability of implicit functions, we propose a novel 3D representation method, Neural Vector Fields (NVF). It not only adopts the explicit learning process to manipulate meshes directly, but also leverages the implicit representation of unsigned distance functions (UDFs) to break the barriers in resolution and topology. Specifically, our method first predicts the displacements from queries towards the surface and models the shapes as \textit{Vector Fields}. Rather than relying on network differentiation to obtain direction fields as most existing UDF-based methods, the produced vector fields encode the distance and direction fields both and mitigate the ambiguity at "ridge" points, such that the calculation of direction fields is straightforward and differentiation-free. The differentiation-free characteristic enables us to further learn a shape codebook via Vector Quantization, which encodes the cross-object priors, accelerates the training procedure, and boosts model generalization on cross-category reconstruction. The extensive experiments on surface reconstruction benchmarks indicate that our method outperforms those state-of-the-art methods in different evaluation scenarios including watertight vs non-watertight shapes, category-specific vs category-agnostic reconstruction, category-unseen reconstruction, and cross-domain reconstruction. Our code will be publicly released.
### SEMv2: Table Separation Line Detection Based on Conditional Convolution
- **Authors:** Zhenrong Zhang, Pengfei Hu, Jiefeng Ma, Jun Du, Jianshu Zhang, Huihui Zhu, Baocai Yin, Bing Yin, Cong Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04384
- **Pdf link:** https://arxiv.org/pdf/2303.04384
- **Abstract**
Table structure recognition is an indispensable element for enabling machines to comprehend tables. Its primary purpose is to identify the internal structure of a table. Nevertheless, due to the complexity and diversity of their structure and style, it is highly challenging to parse the tabular data into a structured format that machines can comprehend. In this work, we adhere to the principle of the split-and-merge based methods and propose an accurate table structure recognizer, termed SEMv2 (SEM: Split, Embed and Merge). Unlike the previous works in the ``split'' stage, we aim to address the table separation line instance-level discrimination problem and introduce a table separation line detection strategy based on conditional convolution. Specifically, we design the ``split'' in a top-down manner that detects the table separation line instance first and then dynamically predicts the table separation line mask for each instance. The final table separation line shape can be accurately obtained by processing the table separation line mask in a row-wise/column-wise manner. To comprehensively evaluate the SEMv2, we also present a more challenging dataset for table structure recognition, dubbed iFLYTAB, which encompasses multiple style tables in various scenarios such as photos, scanned documents, etc. Extensive experiments on publicly available datasets (e.g. SciTSR, PubTabNet and iFLYTAB) demonstrate the efficacy of our proposed approach. The code and iFLYTAB dataset will be made publicly available upon acceptance of this paper.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Scene Matters: Model-based Deep Video Compression
- **Authors:** Lv Tang, Xinfeng Zhang, Gai Zhang, Xiaoqi Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2303.04557
- **Pdf link:** https://arxiv.org/pdf/2303.04557
- **Abstract**
Video compression has always been a popular research area, where many traditional and deep video compression methods have been proposed. These methods typically rely on signal prediction theory to enhance compression performance by designing high efficient intra and inter prediction strategies and compressing video frames one by one. In this paper, we propose a novel model-based video compression (MVC) framework that regards scenes as the fundamental units for video sequences. Our proposed MVC directly models the intensity variation of the entire video sequence in one scene, seeking non-redundant representations instead of reducing redundancy through spatio-temporal predictions. To achieve this, we employ implicit neural representation (INR) as our basic modeling architecture. To improve the efficiency of video modeling, we first propose context-related spatial positional embedding (CRSPE) and frequency domain supervision (FDS) in spatial context enhancement. For temporal correlation capturing, we design the scene flow constrain mechanism (SFCM) and temporal contrastive loss (TCL). Extensive experimental results demonstrate that our method achieves up to a 20\% bitrate reduction compared to the latest video coding standard H.266 and is more efficient in decoding than existing video coding strategies.
## Keyword: RAW
### Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples
- **Authors:** Jinwei Wang, Hao Wu, Haihua Wang, Jiawei Zhang, Xiangyang Luo, Bin Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04502
- **Pdf link:** https://arxiv.org/pdf/2303.04502
- **Abstract**
The vulnerability of Deep Neural Networks (DNNs) to adversarial examples has been confirmed. Existing adversarial defenses primarily aim at preventing adversarial examples from attacking DNNs successfully, rather than preventing their generation. If the generation of adversarial examples is unregulated, images within reach are no longer secure and pose a threat to non-robust DNNs. Although gradient obfuscation attempts to address this issue, it has been shown to be circumventable. Therefore, we propose a novel adversarial defense mechanism, which is referred to as immune defense and is the example-based pre-defense. This mechanism applies carefully designed quasi-imperceptible perturbations to the raw images to prevent the generation of adversarial examples for the raw images, and thereby protecting both images and DNNs. These perturbed images are referred to as Immune Examples (IEs). In the white-box immune defense, we provide a gradient-based and an optimization-based approach, respectively. Additionally, the more complex black-box immune defense is taken into consideration. We propose Masked Gradient Sign Descent (MGSD) to reduce approximation error and stabilize the update to improve the transferability of IEs and thereby ensure their effectiveness against black-box adversarial attacks. The experimental results demonstrate that the optimization-based approach has superior performance and better visual quality in white-box immune defense. In contrast, the gradient-based approach has stronger transferability and the proposed MGSD significantly improve the transferability of baselines.
### DiM: Distilling Dataset into Generative Model
- **Authors:** Kai Wang, Jianyang Gu, Daquan Zhou, Zheng Zhu, Wei Jiang, Yang You
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04707
- **Pdf link:** https://arxiv.org/pdf/2303.04707
- **Abstract**
Dataset distillation reduces the network training cost by synthesizing small and informative datasets from large-scale ones. Despite the success of the recent dataset distillation algorithms, three drawbacks still limit their wider application: i). the synthetic images perform poorly on large architectures; ii). they need to be re-optimized when the distillation ratio changes; iii). the limited diversity restricts the performance when the distillation ratio is large. In this paper, we propose a novel distillation scheme to \textbf{D}istill information of large train sets \textbf{i}nto generative \textbf{M}odels, named DiM. Specifically, DiM learns to use a generative model to store the information of the target dataset. During the distillation phase, we minimize the differences in logits predicted by a models pool between real and generated images. At the deployment stage, the generative model synthesizes various training samples from random noises on the fly. Due to the simple yet effective designs, the trained DiM can be directly applied to different distillation ratios and large architectures without extra cost. We validate the proposed DiM across 4 datasets and achieve state-of-the-art results on all of them. To the best of our knowledge, we are the first to achieve higher accuracy on complex architectures than simple ones, such as 75.1\% with ResNet-18 and 72.6\% with ConvNet-3 on ten images per class of CIFAR-10. Besides, DiM outperforms previous methods with 10\% $\sim$ 22\% when images per class are 1 and 10 on the SVHN dataset.
## Keyword: raw image
### Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples
- **Authors:** Jinwei Wang, Hao Wu, Haihua Wang, Jiawei Zhang, Xiangyang Luo, Bin Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.04502
- **Pdf link:** https://arxiv.org/pdf/2303.04502
- **Abstract**
The vulnerability of Deep Neural Networks (DNNs) to adversarial examples has been confirmed. Existing adversarial defenses primarily aim at preventing adversarial examples from attacking DNNs successfully, rather than preventing their generation. If the generation of adversarial examples is unregulated, images within reach are no longer secure and pose a threat to non-robust DNNs. Although gradient obfuscation attempts to address this issue, it has been shown to be circumventable. Therefore, we propose a novel adversarial defense mechanism, which is referred to as immune defense and is the example-based pre-defense. This mechanism applies carefully designed quasi-imperceptible perturbations to the raw images to prevent the generation of adversarial examples for the raw images, and thereby protecting both images and DNNs. These perturbed images are referred to as Immune Examples (IEs). In the white-box immune defense, we provide a gradient-based and an optimization-based approach, respectively. Additionally, the more complex black-box immune defense is taken into consideration. We propose Masked Gradient Sign Descent (MGSD) to reduce approximation error and stabilize the update to improve the transferability of IEs and thereby ensure their effectiveness against black-box adversarial attacks. The experimental results demonstrate that the optimization-based approach has superior performance and better visual quality in white-box immune defense. In contrast, the gradient-based approach has stronger transferability and the proposed MGSD significantly improve the transferability of baselines.
|
process
|
new submissions for thu mar keyword events intermediate and future frame prediction of geostationary satellite imagery with warp and refine network authors minseok seo yeji choi hyungon ry heesun park hyungkun bae hyesook lee wanseok seo subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract geostationary satellite imagery has applications in climate and weather forecasting planning natural energy resources and predicting extreme weather events for precise and accurate prediction higher spatial and temporal resolution of geostationary satellite imagery is important although recent geostationary satellite resolution has improved the long term analysis of climate applications is limited to using multiple satellites from the past to the present due to the different resolutions to solve this problem we proposed warp and refine network wr net wr net is divided into an optical flow warp component and a warp image refinement component we used the tv algorithm instead of deep learning based approaches to extract the optical flow warp component the deep learning based model is trained on the human centric view of the rgb channel and does not work on geostationary satellites which is gray scale one channel imagery the refinement network refines the warped image through a multi temporal fusion layer we evaluated wr net by interpolation of temporal resolution at min intervals to min intervals in large scale geostationary meteorological satellite imagery furthermore we applied wr net to the future frame prediction task and showed that the explicit use of optical flow can help future frame prediction keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb dim distilling dataset into generative model authors kai wang jianyang gu daquan zhou zheng zhu wei jiang yang you subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract dataset distillation reduces the network training cost by synthesizing small and informative datasets from large scale ones despite the success of the recent dataset distillation algorithms three drawbacks still limit their wider application i the synthetic images perform poorly on large architectures ii they need to be re optimized when the distillation ratio changes iii the limited diversity restricts the performance when the distillation ratio is large in this paper we propose a novel distillation scheme to textbf d istill information of large train sets textbf i nto generative textbf m odels named dim specifically dim learns to use a generative model to store the information of the target dataset during the distillation phase we minimize the differences in logits predicted by a models pool between real and generated images at the deployment stage the generative model synthesizes various training samples from random noises on the fly due to the simple yet effective designs the trained dim can be directly applied to different distillation ratios and large architectures without extra cost we validate the proposed dim across datasets and achieve state of the art results on all of them to the best of our knowledge we are the first to achieve higher accuracy on complex architectures than simple ones such as with resnet and with convnet on ten images per class of cifar besides dim outperforms previous methods with sim when images per class are and on the svhn dataset keyword isp neural vector fields implicit representation by explicit learning authors xianghui yang guosheng lin zhenghao chen luping zhou subjects computer vision and pattern recognition cs cv artificial intelligence cs ai graphics cs gr arxiv link pdf link abstract deep neural networks dnns are widely applied for nowadays surface reconstruction tasks and such methods can be further divided into two categories which respectively warp templates explicitly by moving vertices or represent surfaces implicitly as signed or unsigned distance functions taking advantage of both advanced explicit learning process and powerful representation ability of implicit functions we propose a novel representation method neural vector fields nvf it not only adopts the explicit learning process to manipulate meshes directly but also leverages the implicit representation of unsigned distance functions udfs to break the barriers in resolution and topology specifically our method first predicts the displacements from queries towards the surface and models the shapes as textit vector fields rather than relying on network differentiation to obtain direction fields as most existing udf based methods the produced vector fields encode the distance and direction fields both and mitigate the ambiguity at ridge points such that the calculation of direction fields is straightforward and differentiation free the differentiation free characteristic enables us to further learn a shape codebook via vector quantization which encodes the cross object priors accelerates the training procedure and boosts model generalization on cross category reconstruction the extensive experiments on surface reconstruction benchmarks indicate that our method outperforms those state of the art methods in different evaluation scenarios including watertight vs non watertight shapes category specific vs category agnostic reconstruction category unseen reconstruction and cross domain reconstruction our code will be publicly released table separation line detection based on conditional convolution authors zhenrong zhang pengfei hu jiefeng ma jun du jianshu zhang huihui zhu baocai yin bing yin cong liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract table structure recognition is an indispensable element for enabling machines to comprehend tables its primary purpose is to identify the internal structure of a table nevertheless due to the complexity and diversity of their structure and style it is highly challenging to parse the tabular data into a structured format that machines can comprehend in this work we adhere to the principle of the split and merge based methods and propose an accurate table structure recognizer termed sem split embed and merge unlike the previous works in the split stage we aim to address the table separation line instance level discrimination problem and introduce a table separation line detection strategy based on conditional convolution specifically we design the split in a top down manner that detects the table separation line instance first and then dynamically predicts the table separation line mask for each instance the final table separation line shape can be accurately obtained by processing the table separation line mask in a row wise column wise manner to comprehensively evaluate the we also present a more challenging dataset for table structure recognition dubbed iflytab which encompasses multiple style tables in various scenarios such as photos scanned documents etc extensive experiments on publicly available datasets e g scitsr pubtabnet and iflytab demonstrate the efficacy of our proposed approach the code and iflytab dataset will be made publicly available upon acceptance of this paper keyword image signal processing there is no result keyword image signal process there is no result keyword compression scene matters model based deep video compression authors lv tang xinfeng zhang gai zhang xiaoqi ma subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract video compression has always been a popular research area where many traditional and deep video compression methods have been proposed these methods typically rely on signal prediction theory to enhance compression performance by designing high efficient intra and inter prediction strategies and compressing video frames one by one in this paper we propose a novel model based video compression mvc framework that regards scenes as the fundamental units for video sequences our proposed mvc directly models the intensity variation of the entire video sequence in one scene seeking non redundant representations instead of reducing redundancy through spatio temporal predictions to achieve this we employ implicit neural representation inr as our basic modeling architecture to improve the efficiency of video modeling we first propose context related spatial positional embedding crspe and frequency domain supervision fds in spatial context enhancement for temporal correlation capturing we design the scene flow constrain mechanism sfcm and temporal contrastive loss tcl extensive experimental results demonstrate that our method achieves up to a bitrate reduction compared to the latest video coding standard h and is more efficient in decoding than existing video coding strategies keyword raw immune defense a novel adversarial defense mechanism for preventing the generation of adversarial examples authors jinwei wang hao wu haihua wang jiawei zhang xiangyang luo bin ma subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the vulnerability of deep neural networks dnns to adversarial examples has been confirmed existing adversarial defenses primarily aim at preventing adversarial examples from attacking dnns successfully rather than preventing their generation if the generation of adversarial examples is unregulated images within reach are no longer secure and pose a threat to non robust dnns although gradient obfuscation attempts to address this issue it has been shown to be circumventable therefore we propose a novel adversarial defense mechanism which is referred to as immune defense and is the example based pre defense this mechanism applies carefully designed quasi imperceptible perturbations to the raw images to prevent the generation of adversarial examples for the raw images and thereby protecting both images and dnns these perturbed images are referred to as immune examples ies in the white box immune defense we provide a gradient based and an optimization based approach respectively additionally the more complex black box immune defense is taken into consideration we propose masked gradient sign descent mgsd to reduce approximation error and stabilize the update to improve the transferability of ies and thereby ensure their effectiveness against black box adversarial attacks the experimental results demonstrate that the optimization based approach has superior performance and better visual quality in white box immune defense in contrast the gradient based approach has stronger transferability and the proposed mgsd significantly improve the transferability of baselines dim distilling dataset into generative model authors kai wang jianyang gu daquan zhou zheng zhu wei jiang yang you subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract dataset distillation reduces the network training cost by synthesizing small and informative datasets from large scale ones despite the success of the recent dataset distillation algorithms three drawbacks still limit their wider application i the synthetic images perform poorly on large architectures ii they need to be re optimized when the distillation ratio changes iii the limited diversity restricts the performance when the distillation ratio is large in this paper we propose a novel distillation scheme to textbf d istill information of large train sets textbf i nto generative textbf m odels named dim specifically dim learns to use a generative model to store the information of the target dataset during the distillation phase we minimize the differences in logits predicted by a models pool between real and generated images at the deployment stage the generative model synthesizes various training samples from random noises on the fly due to the simple yet effective designs the trained dim can be directly applied to different distillation ratios and large architectures without extra cost we validate the proposed dim across datasets and achieve state of the art results on all of them to the best of our knowledge we are the first to achieve higher accuracy on complex architectures than simple ones such as with resnet and with convnet on ten images per class of cifar besides dim outperforms previous methods with sim when images per class are and on the svhn dataset keyword raw image immune defense a novel adversarial defense mechanism for preventing the generation of adversarial examples authors jinwei wang hao wu haihua wang jiawei zhang xiangyang luo bin ma subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the vulnerability of deep neural networks dnns to adversarial examples has been confirmed existing adversarial defenses primarily aim at preventing adversarial examples from attacking dnns successfully rather than preventing their generation if the generation of adversarial examples is unregulated images within reach are no longer secure and pose a threat to non robust dnns although gradient obfuscation attempts to address this issue it has been shown to be circumventable therefore we propose a novel adversarial defense mechanism which is referred to as immune defense and is the example based pre defense this mechanism applies carefully designed quasi imperceptible perturbations to the raw images to prevent the generation of adversarial examples for the raw images and thereby protecting both images and dnns these perturbed images are referred to as immune examples ies in the white box immune defense we provide a gradient based and an optimization based approach respectively additionally the more complex black box immune defense is taken into consideration we propose masked gradient sign descent mgsd to reduce approximation error and stabilize the update to improve the transferability of ies and thereby ensure their effectiveness against black box adversarial attacks the experimental results demonstrate that the optimization based approach has superior performance and better visual quality in white box immune defense in contrast the gradient based approach has stronger transferability and the proposed mgsd significantly improve the transferability of baselines
| 1
|
15,506
| 19,703,264,856
|
IssuesEvent
|
2022-01-12 18:52:11
|
googleapis/java-analytics-admin
|
https://api.github.com/repos/googleapis/java-analytics-admin
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'analytics-admin' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'analytics-admin' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname analytics admin invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
720,895
| 24,810,240,431
|
IssuesEvent
|
2022-10-25 08:51:23
|
spidernet-io/spiderpool
|
https://api.github.com/repos/spidernet-io/spiderpool
|
closed
|
Edit subnet is rejected
|
issue/not-assign priority/important-soon kind/bug
|
Describe the version
version about:
spiderpool
- v0.2.2
**Describe the bug**
A Subnet, without any IP assigned to the IPPool,But it is not possible to change the size of the ips
**Output of the failure**
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# spidersubnets.spiderpool.spidernet.io "v4-ss-10" was not valid:
# * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs'
#
apiVersion: spiderpool.spidernet.io/v1
kind: SpiderSubnet
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"spiderpool.spidernet.io/v1","kind":"SpiderSubnet","metadata":{"annotations":{},"deletionGracePeriodSeconds":0,"finalizers":["spiderpool.spidernet.io"],"generation":2,"name":"v4-ss-10","resourceVersion":"43769"},"spec":{"gateway":"10.118.88.1","ipVersion":4,"ips":["10.118.88.2-10.118.88.201"],"subnet":"10.118.88.0/24","vlan":0}}
creationTimestamp: "2022-10-25T03:23:49Z"
finalizers:
- spiderpool.spidernet.io
generation: 1
name: v4-ss-10
resourceVersion: "127189"
uid: 1f9b1561-a7a8-4a14-80ba-105bf77198be
spec:
gateway: 10.118.88.1
ipVersion: 4
ips:
- 10.118.88.2-10.118.88.100
subnet: 10.118.88.0/24
vlan: 0
status:
allocatedIPCount: 0
totalIPCount: 200
```
**Additional**
```
# * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs'
status:
allocatedIPCount: 0
totalIPCount: 200
```
|
1.0
|
Edit subnet is rejected - Describe the version
version about:
spiderpool
- v0.2.2
**Describe the bug**
A Subnet, without any IP assigned to the IPPool,But it is not possible to change the size of the ips
**Output of the failure**
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# spidersubnets.spiderpool.spidernet.io "v4-ss-10" was not valid:
# * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs'
#
apiVersion: spiderpool.spidernet.io/v1
kind: SpiderSubnet
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"spiderpool.spidernet.io/v1","kind":"SpiderSubnet","metadata":{"annotations":{},"deletionGracePeriodSeconds":0,"finalizers":["spiderpool.spidernet.io"],"generation":2,"name":"v4-ss-10","resourceVersion":"43769"},"spec":{"gateway":"10.118.88.1","ipVersion":4,"ips":["10.118.88.2-10.118.88.201"],"subnet":"10.118.88.0/24","vlan":0}}
creationTimestamp: "2022-10-25T03:23:49Z"
finalizers:
- spiderpool.spidernet.io
generation: 1
name: v4-ss-10
resourceVersion: "127189"
uid: 1f9b1561-a7a8-4a14-80ba-105bf77198be
spec:
gateway: 10.118.88.1
ipVersion: 4
ips:
- 10.118.88.2-10.118.88.100
subnet: 10.118.88.0/24
vlan: 0
status:
allocatedIPCount: 0
totalIPCount: 200
```
**Additional**
```
# * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs'
status:
allocatedIPCount: 0
totalIPCount: 200
```
|
non_process
|
edit subnet is rejected describe the version version about spiderpool describe the bug a subnet without any ip assigned to the ippool,but it is not possible to change the size of the ips output of the failure please edit the object below lines beginning with a will be ignored and an empty file will abort the edit if an error occurs while saving this file will be reopened with the relevant failures spidersubnets spiderpool spidernet io ss was not valid spec ips forbidden remove some ip ranges that is being used total ip addresses of an subnet are jointly determined by spec ips and spec excludeips apiversion spiderpool spidernet io kind spidersubnet metadata annotations kubectl kubernetes io last applied configuration apiversion spiderpool spidernet io kind spidersubnet metadata annotations deletiongraceperiodseconds finalizers generation name ss resourceversion spec gateway ipversion ips subnet vlan creationtimestamp finalizers spiderpool spidernet io generation name ss resourceversion uid spec gateway ipversion ips subnet vlan status allocatedipcount totalipcount additional spec ips forbidden remove some ip ranges that is being used total ip addresses of an subnet are jointly determined by spec ips and spec excludeips status allocatedipcount totalipcount
| 0
|
7,800
| 10,959,312,910
|
IssuesEvent
|
2019-11-27 11:07:58
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Implement solution for (flexible) process
|
Epic area/process status/draft team/nusse
|
**Goal for MVP2:** Know how to solve this
**Goal for MVP3:** Solve this for MVP - including update workflow to support one-step submit (not two step like in current workflow)
|
1.0
|
Implement solution for (flexible) process - **Goal for MVP2:** Know how to solve this
**Goal for MVP3:** Solve this for MVP - including update workflow to support one-step submit (not two step like in current workflow)
|
process
|
implement solution for flexible process goal for know how to solve this goal for solve this for mvp including update workflow to support one step submit not two step like in current workflow
| 1
|
7,371
| 10,512,645,033
|
IssuesEvent
|
2019-09-27 18:26:50
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Applicant Status: Completed
|
Apply Process Requirements Ready State Dept.
|
Who: Student
What: Status on Dashboard
Why: As a student I would like to know what my status is as an applicant
Statuses:
**Not complete** - when a student is a primary selection and the Hiring Manger does not mark them complete before closing the internship.
**Completed** - when a student is an alternate or primary selection and the Hiring Manger marks them as completed before closing the internship.
**Alternate** - when a student is an alternate selection and the Hiring Manger does not mark them completed before closing the internship
|
1.0
|
Applicant Status: Completed - Who: Student
What: Status on Dashboard
Why: As a student I would like to know what my status is as an applicant
Statuses:
**Not complete** - when a student is a primary selection and the Hiring Manger does not mark them complete before closing the internship.
**Completed** - when a student is an alternate or primary selection and the Hiring Manger marks them as completed before closing the internship.
**Alternate** - when a student is an alternate selection and the Hiring Manger does not mark them completed before closing the internship
|
process
|
applicant status completed who student what status on dashboard why as a student i would like to know what my status is as an applicant statuses not complete when a student is a primary selection and the hiring manger does not mark them complete before closing the internship completed when a student is an alternate or primary selection and the hiring manger marks them as completed before closing the internship alternate when a student is an alternate selection and the hiring manger does not mark them completed before closing the internship
| 1
|
120,494
| 17,644,204,886
|
IssuesEvent
|
2021-08-20 01:57:00
|
logbie/HyperGAN
|
https://api.github.com/repos/logbie/HyperGAN
|
opened
|
CVE-2021-29616 (High) detected in tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl
|
security vulnerability
|
## CVE-2021-29616 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/0a/93/c7bca39b23aae45cd2e85ad3871c81eccc63b9c5276e926511e2e5b0879d/tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/0a/93/c7bca39b23aae45cd2e85ad3871c81eccc63b9c5276e926511e2e5b0879d/tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: HyperGAN/requirements.txt</p>
<p>Path to vulnerable library: HyperGAN/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. The implementation of TrySimplify(https://github.com/tensorflow/tensorflow/blob/c22d88d6ff33031aa113e48aa3fc9aa74ed79595/tensorflow/core/grappler/optimizers/arithmetic_optimizer.cc#L390-L401) has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29616>CVE-2021-29616</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-29616 (High) detected in tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl - ## CVE-2021-29616 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/0a/93/c7bca39b23aae45cd2e85ad3871c81eccc63b9c5276e926511e2e5b0879d/tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/0a/93/c7bca39b23aae45cd2e85ad3871c81eccc63b9c5276e926511e2e5b0879d/tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: HyperGAN/requirements.txt</p>
<p>Path to vulnerable library: HyperGAN/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow_gpu-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. The implementation of TrySimplify(https://github.com/tensorflow/tensorflow/blob/c22d88d6ff33031aa113e48aa3fc9aa74ed79595/tensorflow/core/grappler/optimizers/arithmetic_optimizer.cc#L390-L401) has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29616>CVE-2021-29616</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tensorflow gpu whl cve high severity vulnerability vulnerable library tensorflow gpu whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file hypergan requirements txt path to vulnerable library hypergan requirements txt dependency hierarchy x tensorflow gpu whl vulnerable library vulnerability details tensorflow is an end to end open source platform for machine learning the implementation of trysimplify has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
| 0
|
16,699
| 21,797,984,938
|
IssuesEvent
|
2022-05-15 22:26:48
|
TheUltimateC0der/listrr.pro
|
https://api.github.com/repos/TheUltimateC0der/listrr.pro
|
closed
|
Create Lists that must meet all Genre selected
|
feature-request processing:server-side
|
For example, Crime and Thriller to get Crime Thrillers
|
1.0
|
Create Lists that must meet all Genre selected - For example, Crime and Thriller to get Crime Thrillers
|
process
|
create lists that must meet all genre selected for example crime and thriller to get crime thrillers
| 1
|
228,435
| 18,232,491,512
|
IssuesEvent
|
2021-10-01 00:00:46
|
ImagingDataCommons/IDC-WebApp
|
https://api.github.com/repos/ImagingDataCommons/IDC-WebApp
|
closed
|
Page selection buttons are jumping around (again)
|
bug merged:dev testing needed testing passed In Progress
|
This is what I see in the test tier:

|
2.0
|
Page selection buttons are jumping around (again) - This is what I see in the test tier:

|
non_process
|
page selection buttons are jumping around again this is what i see in the test tier
| 0
|
50,707
| 10,549,209,124
|
IssuesEvent
|
2019-10-03 08:10:55
|
CleverRaven/Cataclysm-DDA
|
https://api.github.com/repos/CleverRaven/Cataclysm-DDA
|
closed
|
Reduce scope of sunlight shadowcasting.
|
Code: Performance Good First Issue [C++]
|
**Is your feature request related to a problem? Please describe.**
During active play, recalculating sunlight levels in outdoor areas consumes a significant amount of processing time.
**Describe the solution you'd like**
On map shift or on initial sunlight propogation, we can track which horizontal map slices are populated with opaque tiles, and start sunlight propagation at that level instead of starting at the top of the skybox.
|
1.0
|
Reduce scope of sunlight shadowcasting. - **Is your feature request related to a problem? Please describe.**
During active play, recalculating sunlight levels in outdoor areas consumes a significant amount of processing time.
**Describe the solution you'd like**
On map shift or on initial sunlight propogation, we can track which horizontal map slices are populated with opaque tiles, and start sunlight propagation at that level instead of starting at the top of the skybox.
|
non_process
|
reduce scope of sunlight shadowcasting is your feature request related to a problem please describe during active play recalculating sunlight levels in outdoor areas consumes a significant amount of processing time describe the solution you d like on map shift or on initial sunlight propogation we can track which horizontal map slices are populated with opaque tiles and start sunlight propagation at that level instead of starting at the top of the skybox
| 0
|
92,708
| 3,872,978,201
|
IssuesEvent
|
2016-04-11 15:29:39
|
cs2103jan2016-W14-2J/main
|
https://api.github.com/repos/cs2103jan2016-W14-2J/main
|
closed
|
Parser to enhance Deadline parsing
|
Priority.medium Type.Enhancement
|
etc: meet Hannah by 7pm by the beach. (if 7pm has passed, set the date to the following day)
etc. finish revising chapter 1 in 2 hours time.
|
1.0
|
Parser to enhance Deadline parsing - etc: meet Hannah by 7pm by the beach. (if 7pm has passed, set the date to the following day)
etc. finish revising chapter 1 in 2 hours time.
|
non_process
|
parser to enhance deadline parsing etc meet hannah by by the beach if has passed set the date to the following day etc finish revising chapter in hours time
| 0
|
9,585
| 12,536,739,608
|
IssuesEvent
|
2020-06-05 01:05:41
|
googleapis/gapic-showcase
|
https://api.github.com/repos/googleapis/gapic-showcase
|
closed
|
chore: enable typescript-smoke-test once proto3_optional support is added
|
process
|
Disabled the `typescript-smoke-test` job in #380. Once gapic-generator-typescript implements proto3_optional support, we can enable it again (if we want to).
|
1.0
|
chore: enable typescript-smoke-test once proto3_optional support is added - Disabled the `typescript-smoke-test` job in #380. Once gapic-generator-typescript implements proto3_optional support, we can enable it again (if we want to).
|
process
|
chore enable typescript smoke test once optional support is added disabled the typescript smoke test job in once gapic generator typescript implements optional support we can enable it again if we want to
| 1
|
676,061
| 23,115,103,540
|
IssuesEvent
|
2022-07-27 15:58:18
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[docdb] Cluster fails on macOS Sierra / Intel Core 2 Duo due to missing SSE4.2 instruction set
|
kind/enhancement area/docdb priority/medium
|
Jira Link: [DB-1798](https://yugabyte.atlassian.net/browse/DB-1798)

$ uname -v
Darwin Kernel Version 16.7.0: Fri Apr 27 17:59:46 PDT 2018; root:xnu-3789.73.13~1/RELEASE_X86_64
$ ./bin/yb-ctl --num_shards_per_tserver 4 create
$ cat /tmp/yugabyte-local-cluster/node-3/disk-1/tserver.err
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0621 19:39:16.525822 3893433280 main_util.cc:24] Not implemented (yb/util/init.cc:62): The CPU on this system (Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz) does not support the SSE4.2 instruction set which is required for running YB.
|
1.0
|
[docdb] Cluster fails on macOS Sierra / Intel Core 2 Duo due to missing SSE4.2 instruction set - Jira Link: [DB-1798](https://yugabyte.atlassian.net/browse/DB-1798)

$ uname -v
Darwin Kernel Version 16.7.0: Fri Apr 27 17:59:46 PDT 2018; root:xnu-3789.73.13~1/RELEASE_X86_64
$ ./bin/yb-ctl --num_shards_per_tserver 4 create
$ cat /tmp/yugabyte-local-cluster/node-3/disk-1/tserver.err
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0621 19:39:16.525822 3893433280 main_util.cc:24] Not implemented (yb/util/init.cc:62): The CPU on this system (Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz) does not support the SSE4.2 instruction set which is required for running YB.
|
non_process
|
cluster fails on macos sierra intel core duo due to missing instruction set jira link uname v darwin kernel version fri apr pdt root xnu release bin yb ctl num shards per tserver create cat tmp yugabyte local cluster node disk tserver err warning logging before initgooglelogging is written to stderr main util cc not implemented yb util init cc the cpu on this system intel r core tm duo cpu does not support the instruction set which is required for running yb
| 0
|
15,440
| 19,656,047,763
|
IssuesEvent
|
2022-01-10 12:37:57
|
plazi/community
|
https://api.github.com/repos/plazi/community
|
closed
|
A GENERIC CLASSIFICATION OF THE THELYPTERIDACEAE by Fawcett to be finished
|
process request
|
This is the file name for the fern book 2021_ThelypteridaceaeFerns_OACopy3.pdf
FF8EFF8EFFE99B4B602B7D49FFA7FFBB
thanks for looking into this - quiet a bit has already been done
tx donat
|
1.0
|
A GENERIC CLASSIFICATION OF THE THELYPTERIDACEAE by Fawcett to be finished - This is the file name for the fern book 2021_ThelypteridaceaeFerns_OACopy3.pdf
FF8EFF8EFFE99B4B602B7D49FFA7FFBB
thanks for looking into this - quiet a bit has already been done
tx donat
|
process
|
a generic classification of the thelypteridaceae by fawcett to be finished this is the file name for the fern book thelypteridaceaeferns pdf thanks for looking into this quiet a bit has already been done tx donat
| 1
|
7,002
| 10,275,927,802
|
IssuesEvent
|
2019-08-24 12:46:54
|
nsubstitute/NSubstitute.Analyzers
|
https://api.github.com/repos/nsubstitute/NSubstitute.Analyzers
|
closed
|
Remove code duplication for checking if given symbol belongs to NSubstitute
|
Non functional requirement
|
There is a lot of code duplication in the codebase related to checks if a given symbol/node is part of NSubstitute. Those checks should be extracted to some sort of helper class/service so as they can be easily reusable without need of copying the code all over the places
|
1.0
|
Remove code duplication for checking if given symbol belongs to NSubstitute - There is a lot of code duplication in the codebase related to checks if a given symbol/node is part of NSubstitute. Those checks should be extracted to some sort of helper class/service so as they can be easily reusable without need of copying the code all over the places
|
non_process
|
remove code duplication for checking if given symbol belongs to nsubstitute there is a lot of code duplication in the codebase related to checks if a given symbol node is part of nsubstitute those checks should be extracted to some sort of helper class service so as they can be easily reusable without need of copying the code all over the places
| 0
|
355,126
| 25,175,642,188
|
IssuesEvent
|
2022-11-11 09:00:44
|
Yuvaraj0702/pe
|
https://api.github.com/repos/Yuvaraj0702/pe
|
opened
|
FAQ sections does not have many FAQ's
|
type.DocumentationBug severity.VeryLow
|
Description: There is only one bug in the FAQ sections while there may be many more frequently asked questions regarding compatibility and file storage etc.
Image of bug:

Reason for bug level:
These are things that can be added easily and are only important to a small sample of the population who are the beginner users of the app who have no idea about operating systems etc.
<!--session: 1668154120243-dff9adb3-704a-4a87-b1b7-603816df992b-->
<!--Version: Web v3.4.4-->
|
1.0
|
FAQ sections does not have many FAQ's - Description: There is only one bug in the FAQ sections while there may be many more frequently asked questions regarding compatibility and file storage etc.
Image of bug:

Reason for bug level:
These are things that can be added easily and are only important to a small sample of the population who are the beginner users of the app who have no idea about operating systems etc.
<!--session: 1668154120243-dff9adb3-704a-4a87-b1b7-603816df992b-->
<!--Version: Web v3.4.4-->
|
non_process
|
faq sections does not have many faq s description there is only one bug in the faq sections while there may be many more frequently asked questions regarding compatibility and file storage etc image of bug reason for bug level these are things that can be added easily and are only important to a small sample of the population who are the beginner users of the app who have no idea about operating systems etc
| 0
|
6,672
| 9,789,424,237
|
IssuesEvent
|
2019-06-10 09:46:57
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
reopened
|
Meetings: multiple select date error
|
2.0.6 Fixed Process bug bug
|
go to meetings
open new meeting
click on multiple select
click on date filed
fill the fields Starting date and Ending date
click enter
the dates is not update

|
1.0
|
Meetings: multiple select date error - go to meetings
open new meeting
click on multiple select
click on date filed
fill the fields Starting date and Ending date
click enter
the dates is not update

|
process
|
meetings multiple select date error go to meetings open new meeting click on multiple select click on date filed fill the fields starting date and ending date click enter the dates is not update
| 1
|
22,594
| 31,818,256,917
|
IssuesEvent
|
2023-09-13 22:37:15
|
assimp/assimp
|
https://api.github.com/repos/assimp/assimp
|
closed
|
Bug: memory delete error.
|
Bug Postprocessing
|
error is occured when the postProcessing with ForceGenerateNormals
post option is
```
PostProcessSteps ppSteps = PostProcessSteps.None;
ppSteps |= PostProcessPreset.TargetRealTimeQuality;
ppSteps |= PostProcessSteps.GenerateSmoothNormals;
ppSteps |= PostProcessSteps.ForceGenerateNormals;
```
File: JoinVerticesProcess.cpp
Function:
```
void updateXMeshVertices(XMesh *pMesh, std::vector<Vertex> &uniqueVertices)
{
if (pMesh->mNormals) {
delete [] pMesh->mNormals; <= here ( memory exception ).
pMesh->mNormals = new aiVector3D[pMesh->mNumVertices];
for( unsigned int a = 0; a < pMesh->mNumVertices; a++) {
pMesh->mNormals[a] = uniqueVertices[a].normal;
}
}
}
```
I have fixed that.
code is under.
```
bool GenVertexNormalsProcess::GenMeshVertexNormals(aiMesh *pMesh, unsigned int meshIndex) {
if (nullptr != pMesh->mNormals) {
if (!force_) {
return false;
}
delete[] pMesh->mNormals;
pMesh->mNormals = nullptr; <- add code.
}
}
```
|
1.0
|
Bug: memory delete error. - error is occured when the postProcessing with ForceGenerateNormals
post option is
```
PostProcessSteps ppSteps = PostProcessSteps.None;
ppSteps |= PostProcessPreset.TargetRealTimeQuality;
ppSteps |= PostProcessSteps.GenerateSmoothNormals;
ppSteps |= PostProcessSteps.ForceGenerateNormals;
```
File: JoinVerticesProcess.cpp
Function:
```
void updateXMeshVertices(XMesh *pMesh, std::vector<Vertex> &uniqueVertices)
{
if (pMesh->mNormals) {
delete [] pMesh->mNormals; <= here ( memory exception ).
pMesh->mNormals = new aiVector3D[pMesh->mNumVertices];
for( unsigned int a = 0; a < pMesh->mNumVertices; a++) {
pMesh->mNormals[a] = uniqueVertices[a].normal;
}
}
}
```
I have fixed that.
code is under.
```
bool GenVertexNormalsProcess::GenMeshVertexNormals(aiMesh *pMesh, unsigned int meshIndex) {
if (nullptr != pMesh->mNormals) {
if (!force_) {
return false;
}
delete[] pMesh->mNormals;
pMesh->mNormals = nullptr; <- add code.
}
}
```
|
process
|
bug memory delete error error is occured when the postprocessing with forcegeneratenormals post option is postprocesssteps ppsteps postprocesssteps none ppsteps postprocesspreset targetrealtimequality ppsteps postprocesssteps generatesmoothnormals ppsteps postprocesssteps forcegeneratenormals file joinverticesprocess cpp function void updatexmeshvertices xmesh pmesh std vector uniquevertices if pmesh mnormals delete pmesh mnormals here memory exception pmesh mnormals new for unsigned int a a mnumvertices a pmesh mnormals uniquevertices normal i have fixed that code is under bool genvertexnormalsprocess genmeshvertexnormals aimesh pmesh unsigned int meshindex if nullptr pmesh mnormals if force return false delete pmesh mnormals pmesh mnormals nullptr add code
| 1
|
18,484
| 24,550,777,785
|
IssuesEvent
|
2022-10-12 12:27:00
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [Angular Upgrade] My account > Change password > New password > Password view icon should get displayed
|
Bug P1 Participant manager Process: Fixed Process: Tested dev
|
My account > Change password > New password > Password view icon should get displayed
**Note:** Issue needs to be fixed wherever password view icon is getting displayed
**AR:**

**ER:**

|
2.0
|
[PM] [Angular Upgrade] My account > Change password > New password > Password view icon should get displayed - My account > Change password > New password > Password view icon should get displayed
**Note:** Issue needs to be fixed wherever password view icon is getting displayed
**AR:**

**ER:**

|
process
|
my account change password new password password view icon should get displayed my account change password new password password view icon should get displayed note issue needs to be fixed wherever password view icon is getting displayed ar er
| 1
|
187,450
| 15,098,161,932
|
IssuesEvent
|
2021-02-07 21:32:50
|
GetPublii/Publii
|
https://api.github.com/repos/GetPublii/Publii
|
closed
|
Add checked/unchecked setting for cookie groups in the GDPR banner config
|
new feature user documentation needed
|
Sometimes there can be more than one cookies group which is enabled by default.
|
1.0
|
Add checked/unchecked setting for cookie groups in the GDPR banner config - Sometimes there can be more than one cookies group which is enabled by default.
|
non_process
|
add checked unchecked setting for cookie groups in the gdpr banner config sometimes there can be more than one cookies group which is enabled by default
| 0
|
95,392
| 16,096,553,777
|
IssuesEvent
|
2021-04-27 01:14:02
|
Thezone1975/choosealicense.com
|
https://api.github.com/repos/Thezone1975/choosealicense.com
|
opened
|
WS-2020-0091 (High) detected in http-proxy-1.17.0.tgz
|
security vulnerability
|
## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.17.0.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.17.0.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.17.0.tgz</a></p>
<p>Path to dependency file: /choosealicense.com/assets/vendor/clipboard/package.json</p>
<p>Path to vulnerable library: choosealicense.com/assets/vendor/clipboard/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.13.22.tgz (Root Library)
- :x: **http-proxy-1.17.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0091 (High) detected in http-proxy-1.17.0.tgz - ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.17.0.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.17.0.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.17.0.tgz</a></p>
<p>Path to dependency file: /choosealicense.com/assets/vendor/clipboard/package.json</p>
<p>Path to vulnerable library: choosealicense.com/assets/vendor/clipboard/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.13.22.tgz (Root Library)
- :x: **http-proxy-1.17.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in http proxy tgz ws high severity vulnerability vulnerable library http proxy tgz http proxying for the masses library home page a href path to dependency file choosealicense com assets vendor clipboard package json path to vulnerable library choosealicense com assets vendor clipboard node modules http proxy package json dependency hierarchy karma tgz root library x http proxy tgz vulnerable library vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy step up your open source security game with whitesource
| 0
|
14,899
| 18,291,533,875
|
IssuesEvent
|
2021-10-05 15:44:08
|
googleapis/python-bigquery-pandas
|
https://api.github.com/repos/googleapis/python-bigquery-pandas
|
closed
|
Dependency Dashboard
|
type: process api: bigquery
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/pandas-1.x -->[chore(deps): update dependency pandas to v1.3.3](../pull/397)
- [ ] <!-- rebase-branch=renovate/google-cloud-bigquery-2.x -->[chore(deps): update dependency google-cloud-bigquery to v2.28.0](../pull/396)
- [ ] <!-- rebase-branch=renovate/google-cloud-bigquery-storage-2.x -->[chore(deps): update dependency google-cloud-bigquery-storage to v2.9.0](../pull/399)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/pandas-1.x -->[chore(deps): update dependency pandas to v1.3.3](../pull/397)
- [ ] <!-- rebase-branch=renovate/google-cloud-bigquery-2.x -->[chore(deps): update dependency google-cloud-bigquery to v2.28.0](../pull/396)
- [ ] <!-- rebase-branch=renovate/google-cloud-bigquery-storage-2.x -->[chore(deps): update dependency google-cloud-bigquery-storage to v2.9.0](../pull/399)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any pull pull pull click on this checkbox to rebase all open prs at once check this box to trigger a request for renovate to run again on this repository
| 1
|
19,406
| 25,546,054,690
|
IssuesEvent
|
2022-11-29 18:54:23
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/cumulativetodelta] Output delta histograms include min and max when they shouldnt
|
bug help wanted priority:p2 processor/cumulativetodelta
|
### What happened?
## Description
The cumulativetodelta processor retains the min and max from the cumulative histograms when they shouldn't. Its semantically wrong to include cumulative min and max on a resulting delta.
## Steps to Reproduce
Its quite clear in the [source code](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/cumulativetodeltaprocessor/processor.go#L214-L220) that the min and max fields are retained from the data point. However, I've also pushed up a commit with a test case that [demonstrates it](https://github.com/jack-berg/opentelemetry-collector-contrib/commit/c512296b60f70155516d258f4d338d5e52b284c1).
## Expected Result
No values for resulting delta histogram min and max.
## Actual Result
Cumulative histogram min and max retained.
### Collector version
head
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
Its not super easy to resolve this because there isn't an obvious way to clear the min and max values. The [HistogramDataPoint](https://github.com/open-telemetry/opentelemetry-collector/blob/main/pdata/pmetric/generated_metrics.go#L1570-L1600) has methods to set min and max, but those require a `float64`. No method exists to set as nil, or clear.
|
1.0
|
[processor/cumulativetodelta] Output delta histograms include min and max when they shouldnt - ### What happened?
## Description
The cumulativetodelta processor retains the min and max from the cumulative histograms when they shouldn't. Its semantically wrong to include cumulative min and max on a resulting delta.
## Steps to Reproduce
Its quite clear in the [source code](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/cumulativetodeltaprocessor/processor.go#L214-L220) that the min and max fields are retained from the data point. However, I've also pushed up a commit with a test case that [demonstrates it](https://github.com/jack-berg/opentelemetry-collector-contrib/commit/c512296b60f70155516d258f4d338d5e52b284c1).
## Expected Result
No values for resulting delta histogram min and max.
## Actual Result
Cumulative histogram min and max retained.
### Collector version
head
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
Its not super easy to resolve this because there isn't an obvious way to clear the min and max values. The [HistogramDataPoint](https://github.com/open-telemetry/opentelemetry-collector/blob/main/pdata/pmetric/generated_metrics.go#L1570-L1600) has methods to set min and max, but those require a `float64`. No method exists to set as nil, or clear.
|
process
|
output delta histograms include min and max when they shouldnt what happened description the cumulativetodelta processor retains the min and max from the cumulative histograms when they shouldn t its semantically wrong to include cumulative min and max on a resulting delta steps to reproduce its quite clear in the that the min and max fields are retained from the data point however i ve also pushed up a commit with a test case that expected result no values for resulting delta histogram min and max actual result cumulative histogram min and max retained collector version head environment information environment os e g ubuntu compiler if manually compiled e g go opentelemetry collector configuration no response log output no response additional context its not super easy to resolve this because there isn t an obvious way to clear the min and max values the has methods to set min and max but those require a no method exists to set as nil or clear
| 1
|
222,541
| 17,081,368,035
|
IssuesEvent
|
2021-07-08 05:55:57
|
POSSF/POSSF
|
https://api.github.com/repos/POSSF/POSSF
|
closed
|
Typo & translate
|
documentation
|

صفحهی اول اشکالات متعددی از نظر غلط املایی و... دارد.
اهداف، حامیان و برگزار کنندگان همایش نیز به طور مشخص یا حداقل واضح قابل رویت نیستند.
|
1.0
|
Typo & translate - 
صفحهی اول اشکالات متعددی از نظر غلط املایی و... دارد.
اهداف، حامیان و برگزار کنندگان همایش نیز به طور مشخص یا حداقل واضح قابل رویت نیستند.
|
non_process
|
typo translate صفحهی اول اشکالات متعددی از نظر غلط املایی و دارد اهداف، حامیان و برگزار کنندگان همایش نیز به طور مشخص یا حداقل واضح قابل رویت نیستند
| 0
|
8,280
| 11,438,356,710
|
IssuesEvent
|
2020-02-05 03:12:55
|
GoogleContainerTools/kpt
|
https://api.github.com/repos/GoogleContainerTools/kpt
|
closed
|
Setup domain to work with the go module alias'
|
process
|
Need to be able to `go get kpt.dev` and `go get lib.kpt.dev` -- requires configuring the domain to respond with the correct go headers
See: https://golang.org/cmd/go/#hdr-Remote_import_paths
Also see:
https://github.com/GoogleCloudPlatform/govanityurls
|
1.0
|
Setup domain to work with the go module alias' - Need to be able to `go get kpt.dev` and `go get lib.kpt.dev` -- requires configuring the domain to respond with the correct go headers
See: https://golang.org/cmd/go/#hdr-Remote_import_paths
Also see:
https://github.com/GoogleCloudPlatform/govanityurls
|
process
|
setup domain to work with the go module alias need to be able to go get kpt dev and go get lib kpt dev requires configuring the domain to respond with the correct go headers see also see
| 1
|
440,940
| 12,706,526,986
|
IssuesEvent
|
2020-06-23 07:21:24
|
AGROFIMS/hagrofims
|
https://api.github.com/repos/AGROFIMS/hagrofims
|
opened
|
Add more soil info in Site description
|
medium priority site information
|
Add a section to give info about soil tests done prior to the experiment start, in Site description.
|
1.0
|
Add more soil info in Site description - Add a section to give info about soil tests done prior to the experiment start, in Site description.
|
non_process
|
add more soil info in site description add a section to give info about soil tests done prior to the experiment start in site description
| 0
|
25,359
| 18,538,771,420
|
IssuesEvent
|
2021-10-21 14:09:20
|
pythonitalia/pycon
|
https://api.github.com/repos/pythonitalia/pycon
|
closed
|
Add a shared local .env file in the repo
|
infrastructure
|
So who wants to work on this project doesn't need to know what to put in a .env file
|
1.0
|
Add a shared local .env file in the repo - So who wants to work on this project doesn't need to know what to put in a .env file
|
non_process
|
add a shared local env file in the repo so who wants to work on this project doesn t need to know what to put in a env file
| 0
|
230,922
| 18,724,840,948
|
IssuesEvent
|
2021-11-03 15:20:00
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: X-Pack Case API Integration Tests.x-pack/test/case_api_integration/spaces_only/tests/trial/cases/push_case·ts - cases spaces only enabled: trial push_case "after each" hook for "should push a case in space1"
|
loe:hours failed-test impact:low Team: SecuritySolution
|
A test failed on a tracked branch
```
Error: ECONNREFUSED: Connection refused
at Test.assert (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/lib/test.js:165:15)
at assert (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/lib/test.js:131:12)
at /dev/shm/workspace/parallel/11/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
at ClientRequest.<anonymous> (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:646:10)
at Socket.socketErrorListener (_http_client.js:475:9)
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/15912/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Case API Integration Tests.x-pack/test/case_api_integration/spaces_only/tests/trial/cases/push_case·ts","test.name":"cases spaces only enabled: trial push_case \"after each\" hook for \"should push a case in space1\"","test.failCount":1}} -->
|
1.0
|
Failing test: X-Pack Case API Integration Tests.x-pack/test/case_api_integration/spaces_only/tests/trial/cases/push_case·ts - cases spaces only enabled: trial push_case "after each" hook for "should push a case in space1" - A test failed on a tracked branch
```
Error: ECONNREFUSED: Connection refused
at Test.assert (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/lib/test.js:165:15)
at assert (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/lib/test.js:131:12)
at /dev/shm/workspace/parallel/11/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
at ClientRequest.<anonymous> (/dev/shm/workspace/parallel/11/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:646:10)
at Socket.socketErrorListener (_http_client.js:475:9)
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/15912/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Case API Integration Tests.x-pack/test/case_api_integration/spaces_only/tests/trial/cases/push_case·ts","test.name":"cases spaces only enabled: trial push_case \"after each\" hook for \"should push a case in space1\"","test.failCount":1}} -->
|
non_process
|
failing test x pack case api integration tests x pack test case api integration spaces only tests trial cases push case·ts cases spaces only enabled trial push case after each hook for should push a case in a test failed on a tracked branch error econnrefused connection refused at test assert dev shm workspace parallel kibana node modules supertest lib test js at assert dev shm workspace parallel kibana node modules supertest lib test js at dev shm workspace parallel kibana node modules supertest lib test js at test request callback dev shm workspace parallel kibana node modules supertest node modules superagent lib node index js at clientrequest dev shm workspace parallel kibana node modules supertest node modules superagent lib node index js at socket socketerrorlistener http client js at emiterrornt internal streams destroy js at emiterrorclosent internal streams destroy js at processticksandrejections internal process task queues js first failure
| 0
|
21,814
| 30,316,587,015
|
IssuesEvent
|
2023-07-10 15:58:25
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - measurementUnit: use at least one SI example
|
Term - change Class - MeasurementOrFact non-normative Process - complete
|
## Term change
* Submitter: @damianooldoni @peterdesmet
* Efficacy Justification (why is this change necessary?): The comments for `measurementUnit` state: "Recommended best practice is to use the International System of Units (SI)." However, none of the examples (`mm`, `C`, `km`, `ha`) are one of the 7 base [SI units](https://en.wikipedia.org/wiki/International_System_of_Units), although mm and km are **SI units** with prefixes. It would be good to have at least `m` (meter) as first example, which use is widespread as `measurementUnit`. In addition, it would be good to have an example of a **squared unit** (e.g. `km²`) showing that a superscripted 2 is available in UTF8.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): no explicit demand, but use of `m` is widespread, so would be helpful to have that as a base SI example
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: yes, although currently no examples are listed for https://dwc.tdwg.org/terms/#dwciri:measurementUnit
Current Term definition: https://dwc.tdwg.org/list/#dwc_measurementUnit
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): measurementUnit
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): MeasurementOrFact
* Definition of the term (normative): The units associated with the measurementValue.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use the International System of Units (SI) **where applicable**.
* Examples (not normative): ~`mm`, `C`, `km`, `ha`~ → **`m`, `g`, `l`, `C`, `mm`, `km²`**
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/measurementUnit-2018-09-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Gathering/SiteMeasurementsOrFacts/MeasurementOrFact/MeasurementOrFactAtomised/UnitOfMeasurement or DataSets/DataSet/Units/Unit/Gathering/SiteMeasurementsOrFacts/MeasurementOrFact/MeasurementOrFactAtomised/UnitOfMeasurement
|
1.0
|
Change term - measurementUnit: use at least one SI example - ## Term change
* Submitter: @damianooldoni @peterdesmet
* Efficacy Justification (why is this change necessary?): The comments for `measurementUnit` state: "Recommended best practice is to use the International System of Units (SI)." However, none of the examples (`mm`, `C`, `km`, `ha`) are one of the 7 base [SI units](https://en.wikipedia.org/wiki/International_System_of_Units), although mm and km are **SI units** with prefixes. It would be good to have at least `m` (meter) as first example, which use is widespread as `measurementUnit`. In addition, it would be good to have an example of a **squared unit** (e.g. `km²`) showing that a superscripted 2 is available in UTF8.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): no explicit demand, but use of `m` is widespread, so would be helpful to have that as a base SI example
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: yes, although currently no examples are listed for https://dwc.tdwg.org/terms/#dwciri:measurementUnit
Current Term definition: https://dwc.tdwg.org/list/#dwc_measurementUnit
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): measurementUnit
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): MeasurementOrFact
* Definition of the term (normative): The units associated with the measurementValue.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use the International System of Units (SI) **where applicable**.
* Examples (not normative): ~`mm`, `C`, `km`, `ha`~ → **`m`, `g`, `l`, `C`, `mm`, `km²`**
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/measurementUnit-2018-09-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Gathering/SiteMeasurementsOrFacts/MeasurementOrFact/MeasurementOrFactAtomised/UnitOfMeasurement or DataSets/DataSet/Units/Unit/Gathering/SiteMeasurementsOrFacts/MeasurementOrFact/MeasurementOrFactAtomised/UnitOfMeasurement
|
process
|
change term measurementunit use at least one si example term change submitter damianooldoni peterdesmet efficacy justification why is this change necessary the comments for measurementunit state recommended best practice is to use the international system of units si however none of the examples mm c km ha are one of the base although mm and km are si units with prefixes it would be good to have at least m meter as first example which use is widespread as measurementunit in addition it would be good to have an example of a squared unit e g km² showing that a superscripted is available in demand justification if the change is semantic in nature name at least two organizations that independently need this term no explicit demand but use of m is widespread so would be helpful to have that as a base si example stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version yes although currently no examples are listed for current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes measurementunit organized in class e g occurrence event location taxon measurementorfact definition of the term normative the units associated with the measurementvalue usage comments recommendations regarding content etc not normative recommended best practice is to use the international system of units si where applicable examples not normative mm c km ha → m g l c mm km² refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative abcd xpath of the equivalent term in abcd or efg not normative datasets dataset units unit gathering sitemeasurementsorfacts measurementorfact measurementorfactatomised unitofmeasurement or datasets dataset units unit gathering sitemeasurementsorfacts measurementorfact measurementorfactatomised unitofmeasurement
| 1
|
17,386
| 23,202,597,633
|
IssuesEvent
|
2022-08-01 23:41:20
|
UMEP-dev/SuPy
|
https://api.github.com/repos/UMEP-dev/SuPy
|
closed
|
Timestep issue when running SuPy with 1min forcing data
|
pre-processing
|
When running SUEWS with input data at 60s resolution, the model timestep is set to 60s, and the output is set to 60s, the resulting output is given for every 30s. Same happens when running the model for a 5min timestep with input data at 5min resolution. The results are then given every 2.5min. The results of these half timesteps looks very strange, with extremely high values of longwave out .
The figure show results while running SUEWS with 5min input data, with a timestep of 5min and the output is set to be given at 5min but are given at every 2.5min.

|
1.0
|
Timestep issue when running SuPy with 1min forcing data - When running SUEWS with input data at 60s resolution, the model timestep is set to 60s, and the output is set to 60s, the resulting output is given for every 30s. Same happens when running the model for a 5min timestep with input data at 5min resolution. The results are then given every 2.5min. The results of these half timesteps looks very strange, with extremely high values of longwave out .
The figure show results while running SUEWS with 5min input data, with a timestep of 5min and the output is set to be given at 5min but are given at every 2.5min.

|
process
|
timestep issue when running supy with forcing data when running suews with input data at resolution the model timestep is set to and the output is set to the resulting output is given for every same happens when running the model for a timestep with input data at resolution the results are then given every the results of these half timesteps looks very strange with extremely high values of longwave out the figure show results while running suews with input data with a timestep of and the output is set to be given at but are given at every
| 1
|
16,099
| 20,271,295,266
|
IssuesEvent
|
2022-02-15 16:25:02
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Question on 16-bit x86: STOSD.REP ES:EDI
|
Feature: Processor/x86
|
The bytes ``66 f2 67 ab`` disassemble to STOSD.REP ES:EDI on 16-bit x86. The resulting Pcode mentions the registers EDI, EAX, ECX. Is this a bug?
|
1.0
|
Question on 16-bit x86: STOSD.REP ES:EDI - The bytes ``66 f2 67 ab`` disassemble to STOSD.REP ES:EDI on 16-bit x86. The resulting Pcode mentions the registers EDI, EAX, ECX. Is this a bug?
|
process
|
question on bit stosd rep es edi the bytes ab disassemble to stosd rep es edi on bit the resulting pcode mentions the registers edi eax ecx is this a bug
| 1
|
67,188
| 27,748,517,924
|
IssuesEvent
|
2023-03-15 18:48:08
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
closed
|
SDN - Fix and add static KLAB2 STOR dhcpd include to GitHub repo
|
*team/ DXC* tech/automation *team/ ops and shared services* NSXT/SDN
|
**Describe the issue**
During the course of an emergency RFC involving DNS issues needing us to update dhcpd.conf on all our Openshift clusters, it was discovered that the unique situation of KLAB2 STOR network is not persistent, and thus when a refresh of KLAB's dhcpd.conf configuration takes place from regular IDIR home folder versus a specific folder as root, then KLAB2's STOR entries are not included in KLAB's dhcpd.conf, thus resulting in missing entries, which over a longer period of time, would have resulted in KLAB2 losing all of its IP addresses used to access NetApp Storage.
This ticket will address reviewing and cleaning this up so that re-running KLAB's playbook for refreshing dhcpd.conf will not lose these KLAB2 IP addresses later on when the dhcp lease expires on the assigned IP addresses.
**Additional context**
KLAB2 has a NetApp instance that is carved off of the KLAB NetApp instance. Because KLAB's NetApp instance is a virtual instead of physical appliance, the KLAB2 NetApp instance must be on the same network VLAN as KLAB's NetApp instance. KLAB-UTIL is currently the provider of DHCP services to the KLAB/KLAB2 NetApp network VLAN, thus we can't let KLAB2-UTIL also attempt DHCP services for this same network range. Thus what happens is that running the playbook on KLAB2-UTIL creates a stub file that we then copy over to KLAB-UTIL to be read in and thus inject the additional network definitions for KLAB2 NetApp access from the KLAB-UTIL server. This ticket will serve to ensure that stub file generated is now sourced from GitHub repo instead of being manually copied over between servers.
This is not an issue with EMERALD. EMERALD NetApp, like KLAB2 NetApp, is an instance off of another NetApp (SILVER in this case), but since SILVER NetApp is a physical instead of virtual appliance, we are able/allowed to assign EMERALD NetAPP its unique and separate network VLAN, thus avoiding this split-dhcpd configuration as experienced by KLAB/KLAB2.
**How does this benefit the users of our platform?**
Fixes of this nature improve the stability of our Openshift platforms.
**Definition of done**
- [x] Review and refresh how current roles are handling the dhcpd.conf.
- [x] Investigate how best we can include the KLAB2 dhcpd snippet into the GitHub repo but ensure other clusters are not reading in the KLAB2 content as well.
- [x] Implement and test any required changes.
|
1.0
|
SDN - Fix and add static KLAB2 STOR dhcpd include to GitHub repo - **Describe the issue**
During the course of an emergency RFC involving DNS issues needing us to update dhcpd.conf on all our Openshift clusters, it was discovered that the unique situation of KLAB2 STOR network is not persistent, and thus when a refresh of KLAB's dhcpd.conf configuration takes place from regular IDIR home folder versus a specific folder as root, then KLAB2's STOR entries are not included in KLAB's dhcpd.conf, thus resulting in missing entries, which over a longer period of time, would have resulted in KLAB2 losing all of its IP addresses used to access NetApp Storage.
This ticket will address reviewing and cleaning this up so that re-running KLAB's playbook for refreshing dhcpd.conf will not lose these KLAB2 IP addresses later on when the dhcp lease expires on the assigned IP addresses.
**Additional context**
KLAB2 has a NetApp instance that is carved off of the KLAB NetApp instance. Because KLAB's NetApp instance is a virtual instead of physical appliance, the KLAB2 NetApp instance must be on the same network VLAN as KLAB's NetApp instance. KLAB-UTIL is currently the provider of DHCP services to the KLAB/KLAB2 NetApp network VLAN, thus we can't let KLAB2-UTIL also attempt DHCP services for this same network range. Thus what happens is that running the playbook on KLAB2-UTIL creates a stub file that we then copy over to KLAB-UTIL to be read in and thus inject the additional network definitions for KLAB2 NetApp access from the KLAB-UTIL server. This ticket will serve to ensure that stub file generated is now sourced from GitHub repo instead of being manually copied over between servers.
This is not an issue with EMERALD. EMERALD NetApp, like KLAB2 NetApp, is an instance off of another NetApp (SILVER in this case), but since SILVER NetApp is a physical instead of virtual appliance, we are able/allowed to assign EMERALD NetAPP its unique and separate network VLAN, thus avoiding this split-dhcpd configuration as experienced by KLAB/KLAB2.
**How does this benefit the users of our platform?**
Fixes of this nature improve the stability of our Openshift platforms.
**Definition of done**
- [x] Review and refresh how current roles are handling the dhcpd.conf.
- [x] Investigate how best we can include the KLAB2 dhcpd snippet into the GitHub repo but ensure other clusters are not reading in the KLAB2 content as well.
- [x] Implement and test any required changes.
|
non_process
|
sdn fix and add static stor dhcpd include to github repo describe the issue during the course of an emergency rfc involving dns issues needing us to update dhcpd conf on all our openshift clusters it was discovered that the unique situation of stor network is not persistent and thus when a refresh of klab s dhcpd conf configuration takes place from regular idir home folder versus a specific folder as root then s stor entries are not included in klab s dhcpd conf thus resulting in missing entries which over a longer period of time would have resulted in losing all of its ip addresses used to access netapp storage this ticket will address reviewing and cleaning this up so that re running klab s playbook for refreshing dhcpd conf will not lose these ip addresses later on when the dhcp lease expires on the assigned ip addresses additional context has a netapp instance that is carved off of the klab netapp instance because klab s netapp instance is a virtual instead of physical appliance the netapp instance must be on the same network vlan as klab s netapp instance klab util is currently the provider of dhcp services to the klab netapp network vlan thus we can t let util also attempt dhcp services for this same network range thus what happens is that running the playbook on util creates a stub file that we then copy over to klab util to be read in and thus inject the additional network definitions for netapp access from the klab util server this ticket will serve to ensure that stub file generated is now sourced from github repo instead of being manually copied over between servers this is not an issue with emerald emerald netapp like netapp is an instance off of another netapp silver in this case but since silver netapp is a physical instead of virtual appliance we are able allowed to assign emerald netapp its unique and separate network vlan thus avoiding this split dhcpd configuration as experienced by klab how does this benefit the users of our platform fixes of this nature improve the stability of our openshift platforms definition of done review and refresh how current roles are handling the dhcpd conf investigate how best we can include the dhcpd snippet into the github repo but ensure other clusters are not reading in the content as well implement and test any required changes
| 0
|
10,782
| 13,608,980,336
|
IssuesEvent
|
2020-09-23 03:55:33
|
googleapis/java-securitycenter
|
https://api.github.com/repos/googleapis/java-securitycenter
|
closed
|
Dependency Dashboard
|
api: securitycenter type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-securitycenter-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-securitycenter to v1.2.0
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-pubsub-1.x -->deps: update dependency com.google.cloud:google-cloud-pubsub to v1.108.1
- [ ] <!-- rebase-branch=renovate/com.google.protobuf-protobuf-java-util-3.x -->deps: update dependency com.google.protobuf:protobuf-java-util to v3.13.0
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-securitycenter-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-securitycenter to v1.2.0
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-pubsub-1.x -->deps: update dependency com.google.cloud:google-cloud-pubsub to v1.108.1
- [ ] <!-- rebase-branch=renovate/com.google.protobuf-protobuf-java-util-3.x -->deps: update dependency com.google.protobuf:protobuf-java-util to v3.13.0
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to chore deps update dependency com google cloud google cloud securitycenter to deps update dependency com google cloud google cloud pubsub to deps update dependency com google protobuf protobuf java util to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository
| 1
|
1,433
| 3,996,531,667
|
IssuesEvent
|
2016-05-10 19:05:35
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
opened
|
planner: learn from mistakes
|
component:data processing enhancement priority: normal
|
The planner should somehow keep track of the steps that passed and failed in the past. This information could be used to deprioritize steps that failed recently or often.
|
1.0
|
planner: learn from mistakes - The planner should somehow keep track of the steps that passed and failed in the past. This information could be used to deprioritize steps that failed recently or often.
|
process
|
planner learn from mistakes the planner should somehow keep track of the steps that passed and failed in the past this information could be used to deprioritize steps that failed recently or often
| 1
|
148,570
| 13,239,857,280
|
IssuesEvent
|
2020-08-19 04:46:49
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
[Android] Release Notes for Android 1.12.x Release
|
OS/Android QA/No documentation :writing_hand: release-notes/exclude
|
- Added Sync v2. ([#10203](https://github.com/brave/brave-browser/issues/10203))
- Enabled the "prefetch-privacy-changes" flag by default under brave://flags. ([#8319](https://github.com/brave/brave-browser/issues/8319))
- Added support for state level ads delivery. ([#9200](https://github.com/brave/brave-browser/issues/9200))
- Added the date of installation to the stats ping. ([#10061](https://github.com/brave/brave-browser/issues/10061))
- Added farbling for WebGL API when "Fingerprinting blocking" is set to "strict". ([#10214](https://github.com/brave/brave-browser/issues/10214))
- Updated referrer policy to improve privacy and prevent web compatibility issues. ([#8696](https://github.com/brave/brave-browser/issues/8696))
- Updated canvas maximum farbling to match balanced farbling. ([#11067](https://github.com/brave/brave-browser/issues/11067))
- Updated pre-populated search engine list. ([#11089](https://github.com/brave/brave-browser/issues/11089))
- Improved web compatibility by changing behavior of local and session storage in third-party frames to not throw an exception when storage is blocked. ([#9578](https://github.com/brave/brave-browser/issues/9758))
- Reduced size and improved performance of the publisher list for Brave Rewards. ([#10836](https://github.com/brave/brave-browser/issues/10836))
- Reduced the frequency at which promotions are fetched for rewards. ([#9513](https://github.com/brave/brave-browser/issues/9513))
- Fixed issue where "Bat Ads Service" was running when Brave Ads were not enabled. ([#9196](https://github.com/brave/brave-browser/issues/9196))
- Fixed crash with Brave Ads when opening a new tab in certain cases. ([#9393](https://github.com/brave/brave-browser/issues/9393))
- Fixed issue where "Bat Ledger Service" was running when Brave Rewards was not enabled. ([#9526](https://github.com/brave/brave-browser/issues/9526))
- Fixed file-path for cookies as reported on HackerOne by kanytu. ([#9818](https://github.com/brave/brave-browser/issues/9818))
- Fixed "Estimated pending rewards" not being refreshed after claiming an ad grant. ([#10094](https://github.com/brave/brave-browser/issues/10094))
- Fixed ads state being removed when Brave Ads are disabled. ([#10097](https://github.com/brave/brave-browser/issues/10097))
- Fixed clearing URL bar when in edit mode. ([#10524](https://github.com/brave/brave-browser/issues/10524))
- Fixed ads not being enabled on clean install when enabling rewards. ([#10526](https://github.com/brave/brave-browser/issues/10526))
- Fixed state level ads being shown on versions without support for state level ads delivery. ([#10557](https://github.com/brave/brave-browser/issues/10557))
- Disabled ad notifications on wearables. ([#9397](https://github.com/brave/brave-browser/issues/9397))
- Upgrade to Chromium Chromium 84.0.4147.125. ([#11153](https://github.com/brave/brave-browser/issues/11153))
|
1.0
|
[Android] Release Notes for Android 1.12.x Release - - Added Sync v2. ([#10203](https://github.com/brave/brave-browser/issues/10203))
- Enabled the "prefetch-privacy-changes" flag by default under brave://flags. ([#8319](https://github.com/brave/brave-browser/issues/8319))
- Added support for state level ads delivery. ([#9200](https://github.com/brave/brave-browser/issues/9200))
- Added the date of installation to the stats ping. ([#10061](https://github.com/brave/brave-browser/issues/10061))
- Added farbling for WebGL API when "Fingerprinting blocking" is set to "strict". ([#10214](https://github.com/brave/brave-browser/issues/10214))
- Updated referrer policy to improve privacy and prevent web compatibility issues. ([#8696](https://github.com/brave/brave-browser/issues/8696))
- Updated canvas maximum farbling to match balanced farbling. ([#11067](https://github.com/brave/brave-browser/issues/11067))
- Updated pre-populated search engine list. ([#11089](https://github.com/brave/brave-browser/issues/11089))
- Improved web compatibility by changing behavior of local and session storage in third-party frames to not throw an exception when storage is blocked. ([#9578](https://github.com/brave/brave-browser/issues/9758))
- Reduced size and improved performance of the publisher list for Brave Rewards. ([#10836](https://github.com/brave/brave-browser/issues/10836))
- Reduced the frequency at which promotions are fetched for rewards. ([#9513](https://github.com/brave/brave-browser/issues/9513))
- Fixed issue where "Bat Ads Service" was running when Brave Ads were not enabled. ([#9196](https://github.com/brave/brave-browser/issues/9196))
- Fixed crash with Brave Ads when opening a new tab in certain cases. ([#9393](https://github.com/brave/brave-browser/issues/9393))
- Fixed issue where "Bat Ledger Service" was running when Brave Rewards was not enabled. ([#9526](https://github.com/brave/brave-browser/issues/9526))
- Fixed file-path for cookies as reported on HackerOne by kanytu. ([#9818](https://github.com/brave/brave-browser/issues/9818))
- Fixed "Estimated pending rewards" not being refreshed after claiming an ad grant. ([#10094](https://github.com/brave/brave-browser/issues/10094))
- Fixed ads state being removed when Brave Ads are disabled. ([#10097](https://github.com/brave/brave-browser/issues/10097))
- Fixed clearing URL bar when in edit mode. ([#10524](https://github.com/brave/brave-browser/issues/10524))
- Fixed ads not being enabled on clean install when enabling rewards. ([#10526](https://github.com/brave/brave-browser/issues/10526))
- Fixed state level ads being shown on versions without support for state level ads delivery. ([#10557](https://github.com/brave/brave-browser/issues/10557))
- Disabled ad notifications on wearables. ([#9397](https://github.com/brave/brave-browser/issues/9397))
- Upgrade to Chromium Chromium 84.0.4147.125. ([#11153](https://github.com/brave/brave-browser/issues/11153))
|
non_process
|
release notes for android x release added sync enabled the prefetch privacy changes flag by default under brave flags added support for state level ads delivery added the date of installation to the stats ping added farbling for webgl api when fingerprinting blocking is set to strict updated referrer policy to improve privacy and prevent web compatibility issues updated canvas maximum farbling to match balanced farbling updated pre populated search engine list improved web compatibility by changing behavior of local and session storage in third party frames to not throw an exception when storage is blocked reduced size and improved performance of the publisher list for brave rewards reduced the frequency at which promotions are fetched for rewards fixed issue where bat ads service was running when brave ads were not enabled fixed crash with brave ads when opening a new tab in certain cases fixed issue where bat ledger service was running when brave rewards was not enabled fixed file path for cookies as reported on hackerone by kanytu fixed estimated pending rewards not being refreshed after claiming an ad grant fixed ads state being removed when brave ads are disabled fixed clearing url bar when in edit mode fixed ads not being enabled on clean install when enabling rewards fixed state level ads being shown on versions without support for state level ads delivery disabled ad notifications on wearables upgrade to chromium chromium
| 0
|
4,256
| 7,189,055,998
|
IssuesEvent
|
2018-02-02 12:35:28
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
chifra improvements
|
apps-chifra status-inprocess type-enhancement
|
Chifra should allow for hitting enter on the names and if empty then name the contracts simply by dao_1, dao-2 etc. Also first block should be non-empty, but if second is empty it should ask if you want to use same block for all--or better, alow to sepcify block on command line and if present don't do anyhting.
|
1.0
|
chifra improvements - Chifra should allow for hitting enter on the names and if empty then name the contracts simply by dao_1, dao-2 etc. Also first block should be non-empty, but if second is empty it should ask if you want to use same block for all--or better, alow to sepcify block on command line and if present don't do anyhting.
|
process
|
chifra improvements chifra should allow for hitting enter on the names and if empty then name the contracts simply by dao dao etc also first block should be non empty but if second is empty it should ask if you want to use same block for all or better alow to sepcify block on command line and if present don t do anyhting
| 1
|
4,756
| 5,258,750,651
|
IssuesEvent
|
2017-02-03 00:30:52
|
gahansen/Albany
|
https://api.github.com/repos/gahansen/Albany
|
opened
|
Implement Teko preconditioner for matrix-free GMRES + Schwarz
|
Infrastructure LCM
|
This is needed to improve convergence for matrix-free GMRES + Schwarz (= monolithic Schwarz), and will require changes in Piro. I will need some help from Trilinos experts, e.g., @rppawlo , on how to hook this up, as it is nontrivial.
|
1.0
|
Implement Teko preconditioner for matrix-free GMRES + Schwarz - This is needed to improve convergence for matrix-free GMRES + Schwarz (= monolithic Schwarz), and will require changes in Piro. I will need some help from Trilinos experts, e.g., @rppawlo , on how to hook this up, as it is nontrivial.
|
non_process
|
implement teko preconditioner for matrix free gmres schwarz this is needed to improve convergence for matrix free gmres schwarz monolithic schwarz and will require changes in piro i will need some help from trilinos experts e g rppawlo on how to hook this up as it is nontrivial
| 0
|
846
| 2,517,129,666
|
IssuesEvent
|
2015-01-16 12:01:27
|
GoogleChrome/webrtc
|
https://api.github.com/repos/GoogleChrome/webrtc
|
opened
|
Make sure all manual-test/* files pass grunt
|
bug enhancement manual-test
|
Currently the manual-test folder is excluded from lint checking etc. Need to make sure all files adhere to the guidelines.
|
1.0
|
Make sure all manual-test/* files pass grunt - Currently the manual-test folder is excluded from lint checking etc. Need to make sure all files adhere to the guidelines.
|
non_process
|
make sure all manual test files pass grunt currently the manual test folder is excluded from lint checking etc need to make sure all files adhere to the guidelines
| 0
|
18,612
| 24,579,236,192
|
IssuesEvent
|
2022-10-13 14:29:25
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] [iOS] Study resources screen > Participants should be able to view and download the data-sharing consent screenshot, in the mobile app
|
Bug P0 iOS Process: Fixed Process: Tested dev
|
**Description:**
Pre-condition: The study should be created in SB by enabling data sharing permission
**Steps:**
1. Install the mobile app
3. Sign in / Sign up
4. Enroll to the study
5. Navigate to 'Study Resources' screen and Verify
**AR:** Participants are not able to view and download the data-sharing consent screenshot, if applicable, from the Study Resources screen
**ER:** Participants should be able to view and download the data-sharing consent screenshot, if applicable, from the Study Resources screen
|
2.0
|
[Consent API] [iOS] Study resources screen > Participants should be able to view and download the data-sharing consent screenshot, in the mobile app - **Description:**
Pre-condition: The study should be created in SB by enabling data sharing permission
**Steps:**
1. Install the mobile app
3. Sign in / Sign up
4. Enroll to the study
5. Navigate to 'Study Resources' screen and Verify
**AR:** Participants are not able to view and download the data-sharing consent screenshot, if applicable, from the Study Resources screen
**ER:** Participants should be able to view and download the data-sharing consent screenshot, if applicable, from the Study Resources screen
|
process
|
study resources screen participants should be able to view and download the data sharing consent screenshot in the mobile app description pre condition the study should be created in sb by enabling data sharing permission steps install the mobile app sign in sign up enroll to the study navigate to study resources screen and verify ar participants are not able to view and download the data sharing consent screenshot if applicable from the study resources screen er participants should be able to view and download the data sharing consent screenshot if applicable from the study resources screen
| 1
|
64,953
| 14,703,898,074
|
IssuesEvent
|
2021-01-04 15:42:24
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[SECURITY SOLUTION] Timeline updates may not be saved
|
Feature:Timeline Team: SecuritySolution Team:Threat Hunting bug fixed impact:high v7.11.0
|
Kibana version:
7.9.0
Elasticsearch version:
7.9.0
Describe the bug:
When you update a Timeline and then immediately close it or navigate to a different page, the Timeline might not be saved.
1. Navigate to Security > Timelines
2. Create a new timeline
3. Add a title
4. Change the order of Timeline columns
5. Immediately navigate to Security > Overview
Current behavior:
The new column order is not saved.
Expected behavior:
The new column order is saved.
|
True
|
[SECURITY SOLUTION] Timeline updates may not be saved - Kibana version:
7.9.0
Elasticsearch version:
7.9.0
Describe the bug:
When you update a Timeline and then immediately close it or navigate to a different page, the Timeline might not be saved.
1. Navigate to Security > Timelines
2. Create a new timeline
3. Add a title
4. Change the order of Timeline columns
5. Immediately navigate to Security > Overview
Current behavior:
The new column order is not saved.
Expected behavior:
The new column order is saved.
|
non_process
|
timeline updates may not be saved kibana version elasticsearch version describe the bug when you update a timeline and then immediately close it or navigate to a different page the timeline might not be saved navigate to security timelines create a new timeline add a title change the order of timeline columns immediately navigate to security overview current behavior the new column order is not saved expected behavior the new column order is saved
| 0
|
13,137
| 15,556,871,970
|
IssuesEvent
|
2021-03-16 08:25:12
|
ethereumclassic/ECIPs
|
https://api.github.com/repos/ethereumclassic/ECIPs
|
closed
|
Kevin's ECIP Editor Permissions
|
meta:1 governance meta:3 process
|
It looks like @developerkevin has ECIP Editor permissions, but is not listed in the ECIP-1000. Can we get that updated?
|
1.0
|
Kevin's ECIP Editor Permissions - It looks like @developerkevin has ECIP Editor permissions, but is not listed in the ECIP-1000. Can we get that updated?
|
process
|
kevin s ecip editor permissions it looks like developerkevin has ecip editor permissions but is not listed in the ecip can we get that updated
| 1
|
122,537
| 10,226,478,689
|
IssuesEvent
|
2019-08-16 17:56:28
|
fossasia/open-event-frontend
|
https://api.github.com/repos/fossasia/open-event-frontend
|
closed
|
Tickets are required for publishing event error even when tickets are present
|
bug weekly-testing
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
"Tickets are required for publishing event" is popping up even when tickets are present.
Error:

Tickets :

**Additional context**
<!-- Add any other context about the problem here. -->
Working on it
|
1.0
|
Tickets are required for publishing event error even when tickets are present - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
"Tickets are required for publishing event" is popping up even when tickets are present.
Error:

Tickets :

**Additional context**
<!-- Add any other context about the problem here. -->
Working on it
|
non_process
|
tickets are required for publishing event error even when tickets are present describe the bug tickets are required for publishing event is popping up even when tickets are present error tickets additional context working on it
| 0
|
8,331
| 11,493,009,443
|
IssuesEvent
|
2020-02-11 22:08:09
|
googleapis/sloth
|
https://api.github.com/repos/googleapis/sloth
|
closed
|
Add Andrew Zammit (zamnuts) to yoshi-nodejs team.
|
type: process
|
Andrew is onboarding this week as a new Agendaless contractor for Node.js.
|
1.0
|
Add Andrew Zammit (zamnuts) to yoshi-nodejs team. - Andrew is onboarding this week as a new Agendaless contractor for Node.js.
|
process
|
add andrew zammit zamnuts to yoshi nodejs team andrew is onboarding this week as a new agendaless contractor for node js
| 1
|
15,786
| 19,976,925,113
|
IssuesEvent
|
2022-01-29 08:20:19
|
tushushu/ulist
|
https://api.github.com/repos/tushushu/ulist
|
closed
|
Implement apply method
|
data processing
|
Implement `apply` method like this:
```Python
>>> import ulist as ul
>>> arr = ul.arange(3)
>>> arr.apply(lambda x: x < 1)
UltraFastList([True, False, False])
```
|
1.0
|
Implement apply method - Implement `apply` method like this:
```Python
>>> import ulist as ul
>>> arr = ul.arange(3)
>>> arr.apply(lambda x: x < 1)
UltraFastList([True, False, False])
```
|
process
|
implement apply method implement apply method like this python import ulist as ul arr ul arange arr apply lambda x x ultrafastlist
| 1
|
62,968
| 17,274,223,953
|
IssuesEvent
|
2021-07-23 02:24:24
|
milvus-io/milvus-insight
|
https://api.github.com/repos/milvus-io/milvus-insight
|
opened
|
Auto id set true is not working when create collection
|
defect
|
**Describe the bug:**
Auto id set true is not working when create collection
**Steps to reproduce:**
1. create collection
2. set autoid true
3. it's always return false
**Milvus-insight version:**
latest
**Milvus version:**
|
1.0
|
Auto id set true is not working when create collection - **Describe the bug:**
Auto id set true is not working when create collection
**Steps to reproduce:**
1. create collection
2. set autoid true
3. it's always return false
**Milvus-insight version:**
latest
**Milvus version:**
|
non_process
|
auto id set true is not working when create collection describe the bug auto id set true is not working when create collection steps to reproduce create collection set autoid true it s always return false milvus insight version latest milvus version
| 0
|
3,556
| 6,588,148,777
|
IssuesEvent
|
2017-09-14 01:07:49
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process: fork() with shell is impossible?
|
child_process good first contribution windows
|
* **Version**: from v4. x up to v9.0
* **Platform**: Windows 7 x64
* **Subsystem**: child_process
Currently, [the doc](https://github.com/nodejs/node/blob/c02dcc7b5983b2925016194002b9bc9a1e9da6c4/doc/api/child_process.md#child_processforkmodulepath-args-options) says nothing if `fork()` is executed with shell, also no `shell` option is mentioned. However, `fork()` is based upon `spawn()` and almost all the options are transferred as is. So, without `shell` option we have the default `spawn()` behavior (without shell):
```js
if (!process.argv[2]) {
require('child_process').fork(__filename, ['%temp%'], { });
} else {
console.log(process.argv[2]);
}
```
```console
%temp%
```
However, if `shell` option is set to `true`, `fork()` becomes broken in at least two ways:
1. If a path to the executable has spaces, we have this error:
```js
if (!process.argv[2]) {
require('child_process').fork(__filename, ['%temp%'], { shell: true });
} else {
console.log(process.argv[2]);
}
```
```console
>node test.js
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
```
2. If a path to the executable has no spaces, we have this error:
```console
>node.8.1.3.exe test.js
child_process.js:106
p.open(fd);
^
Error: EBADF: bad file descriptor, uv_pipe_open
at Object.exports._forkChild (child_process.js:106:5)
at Object.setupChannel (internal/process.js:247:8)
at startup (bootstrap_node.js:53:16)
at bootstrap_node.js:575:3
```
So there are some questions:
1. Should we document `fork()` and shell interaction (and `shell` option) and fix these issues?
2. If not, should we strip `shell` option before spawning (and maybe somehow document this)?
|
1.0
|
child_process: fork() with shell is impossible? - * **Version**: from v4. x up to v9.0
* **Platform**: Windows 7 x64
* **Subsystem**: child_process
Currently, [the doc](https://github.com/nodejs/node/blob/c02dcc7b5983b2925016194002b9bc9a1e9da6c4/doc/api/child_process.md#child_processforkmodulepath-args-options) says nothing if `fork()` is executed with shell, also no `shell` option is mentioned. However, `fork()` is based upon `spawn()` and almost all the options are transferred as is. So, without `shell` option we have the default `spawn()` behavior (without shell):
```js
if (!process.argv[2]) {
require('child_process').fork(__filename, ['%temp%'], { });
} else {
console.log(process.argv[2]);
}
```
```console
%temp%
```
However, if `shell` option is set to `true`, `fork()` becomes broken in at least two ways:
1. If a path to the executable has spaces, we have this error:
```js
if (!process.argv[2]) {
require('child_process').fork(__filename, ['%temp%'], { shell: true });
} else {
console.log(process.argv[2]);
}
```
```console
>node test.js
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
```
2. If a path to the executable has no spaces, we have this error:
```console
>node.8.1.3.exe test.js
child_process.js:106
p.open(fd);
^
Error: EBADF: bad file descriptor, uv_pipe_open
at Object.exports._forkChild (child_process.js:106:5)
at Object.setupChannel (internal/process.js:247:8)
at startup (bootstrap_node.js:53:16)
at bootstrap_node.js:575:3
```
So there are some questions:
1. Should we document `fork()` and shell interaction (and `shell` option) and fix these issues?
2. If not, should we strip `shell` option before spawning (and maybe somehow document this)?
|
process
|
child process fork with shell is impossible version from x up to platform windows subsystem child process currently says nothing if fork is executed with shell also no shell option is mentioned however fork is based upon spawn and almost all the options are transferred as is so without shell option we have the default spawn behavior without shell js if process argv require child process fork filename else console log process argv console temp however if shell option is set to true fork becomes broken in at least two ways if a path to the executable has spaces we have this error js if process argv require child process fork filename shell true else console log process argv console node test js c program is not recognized as an internal or external command operable program or batch file if a path to the executable has no spaces we have this error console node exe test js child process js p open fd error ebadf bad file descriptor uv pipe open at object exports forkchild child process js at object setupchannel internal process js at startup bootstrap node js at bootstrap node js so there are some questions should we document fork and shell interaction and shell option and fix these issues if not should we strip shell option before spawning and maybe somehow document this
| 1
|
290,551
| 21,884,903,960
|
IssuesEvent
|
2022-05-19 17:36:00
|
hashicorp/terraform
|
https://api.github.com/repos/hashicorp/terraform
|
closed
|
Missing feature documentation: precondition and postcondition check blocks
|
bug documentation confirmed
|
<!--
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
For feature requests concerning Terraform Cloud/Enterprise, please contact tf-cloud@hashicorp.support
If your issue relates to a specific Terraform provider, please open it in the provider's own repository. The index of providers is at https://github.com/terraform-providers.
-->
### Current Terraform Version
<!---
Run `terraform version` to show the version, and paste the result between the ``` marks below. This will record which version was current at the time of your feature request, to help manage the request backlog.
If you're not using the latest version, please check to see if something related to your request has already been implemented in a later version.
-->
1.2.0
### Use-cases
There is a new release 1.2.0, from the release documentation:
```
NEW FEATURES:
precondition and postcondition check blocks for resources, data sources, and module output values: module authors can now document assumptions and assertions about configuration and state values. If these conditions are not met, Terraform will report a custom error message to the user and halt further execution.
```
Earlier RC release also mentioned this feature.
The problem: there seems to be no adequate documentation for it, or at least it cannot be found.
### Attempted Solutions
1. Searching Terraform docs
2. Searching the web with search engine
Result: no relevant hit
### Proposal
Add the documentation, or move it to a relevant place so it can be found.
### References
<!--
Are there any other GitHub issues, whether open or closed, that are related to the problem you've described above or to the suggested solution? If so, please create a list below that mentions each of them. For example:
- #6017
-->
|
1.0
|
Missing feature documentation: precondition and postcondition check blocks - <!--
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
For feature requests concerning Terraform Cloud/Enterprise, please contact tf-cloud@hashicorp.support
If your issue relates to a specific Terraform provider, please open it in the provider's own repository. The index of providers is at https://github.com/terraform-providers.
-->
### Current Terraform Version
<!---
Run `terraform version` to show the version, and paste the result between the ``` marks below. This will record which version was current at the time of your feature request, to help manage the request backlog.
If you're not using the latest version, please check to see if something related to your request has already been implemented in a later version.
-->
1.2.0
### Use-cases
There is a new release 1.2.0, from the release documentation:
```
NEW FEATURES:
precondition and postcondition check blocks for resources, data sources, and module output values: module authors can now document assumptions and assertions about configuration and state values. If these conditions are not met, Terraform will report a custom error message to the user and halt further execution.
```
Earlier RC release also mentioned this feature.
The problem: there seems to be no adequate documentation for it, or at least it cannot be found.
### Attempted Solutions
1. Searching Terraform docs
2. Searching the web with search engine
Result: no relevant hit
### Proposal
Add the documentation, or move it to a relevant place so it can be found.
### References
<!--
Are there any other GitHub issues, whether open or closed, that are related to the problem you've described above or to the suggested solution? If so, please create a list below that mentions each of them. For example:
- #6017
-->
|
non_process
|
missing feature documentation precondition and postcondition check blocks hi there thank you for opening an issue please note that we try to keep the terraform issue tracker reserved for bug reports and feature requests for general usage questions please see for feature requests concerning terraform cloud enterprise please contact tf cloud hashicorp support if your issue relates to a specific terraform provider please open it in the provider s own repository the index of providers is at current terraform version run terraform version to show the version and paste the result between the marks below this will record which version was current at the time of your feature request to help manage the request backlog if you re not using the latest version please check to see if something related to your request has already been implemented in a later version use cases there is a new release from the release documentation new features precondition and postcondition check blocks for resources data sources and module output values module authors can now document assumptions and assertions about configuration and state values if these conditions are not met terraform will report a custom error message to the user and halt further execution earlier rc release also mentioned this feature the problem there seems to be no adequate documentation for it or at least it cannot be found attempted solutions searching terraform docs searching the web with search engine result no relevant hit proposal add the documentation or move it to a relevant place so it can be found references are there any other github issues whether open or closed that are related to the problem you ve described above or to the suggested solution if so please create a list below that mentions each of them for example
| 0
|
15,854
| 20,032,969,290
|
IssuesEvent
|
2022-02-02 08:50:34
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
migrate diff: Provide a reliable way to detect empty diffs / migrations
|
process/candidate kind/improvement team/migrations topic: diff
|
## Problem
Diff returns the difference between two schemas. While often, you assume that there is a difference, and you are interested in the content, an obvious extension of this functionality is to detect _whether two schemas are the same_.
Analogous to text diffs like `git diff`, `migrated diff` can tell us if two database schemas are in sync. The only way to detect this programmatically at the moment is to expect a specific empty migration message. This solution is not elegant and subject to potential breaking changes.
## Suggested solution
- An `--exit-code` command like flag that would work like git diff's `--exit code` flag
> --exit-code
> Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences.
## Alternatives
This is a thought I just had, more research should go into the suggested solution before we rush into implementation.
|
1.0
|
migrate diff: Provide a reliable way to detect empty diffs / migrations - ## Problem
Diff returns the difference between two schemas. While often, you assume that there is a difference, and you are interested in the content, an obvious extension of this functionality is to detect _whether two schemas are the same_.
Analogous to text diffs like `git diff`, `migrated diff` can tell us if two database schemas are in sync. The only way to detect this programmatically at the moment is to expect a specific empty migration message. This solution is not elegant and subject to potential breaking changes.
## Suggested solution
- An `--exit-code` command like flag that would work like git diff's `--exit code` flag
> --exit-code
> Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences.
## Alternatives
This is a thought I just had, more research should go into the suggested solution before we rush into implementation.
|
process
|
migrate diff provide a reliable way to detect empty diffs migrations problem diff returns the difference between two schemas while often you assume that there is a difference and you are interested in the content an obvious extension of this functionality is to detect whether two schemas are the same analogous to text diffs like git diff migrated diff can tell us if two database schemas are in sync the only way to detect this programmatically at the moment is to expect a specific empty migration message this solution is not elegant and subject to potential breaking changes suggested solution an exit code command like flag that would work like git diff s exit code flag exit code make the program exit with codes similar to diff that is it exits with if there were differences and means no differences alternatives this is a thought i just had more research should go into the suggested solution before we rush into implementation
| 1
|
8,122
| 11,303,388,620
|
IssuesEvent
|
2020-01-17 19:59:24
|
nlpie/mtap
|
https://api.github.com/repos/nlpie/mtap
|
opened
|
Race condition for event cleanup in multithreaded pipeline
|
area/framework/processing kind/bug lang/python
|
The done callback where event cleanup is done is set after the finished state and result are set on the task.
This potentially leads to a race condition:
```
Thread 1 submit -> wait for finished -> finished -> multiprocess returns, close pipeline client
Thread 2 work -> finished
Thread 3 finished -> .... -> cleanup (after client closed)
```
Could be resolved by only closing in the done callback in the case of cancellation, otherwise clean up in the task itself, or by using a count-down latch that makes sure all submitted events are both completed and closed before returning from the multiprocess method.
|
1.0
|
Race condition for event cleanup in multithreaded pipeline - The done callback where event cleanup is done is set after the finished state and result are set on the task.
This potentially leads to a race condition:
```
Thread 1 submit -> wait for finished -> finished -> multiprocess returns, close pipeline client
Thread 2 work -> finished
Thread 3 finished -> .... -> cleanup (after client closed)
```
Could be resolved by only closing in the done callback in the case of cancellation, otherwise clean up in the task itself, or by using a count-down latch that makes sure all submitted events are both completed and closed before returning from the multiprocess method.
|
process
|
race condition for event cleanup in multithreaded pipeline the done callback where event cleanup is done is set after the finished state and result are set on the task this potentially leads to a race condition thread submit wait for finished finished multiprocess returns close pipeline client thread work finished thread finished cleanup after client closed could be resolved by only closing in the done callback in the case of cancellation otherwise clean up in the task itself or by using a count down latch that makes sure all submitted events are both completed and closed before returning from the multiprocess method
| 1
|
61,405
| 7,467,250,759
|
IssuesEvent
|
2018-04-02 14:37:25
|
EvictionLab/eviction-maps
|
https://api.github.com/repos/EvictionLab/eviction-maps
|
closed
|
Flag Maryland and 99th percentile data
|
design needed feature high priority
|
Need to implement a way to flag high eviction rates (in the 99th percentile), and also flag when a user is looking at Maryland.
I'm thinking we could have a dismissible (but not auto dismissible) toast message that shows when they activate a location in the 99th percentile, OR when the map bounds are within a Maryland bounding box.
|
1.0
|
Flag Maryland and 99th percentile data - Need to implement a way to flag high eviction rates (in the 99th percentile), and also flag when a user is looking at Maryland.
I'm thinking we could have a dismissible (but not auto dismissible) toast message that shows when they activate a location in the 99th percentile, OR when the map bounds are within a Maryland bounding box.
|
non_process
|
flag maryland and percentile data need to implement a way to flag high eviction rates in the percentile and also flag when a user is looking at maryland i m thinking we could have a dismissible but not auto dismissible toast message that shows when they activate a location in the percentile or when the map bounds are within a maryland bounding box
| 0
|
13,829
| 16,592,427,193
|
IssuesEvent
|
2021-06-01 09:20:05
|
hashicorp/packer-plugin-vagrant
|
https://api.github.com/repos/hashicorp/packer-plugin-vagrant
|
opened
|
[vagrant-cloud post-processor] Add option to overwrite existing version
|
enhancement post-processor/vagrant-cloud
|
_This issue was originally opened by @adriananeci as hashicorp/packer#9492. It was migrated here as a result of the [Packer plugin split](https://github.com/hashicorp/packer/issues/8610#issuecomment-770034737). The original body of the issue is below._
<hr>
#### Feature Description
When using vagrant-cloud post-processor to upload a freshly generated box into vagrant cloud, we are getting an exception if the version already exists.
```
==> vagrant: Running post-processor: vagrant-cloud
==> vagrant (vagrant-cloud): Verifying box is accessible: hlesey/k8s-base
vagrant (vagrant-cloud): Box accessible and matches tag
==> vagrant (vagrant-cloud): Creating version: 1.18.2.2
vagrant (vagrant-cloud): Version exists, skipping creation
==> vagrant (vagrant-cloud): Creating provider: virtualbox
==> vagrant (vagrant-cloud): Cleaning up provider
vagrant (vagrant-cloud): Provider was not created, not deleting
Build 'vagrant' errored: 1 error(s) occurred:
* Post-processor failed: Error creating provider: Metadata provider must be unique for version
```
With debug enabled I've got
```
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API POST: https://vagrantcloud.com/api/v1/box/hlesey/k8s-base/version/1.18.2.1/providers.
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin:
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Body: {"provider":{"name":"virtualbox"}}
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API Response:
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin:
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: &{Status:422 Unprocessable Entity StatusCode:422 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache] Connection:[keep-alive] Content-Type:[application/json; charset=utf-8] Date:[Sat, 27 Jun 2020 10:00:21 GMT] Referrer-Policy:[strict-origin-when-cross-origin] Server:[Cowboy] Set-Cookie:[_atlas_session_data=aFRxUkhQcnYzV2tvem5YbnJtQ0NjZ0VxanEwcHBIODNlYURYcDBkQ0pWMzZrcjlOeUdVNE0yVlB2N2l2S0VBUTlJOGQ1YThCZWhFUmpsblpSUlc0TlE9PS0tcEgwSFlFb3FYQVhXQ2Y2cVpjN0ZvUT09--0ad671fa052e76fbb10ceb6866cab4e379ad7143; path=/; expires=Mon, 27 Jul 2020 10:00:22 GMT; secure; HttpOnly] Strict-Transport-Security:[max-age=31536000; includeSubDomains; preload] Via:[1.1 vegur] X-Content-Type-Options:[nosniff] X-Download-Options:[noopen] X-Frame-Options:[SAMEORIGIN] X-Permitted-Cross-Domain-Policies:[none] X-Request-Id:[1f0a23a5-7c4c-4a2a-bdec-1902e591f57c] X-Runtime:[0.132013] X-Vagrantcloud-Rate-Limit:[98/100] X-Xss-Protection:[1; mode=block]] Body:0xc00038e4c0 ContentLength:-1 TransferEncoding:[chunked] Close:false Uncompressed:false Trailer:map[] Request:0xc0000c6600 TLS:0xc0004e3550}
```
One way to mitigate it is to manually remove the provider for this version from vagrantcloud UI and retry the `packer build`.
Another way to avoid such exceptions is to increase the version when uploading the new box. In this case we have to announce everyone that consumes this box that a new version is available and they have to adjust their config.
It would be nice to have a new parameter for vagrant-cloud post-processor, something like `replace_if_exists` defaulting to false.
Sample config
```
{
"variables": {
"cloud_token": "{{ env `VAGRANT_CLOUD_TOKEN` }}",
"output_path": "./output"
},
"builders": [
{
"type": "vagrant",
"source_path": "ubuntu/bionic64",
"communicator": "ssh",
"add_force": true,
"provider": "virtualbox",
"output_dir": "{{user `output_path`}}"
}
],
"post-processors": [
[
{
"type": "vagrant-cloud",
"box_tag": "hlesey/k8s-base",
"version": "{{user `version`}}",
"access_token": "{{user `cloud_token`}}",
"replace_if_exists": true
}
]
]
}
```
[vagrant cli](https://www.vagrantup.com/docs/cli/cloud.html#cloud-provider-upload) already support replacing existing version, `vagrant cloud provider upload ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION BOX-FILE`
#### Use Case(s)
As a packer user, I want to be able to replace an existing box version when using vagrant-cloud post-processor.
|
1.0
|
[vagrant-cloud post-processor] Add option to overwrite existing version - _This issue was originally opened by @adriananeci as hashicorp/packer#9492. It was migrated here as a result of the [Packer plugin split](https://github.com/hashicorp/packer/issues/8610#issuecomment-770034737). The original body of the issue is below._
<hr>
#### Feature Description
When using vagrant-cloud post-processor to upload a freshly generated box into vagrant cloud, we are getting an exception if the version already exists.
```
==> vagrant: Running post-processor: vagrant-cloud
==> vagrant (vagrant-cloud): Verifying box is accessible: hlesey/k8s-base
vagrant (vagrant-cloud): Box accessible and matches tag
==> vagrant (vagrant-cloud): Creating version: 1.18.2.2
vagrant (vagrant-cloud): Version exists, skipping creation
==> vagrant (vagrant-cloud): Creating provider: virtualbox
==> vagrant (vagrant-cloud): Cleaning up provider
vagrant (vagrant-cloud): Provider was not created, not deleting
Build 'vagrant' errored: 1 error(s) occurred:
* Post-processor failed: Error creating provider: Metadata provider must be unique for version
```
With debug enabled I've got
```
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API POST: https://vagrantcloud.com/api/v1/box/hlesey/k8s-base/version/1.18.2.1/providers.
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin:
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Body: {"provider":{"name":"virtualbox"}}
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: Post-Processor Vagrant Cloud API Response:
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin:
2020/06/27 13:00:22 packer-post-processor-vagrant-cloud plugin: &{Status:422 Unprocessable Entity StatusCode:422 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache] Connection:[keep-alive] Content-Type:[application/json; charset=utf-8] Date:[Sat, 27 Jun 2020 10:00:21 GMT] Referrer-Policy:[strict-origin-when-cross-origin] Server:[Cowboy] Set-Cookie:[_atlas_session_data=aFRxUkhQcnYzV2tvem5YbnJtQ0NjZ0VxanEwcHBIODNlYURYcDBkQ0pWMzZrcjlOeUdVNE0yVlB2N2l2S0VBUTlJOGQ1YThCZWhFUmpsblpSUlc0TlE9PS0tcEgwSFlFb3FYQVhXQ2Y2cVpjN0ZvUT09--0ad671fa052e76fbb10ceb6866cab4e379ad7143; path=/; expires=Mon, 27 Jul 2020 10:00:22 GMT; secure; HttpOnly] Strict-Transport-Security:[max-age=31536000; includeSubDomains; preload] Via:[1.1 vegur] X-Content-Type-Options:[nosniff] X-Download-Options:[noopen] X-Frame-Options:[SAMEORIGIN] X-Permitted-Cross-Domain-Policies:[none] X-Request-Id:[1f0a23a5-7c4c-4a2a-bdec-1902e591f57c] X-Runtime:[0.132013] X-Vagrantcloud-Rate-Limit:[98/100] X-Xss-Protection:[1; mode=block]] Body:0xc00038e4c0 ContentLength:-1 TransferEncoding:[chunked] Close:false Uncompressed:false Trailer:map[] Request:0xc0000c6600 TLS:0xc0004e3550}
```
One way to mitigate it is to manually remove the provider for this version from vagrantcloud UI and retry the `packer build`.
Another way to avoid such exceptions is to increase the version when uploading the new box. In this case we have to announce everyone that consumes this box that a new version is available and they have to adjust their config.
It would be nice to have a new parameter for vagrant-cloud post-processor, something like `replace_if_exists` defaulting to false.
Sample config
```
{
"variables": {
"cloud_token": "{{ env `VAGRANT_CLOUD_TOKEN` }}",
"output_path": "./output"
},
"builders": [
{
"type": "vagrant",
"source_path": "ubuntu/bionic64",
"communicator": "ssh",
"add_force": true,
"provider": "virtualbox",
"output_dir": "{{user `output_path`}}"
}
],
"post-processors": [
[
{
"type": "vagrant-cloud",
"box_tag": "hlesey/k8s-base",
"version": "{{user `version`}}",
"access_token": "{{user `cloud_token`}}",
"replace_if_exists": true
}
]
]
}
```
[vagrant cli](https://www.vagrantup.com/docs/cli/cloud.html#cloud-provider-upload) already support replacing existing version, `vagrant cloud provider upload ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION BOX-FILE`
#### Use Case(s)
As a packer user, I want to be able to replace an existing box version when using vagrant-cloud post-processor.
|
process
|
add option to overwrite existing version this issue was originally opened by adriananeci as hashicorp packer it was migrated here as a result of the the original body of the issue is below feature description when using vagrant cloud post processor to upload a freshly generated box into vagrant cloud we are getting an exception if the version already exists vagrant running post processor vagrant cloud vagrant vagrant cloud verifying box is accessible hlesey base vagrant vagrant cloud box accessible and matches tag vagrant vagrant cloud creating version vagrant vagrant cloud version exists skipping creation vagrant vagrant cloud creating provider virtualbox vagrant vagrant cloud cleaning up provider vagrant vagrant cloud provider was not created not deleting build vagrant errored error s occurred post processor failed error creating provider metadata provider must be unique for version with debug enabled i ve got packer post processor vagrant cloud plugin post processor vagrant cloud api post packer post processor vagrant cloud plugin packer post processor vagrant cloud plugin body provider name virtualbox packer post processor vagrant cloud plugin post processor vagrant cloud api response packer post processor vagrant cloud plugin packer post processor vagrant cloud plugin status unprocessable entity statuscode proto http protomajor protominor header map connection content type date referrer policy server set cookie strict transport security via x content type options x download options x frame options x permitted cross domain policies x request id x runtime x vagrantcloud rate limit x xss protection body contentlength transferencoding close false uncompressed false trailer map request tls one way to mitigate it is to manually remove the provider for this version from vagrantcloud ui and retry the packer build another way to avoid such exceptions is to increase the version when uploading the new box in this case we have to announce everyone that consumes this box that a new version is available and they have to adjust their config it would be nice to have a new parameter for vagrant cloud post processor something like replace if exists defaulting to false sample config variables cloud token env vagrant cloud token output path output builders type vagrant source path ubuntu communicator ssh add force true provider virtualbox output dir user output path post processors type vagrant cloud box tag hlesey base version user version access token user cloud token replace if exists true already support replacing existing version vagrant cloud provider upload organization box name provider name version box file use case s as a packer user i want to be able to replace an existing box version when using vagrant cloud post processor
| 1
|
17,817
| 23,741,281,646
|
IssuesEvent
|
2022-08-31 12:39:46
|
km4ack/patmenu2
|
https://api.github.com/repos/km4ack/patmenu2
|
closed
|
echo statement incorrect. Should read "VARA HF"
|
in process
|
This [line](https://github.com/km4ack/patmenu2/blob/master/start-vara-hf#L101) is incorrect. It should read "VARA HF"
|
1.0
|
echo statement incorrect. Should read "VARA HF" - This [line](https://github.com/km4ack/patmenu2/blob/master/start-vara-hf#L101) is incorrect. It should read "VARA HF"
|
process
|
echo statement incorrect should read vara hf this is incorrect it should read vara hf
| 1
|
5,206
| 7,977,681,616
|
IssuesEvent
|
2018-07-17 15:57:51
|
harvard-lil/h2o
|
https://api.github.com/repos/harvard-lil/h2o
|
closed
|
Transition from old H2O to new H2O
|
Process
|
- [x] Identify and inform all known faculty authors of transition to new H2O
- [x] Make old H2O read-only
- [x] Modify old H2O template to inform users to contact us if they have not already been contacted about casebook transition
- [x] Get 'final' database dump from old H2O
- [x] Migrate selected data to new H2O based on diff since last database dump
- [x] Email to new H2O authors
|
1.0
|
Transition from old H2O to new H2O - - [x] Identify and inform all known faculty authors of transition to new H2O
- [x] Make old H2O read-only
- [x] Modify old H2O template to inform users to contact us if they have not already been contacted about casebook transition
- [x] Get 'final' database dump from old H2O
- [x] Migrate selected data to new H2O based on diff since last database dump
- [x] Email to new H2O authors
|
process
|
transition from old to new identify and inform all known faculty authors of transition to new make old read only modify old template to inform users to contact us if they have not already been contacted about casebook transition get final database dump from old migrate selected data to new based on diff since last database dump email to new authors
| 1
|
22,200
| 30,757,975,141
|
IssuesEvent
|
2023-07-29 10:03:12
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
AVGear AVG-IP410
|
NOT YET PROCESSED
|
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
AVGear AVG-IP410
What you would like to be able to make it do from Companion:
Control of power outputs with feedback
Direct links or attachments to the ethernet control protocol or API:
https://avgear.com.au/shop/distribution/power-distribution/avg-ip410/
|
1.0
|
AVGear AVG-IP410 - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
AVGear AVG-IP410
What you would like to be able to make it do from Companion:
Control of power outputs with feedback
Direct links or attachments to the ethernet control protocol or API:
https://avgear.com.au/shop/distribution/power-distribution/avg-ip410/
|
process
|
avgear avg i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control avgear avg what you would like to be able to make it do from companion control of power outputs with feedback direct links or attachments to the ethernet control protocol or api
| 1
|
7,401
| 10,523,139,736
|
IssuesEvent
|
2019-09-30 10:16:49
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
cancelToken.exec()方法导致的catch语句在下一个请求发出后才执行,期望是在下一个请求发出前执行
|
processing
|
**场景:**
按下搜索按钮进行商品搜索,并显示搜索出来的商品列表
为了避免上次搜索请求还没返回结果,故在进行搜索前先取消上次搜索
**关键代码和步骤:**
_储存商品搜索结果对象的定义:_

_关键步骤:_



**问题描述:**
在下一个请求发出之后,上一个被取消的请求才进入catch,如上图所示的第6步,此时就把this.goodsRes.isFetching置为false了,而实际上下一个请求还在请求中,所以此时this.goodsRes.isFetching应该为true才对
**期望结果:**
~~上一个被取消的请求的catch语句应该在下一个请求发出之前执行,以避免可能的数据错乱~~
cancelToken.exec()返回一个Promise,从而能够知道上一个请求有没有(何时)取消成功
|
1.0
|
cancelToken.exec()方法导致的catch语句在下一个请求发出后才执行,期望是在下一个请求发出前执行 - **场景:**
按下搜索按钮进行商品搜索,并显示搜索出来的商品列表
为了避免上次搜索请求还没返回结果,故在进行搜索前先取消上次搜索
**关键代码和步骤:**
_储存商品搜索结果对象的定义:_

_关键步骤:_



**问题描述:**
在下一个请求发出之后,上一个被取消的请求才进入catch,如上图所示的第6步,此时就把this.goodsRes.isFetching置为false了,而实际上下一个请求还在请求中,所以此时this.goodsRes.isFetching应该为true才对
**期望结果:**
~~上一个被取消的请求的catch语句应该在下一个请求发出之前执行,以避免可能的数据错乱~~
cancelToken.exec()返回一个Promise,从而能够知道上一个请求有没有(何时)取消成功
|
process
|
canceltoken exec 方法导致的catch语句在下一个请求发出后才执行 期望是在下一个请求发出前执行 场景 按下搜索按钮进行商品搜索 并显示搜索出来的商品列表 为了避免上次搜索请求还没返回结果 故在进行搜索前先取消上次搜索 关键代码和步骤 储存商品搜索结果对象的定义 关键步骤 问题描述 在下一个请求发出之后 上一个被取消的请求才进入catch 此时就把this goodsres isfetching置为false了 而实际上下一个请求还在请求中 所以此时this goodsres isfetching应该为true才对 期望结果 上一个被取消的请求的catch语句应该在下一个请求发出之前执行 以避免可能的数据错乱 canceltoken exec 返回一个promise 从而能够知道上一个请求有没有 何时 取消成功
| 1
|
141,207
| 5,431,763,240
|
IssuesEvent
|
2017-03-04 03:12:10
|
mmisw/mmiorr
|
https://api.github.com/repos/mmisw/mmiorr
|
closed
|
Nicer LOD-friendly display of content on ontology page
|
Addressed_in_ORR3 enhancement imported mmiorr Priority-Medium
|
_From [jbgrayb...@mindspring.com](https://code.google.com/u/106719590578390745654/) on August 22, 2014 11:48:20_
What capability do you want added or improved?
I want to improve the display of the ontology and metadata (see also issue #146 and issue #327 ), particularly values shown within the table. First priority is to provide auto-recognition of things that can be linked:
A) URLs within the ontology auto-linked
B) URLs elsewhere auto-linked
C) Codes or Terms within the ontology auto-linked, to resolve on the appropriate ORR term page (see other details below)
What is the desired output (content, format, location)?
Following CFSN format
Other details of your desired capability?
Later it would be cool to do CFSN's _ term recognition for searching, but recognizing more types of terms:
- ASCII linked by _
- ASCII with embedded camelCase
- ASCII preceded by _
- hex IDs: 6 or more hex digits in a row
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=326_
|
1.0
|
Nicer LOD-friendly display of content on ontology page - _From [jbgrayb...@mindspring.com](https://code.google.com/u/106719590578390745654/) on August 22, 2014 11:48:20_
What capability do you want added or improved?
I want to improve the display of the ontology and metadata (see also issue #146 and issue #327 ), particularly values shown within the table. First priority is to provide auto-recognition of things that can be linked:
A) URLs within the ontology auto-linked
B) URLs elsewhere auto-linked
C) Codes or Terms within the ontology auto-linked, to resolve on the appropriate ORR term page (see other details below)
What is the desired output (content, format, location)?
Following CFSN format
Other details of your desired capability?
Later it would be cool to do CFSN's _ term recognition for searching, but recognizing more types of terms:
- ASCII linked by _
- ASCII with embedded camelCase
- ASCII preceded by _
- hex IDs: 6 or more hex digits in a row
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=326_
|
non_process
|
nicer lod friendly display of content on ontology page from on august what capability do you want added or improved i want to improve the display of the ontology and metadata see also issue and issue particularly values shown within the table first priority is to provide auto recognition of things that can be linked a urls within the ontology auto linked b urls elsewhere auto linked c codes or terms within the ontology auto linked to resolve on the appropriate orr term page see other details below what is the desired output content format location following cfsn format other details of your desired capability later it would be cool to do cfsn s term recognition for searching but recognizing more types of terms ascii linked by ascii with embedded camelcase ascii preceded by hex ids or more hex digits in a row original issue
| 0
|
172,157
| 21,040,448,283
|
IssuesEvent
|
2022-03-31 11:50:40
|
Tim-sandbox/WebGoat
|
https://api.github.com/repos/Tim-sandbox/WebGoat
|
opened
|
WS-2022-0107 (High) detected in spring-beans-5.3.4.jar
|
security vulnerability
|
## WS-2022-0107 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.3.4.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /webgoat-integration-tests/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-beans/5.3.4/spring-beans-5.3.4.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-beans/5.3.4/spring-beans-5.3.4.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-validation-2.4.3.jar (Root Library)
- spring-boot-starter-2.4.3.jar
- spring-boot-2.4.3.jar
- spring-context-5.3.4.jar
- :x: **spring-beans-5.3.4.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in Spring-beans which is a component associated with Spring Core allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״.
The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution).
Please note that the ease of exploitation may diverge by the code implementation.
Currently, the attack depends on the following environment - Tomcat version 9 or above, JDK version 9 or above.
WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
This is a temporary WhiteSource ID until an official CVE ID will be released.
<p>Publish Date: 2022-03-30
<p>URL: <a href=https://www.cyberkendra.com/2022/03/springshell-rce-0-day-vulnerability.html>WS-2022-0107</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-beans","packageVersion":"5.3.4","packageFilePaths":["/webgoat-integration-tests/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-validation:2.4.3;org.springframework.boot:spring-boot-starter:2.4.3;org.springframework.boot:spring-boot:2.4.3;org.springframework:spring-context:5.3.4;org.springframework:spring-beans:5.3.4","isMinimumFixVersionAvailable":false,"isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"WS-2022-0107","vulnerabilityDetails":"Vulnerability in Spring-beans which is a component associated with Spring Core allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. \n \nThe current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution).\n \nPlease note that the ease of exploitation may diverge by the code implementation.\nCurrently, the attack depends on the following environment - Tomcat version 9 or above, JDK version 9 or above.\n \nWhiteSource\u0027s research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates. \n \nThis is a temporary WhiteSource ID until an official CVE ID will be released.\n","vulnerabilityUrl":"https://www.cyberkendra.com/2022/03/springshell-rce-0-day-vulnerability.html","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2022-0107 (High) detected in spring-beans-5.3.4.jar - ## WS-2022-0107 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.3.4.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /webgoat-integration-tests/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-beans/5.3.4/spring-beans-5.3.4.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-beans/5.3.4/spring-beans-5.3.4.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-validation-2.4.3.jar (Root Library)
- spring-boot-starter-2.4.3.jar
- spring-boot-2.4.3.jar
- spring-context-5.3.4.jar
- :x: **spring-beans-5.3.4.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in Spring-beans which is a component associated with Spring Core allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״.
The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution).
Please note that the ease of exploitation may diverge by the code implementation.
Currently, the attack depends on the following environment - Tomcat version 9 or above, JDK version 9 or above.
WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
This is a temporary WhiteSource ID until an official CVE ID will be released.
<p>Publish Date: 2022-03-30
<p>URL: <a href=https://www.cyberkendra.com/2022/03/springshell-rce-0-day-vulnerability.html>WS-2022-0107</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-beans","packageVersion":"5.3.4","packageFilePaths":["/webgoat-integration-tests/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-validation:2.4.3;org.springframework.boot:spring-boot-starter:2.4.3;org.springframework.boot:spring-boot:2.4.3;org.springframework:spring-context:5.3.4;org.springframework:spring-beans:5.3.4","isMinimumFixVersionAvailable":false,"isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"WS-2022-0107","vulnerabilityDetails":"Vulnerability in Spring-beans which is a component associated with Spring Core allows attackers under certain circumstances to achieve remote code execution, this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. \n \nThe current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution).\n \nPlease note that the ease of exploitation may diverge by the code implementation.\nCurrently, the attack depends on the following environment - Tomcat version 9 or above, JDK version 9 or above.\n \nWhiteSource\u0027s research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates. \n \nThis is a temporary WhiteSource ID until an official CVE ID will be released.\n","vulnerabilityUrl":"https://www.cyberkendra.com/2022/03/springshell-rce-0-day-vulnerability.html","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in spring beans jar ws high severity vulnerability vulnerable library spring beans jar spring beans library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org springframework spring beans spring beans jar home wss scanner repository org springframework spring beans spring beans jar dependency hierarchy spring boot starter validation jar root library spring boot starter jar spring boot jar spring context jar x spring beans jar vulnerable library found in base branch develop vulnerability details vulnerability in spring beans which is a component associated with spring core allows attackers under certain circumstances to achieve remote code execution this vulnerability is also known as ״ ״ or ״springshell״ the current poc related to the attack is done by creating a specially crafted request which manipulates classloader to successfully achieve rce remote code execution please note that the ease of exploitation may diverge by the code implementation currently the attack depends on the following environment tomcat version or above jdk version or above whitesource s research team is carefully observing developments and researching the case we will keep updating this page and our whitesource resources with updates this is a temporary whitesource id until an official cve id will be released publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter validation org springframework boot spring boot starter org springframework boot spring boot org springframework spring context org springframework spring beans isminimumfixversionavailable false isbinary false basebranches vulnerabilityidentifier ws vulnerabilitydetails vulnerability in spring beans which is a component associated with spring core allows attackers under certain circumstances to achieve remote code execution this vulnerability is also known as ״ ״ or ״springshell״ n nthe current poc related to the attack is done by creating a specially crafted request which manipulates classloader to successfully achieve rce remote code execution n nplease note that the ease of exploitation may diverge by the code implementation ncurrently the attack depends on the following environment tomcat version or above jdk version or above n nwhitesource research team is carefully observing developments and researching the case we will keep updating this page and our whitesource resources with updates n nthis is a temporary whitesource id until an official cve id will be released n vulnerabilityurl
| 0
|
537
| 3,000,558,782
|
IssuesEvent
|
2015-07-24 02:59:28
|
HazyResearch/dd-genomics
|
https://api.github.com/repos/HazyResearch/dd-genomics
|
closed
|
Switch to faster extractors?
|
Candidate extraction Preprocessing
|
We are currently using the `tsv_extractor`. Should we switch to e.g. `plpy` or `piggy` extractor?
@netj any comments on what best practices are for this right now?
Also, more minor: right now we pre-convert postgres arrays to strings (`sentences` -> `sentences_input`) is this actually saving any time? Would be cleaner / simpler to do without this. May change if we switch extractors.
|
1.0
|
Switch to faster extractors? - We are currently using the `tsv_extractor`. Should we switch to e.g. `plpy` or `piggy` extractor?
@netj any comments on what best practices are for this right now?
Also, more minor: right now we pre-convert postgres arrays to strings (`sentences` -> `sentences_input`) is this actually saving any time? Would be cleaner / simpler to do without this. May change if we switch extractors.
|
process
|
switch to faster extractors we are currently using the tsv extractor should we switch to e g plpy or piggy extractor netj any comments on what best practices are for this right now also more minor right now we pre convert postgres arrays to strings sentences sentences input is this actually saving any time would be cleaner simpler to do without this may change if we switch extractors
| 1
|
295,018
| 9,064,548,389
|
IssuesEvent
|
2019-02-14 01:32:41
|
OpenRTM/OpenRTM-aist-Java
|
https://api.github.com/repos/OpenRTM/OpenRTM-aist-Java
|
closed
|
Add the way to fill checkbox in pull request template
|
enhancement priority : Low
|
**Is your feature request related to a problem? Please describe.**
pull request テンプレートにはチェックボックスがあるが、
チェックの記入方法が記載されていないのでチェック方法がわからない。
**Describe the solution you'd like**
テンプレートにチェック方法の記載を追加する
**Describe alternatives you've considered**
仮チェックを入れたチェックボックスをテンプレートに追加する
**Additional context**
なし
|
1.0
|
Add the way to fill checkbox in pull request template - **Is your feature request related to a problem? Please describe.**
pull request テンプレートにはチェックボックスがあるが、
チェックの記入方法が記載されていないのでチェック方法がわからない。
**Describe the solution you'd like**
テンプレートにチェック方法の記載を追加する
**Describe alternatives you've considered**
仮チェックを入れたチェックボックスをテンプレートに追加する
**Additional context**
なし
|
non_process
|
add the way to fill checkbox in pull request template is your feature request related to a problem please describe pull request テンプレートにはチェックボックスがあるが、 チェックの記入方法が記載されていないのでチェック方法がわからない。 describe the solution you d like テンプレートにチェック方法の記載を追加する describe alternatives you ve considered 仮チェックを入れたチェックボックスをテンプレートに追加する additional context なし
| 0
|
208,963
| 23,671,334,807
|
IssuesEvent
|
2022-08-27 12:00:59
|
Soontao/miniflux
|
https://api.github.com/repos/Soontao/miniflux
|
reopened
|
github.com/mitchellh/go-server-timing-v1.0.0: 12 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/mitchellh/go-server-timing-v1.0.0</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [WS-2021-0200](https://github.com/go-yaml/yaml/commit/bb4e33bf68bf89cad44d386192cbed201f35b241) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2021-44716](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2020-14040](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14040) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2019-11254](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11254) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2019-8331](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-14040](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-20677](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20677) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-14042](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-20676](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2016-10735](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2021-31525](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31525) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2022-29526](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29526) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2021-0200</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Yaml in versions v2.2.0 to v2.2.2 is vulnerable to denial of service vector.
Related to decode.go
<p>Publish Date: 2021-04-14
<p>URL: <a href=https://github.com/go-yaml/yaml/commit/bb4e33bf68bf89cad44d386192cbed201f35b241>WS-2021-0200</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0061">https://osv.dev/vulnerability/GO-2021-0061</a></p>
<p>Release Date: 2021-04-14</p>
<p>Fix Resolution: v2.2.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-44716</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
net/http in Go before 1.16.12 and 1.17.x before 1.17.5 allows uncontrolled memory consumption in the header canonicalization cache via HTTP/2 requests.
<p>Publish Date: 2022-01-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716>CVE-2021-44716</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vc3p-29h2-gpcp">https://github.com/advisories/GHSA-vc3p-29h2-gpcp</a></p>
<p>Release Date: 2022-01-01</p>
<p>Fix Resolution: github.com/golang/net - 491a49abca63de5e07ef554052d180a1b5fe2d70</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-14040</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The x/text package before 0.3.3 for Go has a vulnerability in encoding/unicode that could lead to the UTF-16 decoder entering an infinite loop, causing the program to crash or run out of memory. An attacker could provide a single byte to a UTF16 decoder instantiated with UseBOM or ExpectBOM to trigger an infinite loop if the String function on the Decoder is called, or the Decoder is passed to golang.org/x/text/transform.String.
<p>Publish Date: 2020-06-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14040>CVE-2020-14040</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0015">https://osv.dev/vulnerability/GO-2020-0015</a></p>
<p>Release Date: 2020-06-17</p>
<p>Fix Resolution: v0.3.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-11254</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The Kubernetes API Server component in versions 1.1-1.14, and versions prior to 1.15.10, 1.16.7 and 1.17.3 allows an authorized user who sends malicious YAML payloads to cause the kube-apiserver to consume excessive CPU cycles while parsing YAML.
<p>Publish Date: 2020-04-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11254>CVE-2019-11254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-10-02</p>
<p>Fix Resolution: v2.2.8</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-8331</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-14040</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-20677</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20677>CVE-2018-20677</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: Bootstrap - v3.4.0;NorDroN.AngularTemplate - 0.1.6;Dynamic.NET.Express.ProjectTemplates - 0.8.0;dotnetng.template - 1.0.0.4;ZNxtApp.Core.Module.Theme - 1.0.9-Beta;JMeter - 5.0.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-14042</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-20676</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676>CVE-2018-20676</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2016-10735</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0, 4.0.0-beta.2</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-31525</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
net/http in Go before 1.15.12 and 1.16.x before 1.16.4 allows remote attackers to cause a denial of service (panic) via a large header to ReadRequest or ReadResponse. Server, Transport, and Client can each be affected in some configurations.
<p>Publish Date: 2021-05-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31525>CVE-2021-31525</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1958341">https://bugzilla.redhat.com/show_bug.cgi?id=1958341</a></p>
<p>Release Date: 2021-05-27</p>
<p>Fix Resolution: golang - v1.15.12,v1.16.4,v1.17.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-29526</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Go before 1.17.10 and 1.18.x before 1.18.2 has Incorrect Privilege Assignment. When called with a non-zero flags parameter, the Faccessat function could incorrectly report that a file is accessible.
<p>Publish Date: 2022-06-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29526>CVE-2022-29526</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security-tracker.debian.org/tracker/CVE-2022-29526">https://security-tracker.debian.org/tracker/CVE-2022-29526</a></p>
<p>Release Date: 2022-06-23</p>
<p>Fix Resolution: go1.17.10,go1.18.2,go1.19</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
github.com/mitchellh/go-server-timing-v1.0.0: 12 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/mitchellh/go-server-timing-v1.0.0</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [WS-2021-0200](https://github.com/go-yaml/yaml/commit/bb4e33bf68bf89cad44d386192cbed201f35b241) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2021-44716](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2020-14040](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14040) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2019-11254](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11254) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2019-8331](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-14040](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-20677](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20677) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-14042](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2018-20676](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2016-10735](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2021-31525](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31525) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
| [CVE-2022-29526](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29526) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2021-0200</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Yaml in versions v2.2.0 to v2.2.2 is vulnerable to denial of service vector.
Related to decode.go
<p>Publish Date: 2021-04-14
<p>URL: <a href=https://github.com/go-yaml/yaml/commit/bb4e33bf68bf89cad44d386192cbed201f35b241>WS-2021-0200</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0061">https://osv.dev/vulnerability/GO-2021-0061</a></p>
<p>Release Date: 2021-04-14</p>
<p>Fix Resolution: v2.2.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-44716</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
net/http in Go before 1.16.12 and 1.17.x before 1.17.5 allows uncontrolled memory consumption in the header canonicalization cache via HTTP/2 requests.
<p>Publish Date: 2022-01-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716>CVE-2021-44716</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vc3p-29h2-gpcp">https://github.com/advisories/GHSA-vc3p-29h2-gpcp</a></p>
<p>Release Date: 2022-01-01</p>
<p>Fix Resolution: github.com/golang/net - 491a49abca63de5e07ef554052d180a1b5fe2d70</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-14040</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The x/text package before 0.3.3 for Go has a vulnerability in encoding/unicode that could lead to the UTF-16 decoder entering an infinite loop, causing the program to crash or run out of memory. An attacker could provide a single byte to a UTF16 decoder instantiated with UseBOM or ExpectBOM to trigger an infinite loop if the String function on the Decoder is called, or the Decoder is passed to golang.org/x/text/transform.String.
<p>Publish Date: 2020-06-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14040>CVE-2020-14040</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0015">https://osv.dev/vulnerability/GO-2020-0015</a></p>
<p>Release Date: 2020-06-17</p>
<p>Fix Resolution: v0.3.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-11254</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The Kubernetes API Server component in versions 1.1-1.14, and versions prior to 1.15.10, 1.16.7 and 1.17.3 allows an authorized user who sends malicious YAML payloads to cause the kube-apiserver to consume excessive CPU cycles while parsing YAML.
<p>Publish Date: 2020-04-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11254>CVE-2019-11254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-10-02</p>
<p>Fix Resolution: v2.2.8</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-8331</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-14040</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-20677</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20677>CVE-2018-20677</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: Bootstrap - v3.4.0;NorDroN.AngularTemplate - 0.1.6;Dynamic.NET.Express.ProjectTemplates - 0.8.0;dotnetng.template - 1.0.0.4;ZNxtApp.Core.Module.Theme - 1.0.9-Beta;JMeter - 5.0.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-14042</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-20676</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676>CVE-2018-20676</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2016-10735</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0, 4.0.0-beta.2</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-31525</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
net/http in Go before 1.15.12 and 1.16.x before 1.16.4 allows remote attackers to cause a denial of service (panic) via a large header to ReadRequest or ReadResponse. Server, Transport, and Client can each be affected in some configurations.
<p>Publish Date: 2021-05-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31525>CVE-2021-31525</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1958341">https://bugzilla.redhat.com/show_bug.cgi?id=1958341</a></p>
<p>Release Date: 2021-05-27</p>
<p>Fix Resolution: golang - v1.15.12,v1.16.4,v1.17.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-29526</summary>
### Vulnerable Library - <b>github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e</b></p>
<p>Go Doc Dot Org</p>
<p>
Dependency Hierarchy:
- github.com/mitchellh/go-server-timing-v1.0.0 (Root Library)
- :x: **github.com/golang/gddo-20d68f94ee1f7547de2b1c68627253df20c8d45e** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Soontao/miniflux/commit/63bbc979c4f380d439e422f1b3258d18d124164f">63bbc979c4f380d439e422f1b3258d18d124164f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Go before 1.17.10 and 1.18.x before 1.18.2 has Incorrect Privilege Assignment. When called with a non-zero flags parameter, the Faccessat function could incorrectly report that a file is accessible.
<p>Publish Date: 2022-06-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29526>CVE-2022-29526</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security-tracker.debian.org/tracker/CVE-2022-29526">https://security-tracker.debian.org/tracker/CVE-2022-29526</a></p>
<p>Release Date: 2022-06-23</p>
<p>Fix Resolution: go1.17.10,go1.18.2,go1.19</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_process
|
github com mitchellh go server timing vulnerabilities highest severity is vulnerable library github com mitchellh go server timing found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high github com golang gddo transitive n a high github com golang gddo transitive n a high github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a medium github com golang gddo transitive n a details ws vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details yaml in versions to is vulnerable to denial of service vector related to decode go publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details net http in go before and x before allows uncontrolled memory consumption in the header canonicalization cache via http requests publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution github com golang net step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details the x text package before for go has a vulnerability in encoding unicode that could lead to the utf decoder entering an infinite loop causing the program to crash or run out of memory an attacker could provide a single byte to a decoder instantiated with usebom or expectbom to trigger an infinite loop if the string function on the decoder is called or the decoder is passed to golang org x text transform string publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details the kubernetes api server component in versions and versions prior to and allows an authorized user who sends malicious yaml payloads to cause the kube apiserver to consume excessive cpu cycles while parsing yaml publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution bootstrap bootstrap sass step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the collapse data parent attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution org webjars npm bootstrap org webjars bootstrap step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the affix configuration target property publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap nordron angulartemplate dynamic net express projecttemplates dotnetng template znxtapp core module theme beta jmeter step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the data container property of tooltip publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution org webjars npm bootstrap org webjars bootstrap step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before xss is possible in the tooltip data viewport attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap x before and x beta before beta xss is possible in the data target attribute a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap beta step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details net http in go before and x before allows remote attackers to cause a denial of service panic via a large header to readrequest or readresponse server transport and client can each be affected in some configurations publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution golang step up your open source security game with mend cve vulnerable library github com golang gddo go doc dot org dependency hierarchy github com mitchellh go server timing root library x github com golang gddo vulnerable library found in head commit a href found in base branch main vulnerability details go before and x before has incorrect privilege assignment when called with a non zero flags parameter the faccessat function could incorrectly report that a file is accessible publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
3,219
| 6,278,429,939
|
IssuesEvent
|
2017-07-18 14:22:04
|
syndesisio/syndesis-ui
|
https://api.github.com/repos/syndesisio/syndesis-ui
|
closed
|
yarn warnings - do we need to fix it?
|
bug dev process
|
```bash
yarn
yarn install v0.21.3
[1/5] Resolving packages...
[2/5] Fetching packages...
# -------------------------------------------- guessing this one is ok
warning fsevents@1.1.1: The platform "linux" is incompatible with this module.
info "fsevents@1.1.1" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/5] Linking dependencies...
# -------------------------------------------- should we worry about following ones?
warning "@ng2-dynamic-forms/core@1.3.13" has unmet peer dependency "reflect-metadata@^0.1.9".
warning "@angular/compiler-cli@4.0.0" has incorrect peer dependency "@angular/compiler@4.0.0".
warning "@angular/compiler-cli@4.0.0" has incorrect peer dependency "@angular/core@4.0.0".
warning "@angular/platform-server@4.0.0" has unmet peer dependency "@angular/animations@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/core@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/common@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/compiler@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/platform-browser@4.0.0".
[4/5] Building fresh packages...
[5/5] Cleaning modules...
Done in 20.85s.
```
|
1.0
|
yarn warnings - do we need to fix it? - ```bash
yarn
yarn install v0.21.3
[1/5] Resolving packages...
[2/5] Fetching packages...
# -------------------------------------------- guessing this one is ok
warning fsevents@1.1.1: The platform "linux" is incompatible with this module.
info "fsevents@1.1.1" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/5] Linking dependencies...
# -------------------------------------------- should we worry about following ones?
warning "@ng2-dynamic-forms/core@1.3.13" has unmet peer dependency "reflect-metadata@^0.1.9".
warning "@angular/compiler-cli@4.0.0" has incorrect peer dependency "@angular/compiler@4.0.0".
warning "@angular/compiler-cli@4.0.0" has incorrect peer dependency "@angular/core@4.0.0".
warning "@angular/platform-server@4.0.0" has unmet peer dependency "@angular/animations@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/core@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/common@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/compiler@4.0.0".
warning "@angular/platform-server@4.0.0" has incorrect peer dependency "@angular/platform-browser@4.0.0".
[4/5] Building fresh packages...
[5/5] Cleaning modules...
Done in 20.85s.
```
|
process
|
yarn warnings do we need to fix it bash yarn yarn install resolving packages fetching packages guessing this one is ok warning fsevents the platform linux is incompatible with this module info fsevents is an optional dependency and failed compatibility check excluding it from installation linking dependencies should we worry about following ones warning dynamic forms core has unmet peer dependency reflect metadata warning angular compiler cli has incorrect peer dependency angular compiler warning angular compiler cli has incorrect peer dependency angular core warning angular platform server has unmet peer dependency angular animations warning angular platform server has incorrect peer dependency angular core warning angular platform server has incorrect peer dependency angular common warning angular platform server has incorrect peer dependency angular compiler warning angular platform server has incorrect peer dependency angular platform browser building fresh packages cleaning modules done in
| 1
|
24,535
| 4,006,832,501
|
IssuesEvent
|
2016-05-12 16:04:12
|
Supmenow/sup-issues
|
https://api.github.com/repos/Supmenow/sup-issues
|
closed
|
My friends disappear when I hide
|
defect
|
When I click show it takes time for them to come back.
Friends are not in sync with the hide button. I can sometimes see friends in dark mode and not in friends mode.
|
1.0
|
My friends disappear when I hide - When I click show it takes time for them to come back.
Friends are not in sync with the hide button. I can sometimes see friends in dark mode and not in friends mode.
|
non_process
|
my friends disappear when i hide when i click show it takes time for them to come back friends are not in sync with the hide button i can sometimes see friends in dark mode and not in friends mode
| 0
|
2,848
| 5,809,457,475
|
IssuesEvent
|
2017-05-04 13:28:10
|
P0cL4bs/WiFi-Pumpkin
|
https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin
|
closed
|
2 wireless cards for WiFi-Pumpkin [Kali Linux]
|
enhancement in process
|
Running WiFi-Pumpkin in Kali the problem is the known 2 card error, it doesn't want to use my internal card as an internet connection provider.The program that i found that manages to do exactly this is called mitmAP but it's missing some key features can its connection setup be added to WiFi-Pumpkin?
|
1.0
|
2 wireless cards for WiFi-Pumpkin [Kali Linux] - Running WiFi-Pumpkin in Kali the problem is the known 2 card error, it doesn't want to use my internal card as an internet connection provider.The program that i found that manages to do exactly this is called mitmAP but it's missing some key features can its connection setup be added to WiFi-Pumpkin?
|
process
|
wireless cards for wifi pumpkin running wifi pumpkin in kali the problem is the known card error it doesn t want to use my internal card as an internet connection provider the program that i found that manages to do exactly this is called mitmap but it s missing some key features can its connection setup be added to wifi pumpkin
| 1
|
17,867
| 5,522,722,210
|
IssuesEvent
|
2017-03-20 01:42:13
|
wallabyjs/public
|
https://api.github.com/repos/wallabyjs/public
|
closed
|
Wallaby UI elements not showing up in VSCode (e.g. code coverage indicators, footer test icon, etc.)
|
VS Code
|
### Issue description or question
Wallaby UI elements are not showing up in VSCode (e.g. code coverage indicators, footer test icon, etc.). I can see in the VScode wallaby output that the tests are in fact running, but I just see no indication that they are in the vscode IDE.
I can also successfully navigate to the test overview page to verify that the tests are running as well.

### Wallaby.js configuration file
```javascript
module.exports = function wallabyConfig(wallaby) {
return {
files: [
'lib/use-extender/*.js',
'modules/**/*.js',
'plugins/**/*.js'
],
tests: [
'test/lib/*test.js'
],
env: {
type: 'node'
}
};
};
```
### Code editor or IDE name and version
Visual Studio Code v1.10.2 (1.10.2)
### OS name and version
OSX 10.12.3
|
1.0
|
Wallaby UI elements not showing up in VSCode (e.g. code coverage indicators, footer test icon, etc.) - ### Issue description or question
Wallaby UI elements are not showing up in VSCode (e.g. code coverage indicators, footer test icon, etc.). I can see in the VScode wallaby output that the tests are in fact running, but I just see no indication that they are in the vscode IDE.
I can also successfully navigate to the test overview page to verify that the tests are running as well.

### Wallaby.js configuration file
```javascript
module.exports = function wallabyConfig(wallaby) {
return {
files: [
'lib/use-extender/*.js',
'modules/**/*.js',
'plugins/**/*.js'
],
tests: [
'test/lib/*test.js'
],
env: {
type: 'node'
}
};
};
```
### Code editor or IDE name and version
Visual Studio Code v1.10.2 (1.10.2)
### OS name and version
OSX 10.12.3
|
non_process
|
wallaby ui elements not showing up in vscode e g code coverage indicators footer test icon etc issue description or question wallaby ui elements are not showing up in vscode e g code coverage indicators footer test icon etc i can see in the vscode wallaby output that the tests are in fact running but i just see no indication that they are in the vscode ide i can also successfully navigate to the test overview page to verify that the tests are running as well wallaby js configuration file javascript module exports function wallabyconfig wallaby return files lib use extender js modules js plugins js tests test lib test js env type node code editor or ide name and version visual studio code os name and version osx
| 0
|
145,391
| 5,575,047,132
|
IssuesEvent
|
2017-03-28 00:17:03
|
SIMRacingApps/SIMRacingApps
|
https://api.github.com/repos/SIMRacingApps/SIMRacingApps
|
closed
|
Teamspeak 3.1.3 Issue
|
enhancement high priority
|
Teamspeak app no longer works with today's Teamspeak update 3.13. I get the following error:
`20170324144302.759: WARNING: TeamSpeak: _getClientList(): returned error 1796, currently not possible: com.SIMRacingApps.TeamSpeak._getClientList(TeamSpeak.java:320)[TeamSpeak.Listener]`
|
1.0
|
Teamspeak 3.1.3 Issue - Teamspeak app no longer works with today's Teamspeak update 3.13. I get the following error:
`20170324144302.759: WARNING: TeamSpeak: _getClientList(): returned error 1796, currently not possible: com.SIMRacingApps.TeamSpeak._getClientList(TeamSpeak.java:320)[TeamSpeak.Listener]`
|
non_process
|
teamspeak issue teamspeak app no longer works with today s teamspeak update i get the following error warning teamspeak getclientlist returned error currently not possible com simracingapps teamspeak getclientlist teamspeak java
| 0
|
19,449
| 25,730,480,296
|
IssuesEvent
|
2022-12-07 19:53:59
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
closed
|
Populate WhatDoTheyKnow IDs
|
good first issue data processing
|

We added a new field to orgs for WhatDoTheyKnow IDs so we can link to to their list of FOI requests published on https://www.whatdotheyknow.com/. However, not all orgs have this value populated. This could be a good exercise for someone to go through all the orgs in opendata.scot and populate them accordingly.
To populate, the user needs to find the relevant organisation on WDTK (e.g. https://www.whatdotheyknow.com/body/aberdeen_city_council for Aberdeen City Council), take the ID which is the text after `https://www.whatdotheyknow.com/body/` (i.e. aberdeen_city_council) and copy that into the WhatDoTheyKnow organization ID field for the org.
You can see what organisations are missing their ID by checking the WDTK column on our [platform health report](https://opendata.scot/analytics/platform-health/)
|
1.0
|
Populate WhatDoTheyKnow IDs - 
We added a new field to orgs for WhatDoTheyKnow IDs so we can link to to their list of FOI requests published on https://www.whatdotheyknow.com/. However, not all orgs have this value populated. This could be a good exercise for someone to go through all the orgs in opendata.scot and populate them accordingly.
To populate, the user needs to find the relevant organisation on WDTK (e.g. https://www.whatdotheyknow.com/body/aberdeen_city_council for Aberdeen City Council), take the ID which is the text after `https://www.whatdotheyknow.com/body/` (i.e. aberdeen_city_council) and copy that into the WhatDoTheyKnow organization ID field for the org.
You can see what organisations are missing their ID by checking the WDTK column on our [platform health report](https://opendata.scot/analytics/platform-health/)
|
process
|
populate whatdotheyknow ids we added a new field to orgs for whatdotheyknow ids so we can link to to their list of foi requests published on however not all orgs have this value populated this could be a good exercise for someone to go through all the orgs in opendata scot and populate them accordingly to populate the user needs to find the relevant organisation on wdtk e g for aberdeen city council take the id which is the text after i e aberdeen city council and copy that into the whatdotheyknow organization id field for the org you can see what organisations are missing their id by checking the wdtk column on our
| 1
|
7,052
| 10,210,877,807
|
IssuesEvent
|
2019-08-14 15:39:01
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
Wrong slice information on QC output
|
bug good first issue sct_process_segmentation
|
The QC plot of `sct_process_segmentation` gives the wrong slice number.
To reproduce the issue, run:
~~~
sct_download_data -d sct_testing_data
cd sct_testing_data/data/t2
sct_process_segmentation -i t2_seg_manual.nii.gz -z 2:5 -perslice 1 -qc qc-test
~~~
The output QC looks like this:

As can be seen in the plot abscissa, slice numbering go from 0 to 50+, while the input slice numbers are 2,3,4,5.
SCT version: git-release-666f7278e35d54a084e9dea51dfcdee9397a1d88
|
1.0
|
Wrong slice information on QC output - The QC plot of `sct_process_segmentation` gives the wrong slice number.
To reproduce the issue, run:
~~~
sct_download_data -d sct_testing_data
cd sct_testing_data/data/t2
sct_process_segmentation -i t2_seg_manual.nii.gz -z 2:5 -perslice 1 -qc qc-test
~~~
The output QC looks like this:

As can be seen in the plot abscissa, slice numbering go from 0 to 50+, while the input slice numbers are 2,3,4,5.
SCT version: git-release-666f7278e35d54a084e9dea51dfcdee9397a1d88
|
process
|
wrong slice information on qc output the qc plot of sct process segmentation gives the wrong slice number to reproduce the issue run sct download data d sct testing data cd sct testing data data sct process segmentation i seg manual nii gz z perslice qc qc test the output qc looks like this as can be seen in the plot abscissa slice numbering go from to while the input slice numbers are sct version git release
| 1
|
8,872
| 11,965,286,925
|
IssuesEvent
|
2020-04-05 22:45:50
|
arcum-omni/Coquo
|
https://api.github.com/repos/arcum-omni/Coquo
|
closed
|
Setup Entity Framework
|
dev process
|
Use the following tutorial for reference to configure Entity Framework.
[Tutorial: Get started with EF Core in an ASP.NET MVC web app](https://docs.microsoft.com/en-us/aspnet/core/data/ef-mvc/intro?view=aspnetcore-3.1)
|
1.0
|
Setup Entity Framework - Use the following tutorial for reference to configure Entity Framework.
[Tutorial: Get started with EF Core in an ASP.NET MVC web app](https://docs.microsoft.com/en-us/aspnet/core/data/ef-mvc/intro?view=aspnetcore-3.1)
|
process
|
setup entity framework use the following tutorial for reference to configure entity framework
| 1
|
10,848
| 13,628,209,609
|
IssuesEvent
|
2020-09-24 13:37:59
|
Open-EO/openeo-processes
|
https://api.github.com/repos/Open-EO/openeo-processes
|
closed
|
Establish proposal procedure
|
chore help wanted new process question
|
With new processes coming in, it's sometimes not easy to decide whether to make them core and release them as part of the "stable contract" or not. Currently an example is #192, which seems like a useful process, but alternatives are also discussed. Nevertheless, an openEO partner wants such a process and thus it's good to give implementers a broader choice for use cases, which they can built on as this leads to less dispersion. Also, the process should be implemented first, so that we can make changes if required and then afterwards we could push into core. Surely, these processes should all make clear by "experimental: true" that changes may occur.
Anyway, long story short: We should make a folder or branch for proposals or experimental processes that are in development and have some procedures around it. Do you agree?
|
1.0
|
Establish proposal procedure - With new processes coming in, it's sometimes not easy to decide whether to make them core and release them as part of the "stable contract" or not. Currently an example is #192, which seems like a useful process, but alternatives are also discussed. Nevertheless, an openEO partner wants such a process and thus it's good to give implementers a broader choice for use cases, which they can built on as this leads to less dispersion. Also, the process should be implemented first, so that we can make changes if required and then afterwards we could push into core. Surely, these processes should all make clear by "experimental: true" that changes may occur.
Anyway, long story short: We should make a folder or branch for proposals or experimental processes that are in development and have some procedures around it. Do you agree?
|
process
|
establish proposal procedure with new processes coming in it s sometimes not easy to decide whether to make them core and release them as part of the stable contract or not currently an example is which seems like a useful process but alternatives are also discussed nevertheless an openeo partner wants such a process and thus it s good to give implementers a broader choice for use cases which they can built on as this leads to less dispersion also the process should be implemented first so that we can make changes if required and then afterwards we could push into core surely these processes should all make clear by experimental true that changes may occur anyway long story short we should make a folder or branch for proposals or experimental processes that are in development and have some procedures around it do you agree
| 1
|
9,484
| 12,477,988,790
|
IssuesEvent
|
2020-05-29 15:51:32
|
Devnilson/fisima
|
https://api.github.com/repos/Devnilson/fisima
|
closed
|
Add editorconfig
|
process
|
# Editor configuration, see https://editorconfig.org/
root = true
[*]
charset = utf-8
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true
[*.md]
max_line_length = off
trim_trailing_whitespace = false
|
1.0
|
Add editorconfig - # Editor configuration, see https://editorconfig.org/
root = true
[*]
charset = utf-8
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true
[*.md]
max_line_length = off
trim_trailing_whitespace = false
|
process
|
add editorconfig editor configuration see root true charset utf indent style space indent size insert final newline true trim trailing whitespace true max line length off trim trailing whitespace false
| 1
|
169,050
| 20,828,013,700
|
IssuesEvent
|
2022-03-19 01:18:06
|
RG4421/modular
|
https://api.github.com/repos/RG4421/modular
|
opened
|
CVE-2021-44906 (Medium) detected in minimist-1.2.5.tgz
|
security vulnerability
|
## CVE-2021-44906 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist</p>
<p>
Dependency Hierarchy:
- core-7.14.6.tgz (Root Library)
- json5-2.2.0.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.2.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@babel/core:7.14.6;json5:2.2.0;minimist:1.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-44906","vulnerabilityDetails":"Minimist \u003c\u003d1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-44906 (Medium) detected in minimist-1.2.5.tgz - ## CVE-2021-44906 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist</p>
<p>
Dependency Hierarchy:
- core-7.14.6.tgz (Root Library)
- json5-2.2.0.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.2.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@babel/core:7.14.6;json5:2.2.0;minimist:1.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-44906","vulnerabilityDetails":"Minimist \u003c\u003d1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in minimist tgz cve medium severity vulnerability vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist dependency hierarchy core tgz root library tgz x minimist tgz vulnerable library found in base branch main vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree babel core minimist isminimumfixversionavailable true minimumfixversion bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails minimist is vulnerable to prototype pollution via file index js function setkey lines vulnerabilityurl
| 0
|
105,380
| 11,449,874,492
|
IssuesEvent
|
2020-02-06 08:23:20
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
Shouldn't the CLI extensions have an Install section?
|
Documentation Extensions
|
Hi guys!
reading the documentation it's not clear that a "manual " installation is needed before using the commands. Therefore I miss a section where the install command is shown, something like:
**To enable the extension run:**
`az extension add --name subscription`
What are your thoughts on this?
Also If we come up with a standard way I can submit a pull request.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5ea3833d-6724-93a3-120a-6c855357acad
* Version Independent ID: 9c423dee-a9aa-b32b-2260-db47d9f639cf
* Content: [az](https://docs.microsoft.com/en-us/cli/azure/ext/subscription/?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/ext/subscription/index.yml](https://github.com/Azure/azure-docs-cli-python/blob/live/latest/docs-ref-autogen/ext/subscription/index.yml)
* Service: **unspecified**
* Product: **unspecified**
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**
|
1.0
|
Shouldn't the CLI extensions have an Install section? - Hi guys!
reading the documentation it's not clear that a "manual " installation is needed before using the commands. Therefore I miss a section where the install command is shown, something like:
**To enable the extension run:**
`az extension add --name subscription`
What are your thoughts on this?
Also If we come up with a standard way I can submit a pull request.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5ea3833d-6724-93a3-120a-6c855357acad
* Version Independent ID: 9c423dee-a9aa-b32b-2260-db47d9f639cf
* Content: [az](https://docs.microsoft.com/en-us/cli/azure/ext/subscription/?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/ext/subscription/index.yml](https://github.com/Azure/azure-docs-cli-python/blob/live/latest/docs-ref-autogen/ext/subscription/index.yml)
* Service: **unspecified**
* Product: **unspecified**
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**
|
non_process
|
shouldn t the cli extensions have an install section hi guys reading the documentation it s not clear that a manual installation is needed before using the commands therefore i miss a section where the install command is shown something like to enable the extension run az extension add name subscription what are your thoughts on this also if we come up with a standard way i can submit a pull request document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service unspecified product unspecified github login rloutlaw microsoft alias routlaw
| 0
|
648,753
| 21,193,316,492
|
IssuesEvent
|
2022-04-08 20:12:50
|
googleapis/python-bigtable
|
https://api.github.com/repos/googleapis/python-bigtable
|
closed
|
tests.system.test_instance_admin: test_cluster_create failed
|
api: bigtable type: bug priority: p2 flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: b9ecfa97281ae21dcf233e60c70cacc701f12c32
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d7fb5bba-347b-4ea3-88ac-4f916baba586), [Sponge](http://sponge2/d7fb5bba-347b-4ea3-88ac-4f916baba586)
status: failed
<details><summary>Test output</summary><br><pre>target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f3bd47304c0>
sleep_generator = <generator object exponential_sleep_generator at 0x7f3bd0fac9e0>
deadline = 60, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
> return target()
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:190:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>
retry = <google.api_core.retry.Retry object at 0x7f3bd472f730>
def _done_or_raise(self, retry=DEFAULT_RETRY):
"""Check if the future is done and raise if it's not."""
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
if not self.done(**kwargs):
> raise _OperationNotComplete()
E google.api_core.future.polling._OperationNotComplete
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:89: _OperationNotComplete
The above exception was the direct cause of the following exception:
self = <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>
timeout = 60, retry = <google.api_core.retry.Retry object at 0x7f3bd472f730>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
> retry_(self._done_or_raise)(**kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (), kwargs = {}
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>)
sleep_generator = <generator object exponential_sleep_generator at 0x7f3bd0fac9e0>
@functools.wraps(func)
def retry_wrapped_func(*args, **kwargs):
"""A wrapper that calls target function with retry."""
target = functools.partial(func, *args, **kwargs)
sleep_generator = exponential_sleep_generator(
self._initial, self._maximum, multiplier=self._multiplier
)
> return retry_target(
target,
self._predicate,
sleep_generator,
self._deadline,
on_error=on_error,
)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:283:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f3bd47304c0>
sleep_generator = <generator object exponential_sleep_generator at 0x7f3bd0fac9e0>
deadline = 60, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
return target()
# pylint: disable=broad-except
# This function explicitly must deal with broad exceptions.
except Exception as exc:
if not predicate(exc):
raise
last_exc = exc
if on_error is not None:
on_error(exc)
now = datetime_helpers.utcnow()
if deadline_datetime is not None:
if deadline_datetime <= now:
> raise exceptions.RetryError(
"Deadline of {:.1f}s exceeded while calling {}".format(
deadline, target
),
last_exc,
) from last_exc
E google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>), last exception:
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:205: RetryError
During handling of the above exception, another exception occurred:
admin_instance_populated = <google.cloud.bigtable.instance.Instance object at 0x7f3bd1b9cfd0>
admin_instance_id = 'g-c-p-1634843098874', skip_on_emulator = None
def test_cluster_create(
admin_instance_populated, admin_instance_id, skip_on_emulator,
):
alt_cluster_id = f"{admin_instance_id}-c2"
alt_location_id = "us-central1-f"
serve_nodes = 2
cluster_2 = admin_instance_populated.cluster(
alt_cluster_id,
location_id=alt_location_id,
serve_nodes=serve_nodes,
default_storage_type=(enums.StorageType.SSD),
)
operation = cluster_2.create()
> operation.result(timeout=60) # Ensure the operation completes.
tests/system/test_instance_admin.py:576:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:132: in result
self._blocking_poll(timeout=timeout, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>
timeout = 60, retry = <google.api_core.retry.Retry object at 0x7f3bd472f730>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
retry_(self._done_or_raise)(**kwargs)
except exceptions.RetryError:
> raise concurrent.futures.TimeoutError(
"Operation did not complete within the designated " "timeout."
)
E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout.
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:112: TimeoutError</pre></details>
|
1.0
|
tests.system.test_instance_admin: test_cluster_create failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: b9ecfa97281ae21dcf233e60c70cacc701f12c32
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d7fb5bba-347b-4ea3-88ac-4f916baba586), [Sponge](http://sponge2/d7fb5bba-347b-4ea3-88ac-4f916baba586)
status: failed
<details><summary>Test output</summary><br><pre>target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f3bd47304c0>
sleep_generator = <generator object exponential_sleep_generator at 0x7f3bd0fac9e0>
deadline = 60, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
> return target()
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:190:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>
retry = <google.api_core.retry.Retry object at 0x7f3bd472f730>
def _done_or_raise(self, retry=DEFAULT_RETRY):
"""Check if the future is done and raise if it's not."""
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
if not self.done(**kwargs):
> raise _OperationNotComplete()
E google.api_core.future.polling._OperationNotComplete
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:89: _OperationNotComplete
The above exception was the direct cause of the following exception:
self = <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>
timeout = 60, retry = <google.api_core.retry.Retry object at 0x7f3bd472f730>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
> retry_(self._done_or_raise)(**kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (), kwargs = {}
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>)
sleep_generator = <generator object exponential_sleep_generator at 0x7f3bd0fac9e0>
@functools.wraps(func)
def retry_wrapped_func(*args, **kwargs):
"""A wrapper that calls target function with retry."""
target = functools.partial(func, *args, **kwargs)
sleep_generator = exponential_sleep_generator(
self._initial, self._maximum, multiplier=self._multiplier
)
> return retry_target(
target,
self._predicate,
sleep_generator,
self._deadline,
on_error=on_error,
)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:283:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f3bd47304c0>
sleep_generator = <generator object exponential_sleep_generator at 0x7f3bd0fac9e0>
deadline = 60, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
return target()
# pylint: disable=broad-except
# This function explicitly must deal with broad exceptions.
except Exception as exc:
if not predicate(exc):
raise
last_exc = exc
if on_error is not None:
on_error(exc)
now = datetime_helpers.utcnow()
if deadline_datetime is not None:
if deadline_datetime <= now:
> raise exceptions.RetryError(
"Deadline of {:.1f}s exceeded while calling {}".format(
deadline, target
),
last_exc,
) from last_exc
E google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>>), last exception:
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:205: RetryError
During handling of the above exception, another exception occurred:
admin_instance_populated = <google.cloud.bigtable.instance.Instance object at 0x7f3bd1b9cfd0>
admin_instance_id = 'g-c-p-1634843098874', skip_on_emulator = None
def test_cluster_create(
admin_instance_populated, admin_instance_id, skip_on_emulator,
):
alt_cluster_id = f"{admin_instance_id}-c2"
alt_location_id = "us-central1-f"
serve_nodes = 2
cluster_2 = admin_instance_populated.cluster(
alt_cluster_id,
location_id=alt_location_id,
serve_nodes=serve_nodes,
default_storage_type=(enums.StorageType.SSD),
)
operation = cluster_2.create()
> operation.result(timeout=60) # Ensure the operation completes.
tests/system/test_instance_admin.py:576:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:132: in result
self._blocking_poll(timeout=timeout, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f3bd0fc8f40>
timeout = 60, retry = <google.api_core.retry.Retry object at 0x7f3bd472f730>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
retry_(self._done_or_raise)(**kwargs)
except exceptions.RetryError:
> raise concurrent.futures.TimeoutError(
"Operation did not complete within the designated " "timeout."
)
E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout.
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py:112: TimeoutError</pre></details>
|
non_process
|
tests system test instance admin test cluster create failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output target functools partial predicate if exception type predicate at sleep generator deadline on error none def retry target target predicate sleep generator deadline on error none call a function and retry if it fails this is the lowest level retry helper generally you ll use the higher level retry helper class retry args target callable the function to call and retry this must be a nullary function apply arguments with functools partial predicate callable a callable used to determine if an exception raised by the target should be considered retryable it should return true to retry or false otherwise sleep generator iterable an infinite iterator that determines how long to sleep between retries deadline float how long to keep retrying the target the last sleep period is shortened as necessary so that the last retry runs at deadline and not considerably beyond it on error callable a function to call while processing a retryable exception any error raised by this function will not be caught returns any the return value of the target function raises google api core retryerror if the deadline is exceeded while retrying valueerror if the sleep generator stops yielding values exception if the target raises a method that isn t retryable if deadline is not none deadline datetime datetime helpers utcnow datetime timedelta seconds deadline else deadline datetime none last exc none for sleep in sleep generator try return target nox system lib site packages google api core retry py self retry def done or raise self retry default retry check if the future is done and raise if it s not kwargs if retry is default retry else retry retry if not self done kwargs raise operationnotcomplete e google api core future polling operationnotcomplete nox system lib site packages google api core future polling py operationnotcomplete the above exception was the direct cause of the following exception self timeout retry def blocking poll self timeout none retry default retry poll and wait for the future to be resolved args timeout int how long in seconds to wait for the operation to complete if none wait indefinitely if self result set return retry self retry with deadline timeout try kwargs if retry is default retry else retry retry retry self done or raise kwargs nox system lib site packages google api core future polling py args kwargs target functools partial sleep generator functools wraps func def retry wrapped func args kwargs a wrapper that calls target function with retry target functools partial func args kwargs sleep generator exponential sleep generator self initial self maximum multiplier self multiplier return retry target target self predicate sleep generator self deadline on error on error nox system lib site packages google api core retry py target functools partial predicate if exception type predicate at sleep generator deadline on error none def retry target target predicate sleep generator deadline on error none call a function and retry if it fails this is the lowest level retry helper generally you ll use the higher level retry helper class retry args target callable the function to call and retry this must be a nullary function apply arguments with functools partial predicate callable a callable used to determine if an exception raised by the target should be considered retryable it should return true to retry or false otherwise sleep generator iterable an infinite iterator that determines how long to sleep between retries deadline float how long to keep retrying the target the last sleep period is shortened as necessary so that the last retry runs at deadline and not considerably beyond it on error callable a function to call while processing a retryable exception any error raised by this function will not be caught returns any the return value of the target function raises google api core retryerror if the deadline is exceeded while retrying valueerror if the sleep generator stops yielding values exception if the target raises a method that isn t retryable if deadline is not none deadline datetime datetime helpers utcnow datetime timedelta seconds deadline else deadline datetime none last exc none for sleep in sleep generator try return target pylint disable broad except this function explicitly must deal with broad exceptions except exception as exc if not predicate exc raise last exc exc if on error is not none on error exc now datetime helpers utcnow if deadline datetime is not none if deadline datetime now raise exceptions retryerror deadline of s exceeded while calling format deadline target last exc from last exc e google api core exceptions retryerror deadline of exceeded while calling functools partial last exception nox system lib site packages google api core retry py retryerror during handling of the above exception another exception occurred admin instance populated admin instance id g c p skip on emulator none def test cluster create admin instance populated admin instance id skip on emulator alt cluster id f admin instance id alt location id us f serve nodes cluster admin instance populated cluster alt cluster id location id alt location id serve nodes serve nodes default storage type enums storagetype ssd operation cluster create operation result timeout ensure the operation completes tests system test instance admin py nox system lib site packages google api core future polling py in result self blocking poll timeout timeout kwargs self timeout retry def blocking poll self timeout none retry default retry poll and wait for the future to be resolved args timeout int how long in seconds to wait for the operation to complete if none wait indefinitely if self result set return retry self retry with deadline timeout try kwargs if retry is default retry else retry retry retry self done or raise kwargs except exceptions retryerror raise concurrent futures timeouterror operation did not complete within the designated timeout e concurrent futures base timeouterror operation did not complete within the designated timeout nox system lib site packages google api core future polling py timeouterror
| 0
|
5,881
| 32,007,379,014
|
IssuesEvent
|
2023-09-21 15:37:02
|
centerofci/mathesar-website
|
https://api.github.com/repos/centerofci/mathesar-website
|
closed
|
Add Survey Banner to Website
|
restricted: maintainers status: draft type: enhancement work: frontend work: product
|
To gather feedback and understand our users' needs, we're introducing a user survey. The goal is to embed a survey banner on the Mathesar website to facilitate feedback collection, allowing us to improve our product based on user insights.
- The banner should be fixed at the top of the website.
- It should be visible on all pages of the website
- Provide users an option to close the banner. Once closed, the banner should not reappear for that user during the same browsing session.
- The banner should contain a direct link to the provided "Quick User Check-in" survey form.
#### Suggested Banner Content:
> **Help Shape Mathesar's Future!**
> Your insights are invaluable to us. Please spare less than 2 minutes for our survey and let us know how we can enhance Mathesar for you. [Take the Survey](https://docs.google.com/forms/d/1A2yS2C2Sw7-usDJ6wrNUIn177AxJavETrZJU9kdp25I/edit) | [X] (to close the banner)
---
**To Do**:
- [ ] Design the banner
- [ ] Embed the banner at the top of the website.
|
True
|
Add Survey Banner to Website - To gather feedback and understand our users' needs, we're introducing a user survey. The goal is to embed a survey banner on the Mathesar website to facilitate feedback collection, allowing us to improve our product based on user insights.
- The banner should be fixed at the top of the website.
- It should be visible on all pages of the website
- Provide users an option to close the banner. Once closed, the banner should not reappear for that user during the same browsing session.
- The banner should contain a direct link to the provided "Quick User Check-in" survey form.
#### Suggested Banner Content:
> **Help Shape Mathesar's Future!**
> Your insights are invaluable to us. Please spare less than 2 minutes for our survey and let us know how we can enhance Mathesar for you. [Take the Survey](https://docs.google.com/forms/d/1A2yS2C2Sw7-usDJ6wrNUIn177AxJavETrZJU9kdp25I/edit) | [X] (to close the banner)
---
**To Do**:
- [ ] Design the banner
- [ ] Embed the banner at the top of the website.
|
non_process
|
add survey banner to website to gather feedback and understand our users needs we re introducing a user survey the goal is to embed a survey banner on the mathesar website to facilitate feedback collection allowing us to improve our product based on user insights the banner should be fixed at the top of the website it should be visible on all pages of the website provide users an option to close the banner once closed the banner should not reappear for that user during the same browsing session the banner should contain a direct link to the provided quick user check in survey form suggested banner content help shape mathesar s future your insights are invaluable to us please spare less than minutes for our survey and let us know how we can enhance mathesar for you to close the banner to do design the banner embed the banner at the top of the website
| 0
|
135,191
| 30,259,811,387
|
IssuesEvent
|
2023-07-07 07:18:25
|
JuliaLang/julia
|
https://api.github.com/repos/JuliaLang/julia
|
opened
|
`ssertion `!ctx.ssavalue_assigned.at(ssaidx_0based)' failed.` when testing some packages
|
regression codegen
|
When testing the packages
"LsqFit"
"Polyhedra"
"PowerModelsRestoration"
"SinusoidalRegressions"
"EasyFit
with an assert build they error with:
```
julia: /source/src/codegen.cpp:5042: void emit_ssaval_assign(jl_codectx_t&, ssize_t, jl_value_t*): Assertion `!ctx.ssavalue_assigned.at(ssaidx_0based)' failed.
```
Example PkgEval log: https://s3.amazonaws.com/julialang-reports/nanosoldier/pkgeval/by_hash/c9a32f4_vs_e4ee485/LsqFit.primary.log
|
1.0
|
`ssertion `!ctx.ssavalue_assigned.at(ssaidx_0based)' failed.` when testing some packages - When testing the packages
"LsqFit"
"Polyhedra"
"PowerModelsRestoration"
"SinusoidalRegressions"
"EasyFit
with an assert build they error with:
```
julia: /source/src/codegen.cpp:5042: void emit_ssaval_assign(jl_codectx_t&, ssize_t, jl_value_t*): Assertion `!ctx.ssavalue_assigned.at(ssaidx_0based)' failed.
```
Example PkgEval log: https://s3.amazonaws.com/julialang-reports/nanosoldier/pkgeval/by_hash/c9a32f4_vs_e4ee485/LsqFit.primary.log
|
non_process
|
ssertion ctx ssavalue assigned at ssaidx failed when testing some packages when testing the packages lsqfit polyhedra powermodelsrestoration sinusoidalregressions easyfit with an assert build they error with julia source src codegen cpp void emit ssaval assign jl codectx t ssize t jl value t assertion ctx ssavalue assigned at ssaidx failed example pkgeval log
| 0
|
10,908
| 13,686,525,475
|
IssuesEvent
|
2020-09-30 08:49:38
|
prisma/prisma-engines
|
https://api.github.com/repos/prisma/prisma-engines
|
closed
|
All field attributes except relation should work with native types
|
engines/data model parser process/candidate team/engines
|
related https://github.com/prisma/prisma-engines/issues/1160
They all error if they are being used with arguments.
|
1.0
|
All field attributes except relation should work with native types - related https://github.com/prisma/prisma-engines/issues/1160
They all error if they are being used with arguments.
|
process
|
all field attributes except relation should work with native types related they all error if they are being used with arguments
| 1
|
418,825
| 12,203,741,690
|
IssuesEvent
|
2020-04-30 11:06:33
|
ballerina-platform/module-ballerinax-sfdc
|
https://api.github.com/repos/ballerina-platform/module-ballerinax-sfdc
|
closed
|
Change module name to sfdc
|
Priority/Highest Type/Task
|
**Description:**
As we planned to remove API version from the module name, we need to change the module name of this connector to `sfdc`
|
1.0
|
Change module name to sfdc - **Description:**
As we planned to remove API version from the module name, we need to change the module name of this connector to `sfdc`
|
non_process
|
change module name to sfdc description as we planned to remove api version from the module name we need to change the module name of this connector to sfdc
| 0
|
10,874
| 13,643,829,084
|
IssuesEvent
|
2020-09-25 17:48:46
|
IanDarwin/pdfshow
|
https://api.github.com/repos/IanDarwin/pdfshow
|
closed
|
Need UI Test Plan
|
development process
|
The User Interface requires a test plan that can be implemented manually until some automated UI test framework is deployed, at which time it will guide the development of the tests.
|
1.0
|
Need UI Test Plan - The User Interface requires a test plan that can be implemented manually until some automated UI test framework is deployed, at which time it will guide the development of the tests.
|
process
|
need ui test plan the user interface requires a test plan that can be implemented manually until some automated ui test framework is deployed at which time it will guide the development of the tests
| 1
|
18,604
| 24,576,901,372
|
IssuesEvent
|
2022-10-13 13:01:45
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Spatial span memory > Some of the activities will be in Resume status even though participant submits the response successfully
|
Bug P1 iOS Process: Fixed Process: Tested dev
|
Spatial span memory > Some of the activities will be in Resume status even though the participant submits the response successfully
Study id: CopyofNew09
Study name: Copy of NewStudy09/09
Activity name: spmnew10
|
2.0
|
[iOS] Spatial span memory > Some of the activities will be in Resume status even though participant submits the response successfully - Spatial span memory > Some of the activities will be in Resume status even though the participant submits the response successfully
Study id: CopyofNew09
Study name: Copy of NewStudy09/09
Activity name: spmnew10
|
process
|
spatial span memory some of the activities will be in resume status even though participant submits the response successfully spatial span memory some of the activities will be in resume status even though the participant submits the response successfully study id study name copy of activity name
| 1
|
13,775
| 16,531,689,472
|
IssuesEvent
|
2021-05-27 07:00:37
|
melink14/rikaikun
|
https://api.github.com/repos/melink14/rikaikun
|
opened
|
Migrate to typescript 4.3
|
process
|
Typescript 4.3 breaks the build probably due to some incompatibilities in `awesome-typescript-loader` (See PR: #513)
This issue tracks what's required to finish the migration:
- [ ] Remove all compiler errors output during TS compilation when using awesome-typescript-loader.
- [ ] Replace awesome-typescript-loader with ts-loader as it's actively developed.
- [ ] Tell Dependabot to rebase and then upgrade.
|
1.0
|
Migrate to typescript 4.3 - Typescript 4.3 breaks the build probably due to some incompatibilities in `awesome-typescript-loader` (See PR: #513)
This issue tracks what's required to finish the migration:
- [ ] Remove all compiler errors output during TS compilation when using awesome-typescript-loader.
- [ ] Replace awesome-typescript-loader with ts-loader as it's actively developed.
- [ ] Tell Dependabot to rebase and then upgrade.
|
process
|
migrate to typescript typescript breaks the build probably due to some incompatibilities in awesome typescript loader see pr this issue tracks what s required to finish the migration remove all compiler errors output during ts compilation when using awesome typescript loader replace awesome typescript loader with ts loader as it s actively developed tell dependabot to rebase and then upgrade
| 1
|
3,765
| 2,690,578,982
|
IssuesEvent
|
2015-03-31 16:49:46
|
GoogleCloudPlatform/kubernetes
|
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
|
opened
|
Automated and standard build for kubernetes project Docker images
|
area/test priority/P2 team/any team/testing
|
This is particularly useful for those images used in e2e tests. Today each image has its own scripts to build and deploy, there is no automation and no conventions throughout.
/cc @ArtfulCoder who has done some work on auto-building images.
|
2.0
|
Automated and standard build for kubernetes project Docker images - This is particularly useful for those images used in e2e tests. Today each image has its own scripts to build and deploy, there is no automation and no conventions throughout.
/cc @ArtfulCoder who has done some work on auto-building images.
|
non_process
|
automated and standard build for kubernetes project docker images this is particularly useful for those images used in tests today each image has its own scripts to build and deploy there is no automation and no conventions throughout cc artfulcoder who has done some work on auto building images
| 0
|
32,973
| 4,448,858,814
|
IssuesEvent
|
2016-08-22 03:03:00
|
NKeisuke/ampersands-new-site
|
https://api.github.com/repos/NKeisuke/ampersands-new-site
|
closed
|
【SP ヘッダー/フッター】メニューのサイズ大きく、ロゴを小さく、縦幅縮める
|
DESIGN
|
- [x] 赤線で囲ったメニューのテキストのサイズ、120%ほど大きくして下さい(ヘッダー、フッター共に)
- [x] ロゴを若干小さくして下さい(ヘッダー)
- [x] 縦幅を若干縮めて下さい(ヘッダー)
Invisionのデザイン参照して下さい(→ https://projects.invisionapp.com/d/main#/console/6009524/154311447/preview )

|
1.0
|
【SP ヘッダー/フッター】メニューのサイズ大きく、ロゴを小さく、縦幅縮める - - [x] 赤線で囲ったメニューのテキストのサイズ、120%ほど大きくして下さい(ヘッダー、フッター共に)
- [x] ロゴを若干小さくして下さい(ヘッダー)
- [x] 縦幅を若干縮めて下さい(ヘッダー)
Invisionのデザイン参照して下さい(→ https://projects.invisionapp.com/d/main#/console/6009524/154311447/preview )

|
non_process
|
【sp ヘッダー/フッター】メニューのサイズ大きく、ロゴを小さく、縦幅縮める 赤線で囲ったメニューのテキストのサイズ、 ほど大きくして下さい(ヘッダー、フッター共に) ロゴを若干小さくして下さい(ヘッダー) 縦幅を若干縮めて下さい(ヘッダー) invisionのデザイン参照して下さい(→ )
| 0
|
58,156
| 14,242,206,355
|
IssuesEvent
|
2020-11-19 01:13:04
|
RG4421/lounge
|
https://api.github.com/repos/RG4421/lounge
|
closed
|
WS-2019-0491 (High) detected in handlebars-4.1.2.tgz - autoclosed
|
security vulnerability
|
## WS-2019-0491 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: lounge/package.json</p>
<p>Path to vulnerable library: lounge/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jsdoc-to-markdown-5.0.0.tgz (Root Library)
- dmd-4.0.0.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.4.5 is vulnerable to Denial of Service. The package's parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.
<p>Publish Date: 2019-11-04
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-11-04</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":true,"dependencyTree":"jsdoc-to-markdown:5.0.0;dmd:4.0.0;handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.4.5"}],"vulnerabilityIdentifier":"WS-2019-0491","vulnerabilityDetails":"handlebars before 4.4.5 is vulnerable to Denial of Service. The package\u0027s parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.","vulnerabilityUrl":"https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2019-0491 (High) detected in handlebars-4.1.2.tgz - autoclosed - ## WS-2019-0491 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: lounge/package.json</p>
<p>Path to vulnerable library: lounge/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jsdoc-to-markdown-5.0.0.tgz (Root Library)
- dmd-4.0.0.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.4.5 is vulnerable to Denial of Service. The package's parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.
<p>Publish Date: 2019-11-04
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-11-04</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":true,"dependencyTree":"jsdoc-to-markdown:5.0.0;dmd:4.0.0;handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.4.5"}],"vulnerabilityIdentifier":"WS-2019-0491","vulnerabilityDetails":"handlebars before 4.4.5 is vulnerable to Denial of Service. The package\u0027s parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.","vulnerabilityUrl":"https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file lounge package json path to vulnerable library lounge node modules handlebars package json dependency hierarchy jsdoc to markdown tgz root library dmd tgz x handlebars tgz vulnerable library vulnerability details handlebars before is vulnerable to denial of service the package s parser may be forced into an endless loop while processing specially crafted templates this may allow attackers to exhaust system resources leading to denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails handlebars before is vulnerable to denial of service the package parser may be forced into an endless loop while processing specially crafted templates this may allow attackers to exhaust system resources leading to denial of service vulnerabilityurl
| 0
|
16,878
| 2,955,886,311
|
IssuesEvent
|
2015-07-08 07:43:23
|
ubershmekel/python3wos
|
https://api.github.com/repos/ubershmekel/python3wos
|
closed
|
Incorrect reporting for python-dateutil
|
auto-migrated Priority-Medium Type-Defect
|
```
python-dateutil appears in red (i.e. no Python 3 support), although on PyPI, it
does use the trove classifier indicating Python 3 support:
http://pypi.python.org/pypi/python-dateutil
```
Original issue reported on code.google.com by `tak...@gmail.com` on 12 Jan 2012 at 1:49
|
1.0
|
Incorrect reporting for python-dateutil - ```
python-dateutil appears in red (i.e. no Python 3 support), although on PyPI, it
does use the trove classifier indicating Python 3 support:
http://pypi.python.org/pypi/python-dateutil
```
Original issue reported on code.google.com by `tak...@gmail.com` on 12 Jan 2012 at 1:49
|
non_process
|
incorrect reporting for python dateutil python dateutil appears in red i e no python support although on pypi it does use the trove classifier indicating python support original issue reported on code google com by tak gmail com on jan at
| 0
|
618,125
| 19,425,871,325
|
IssuesEvent
|
2021-12-21 05:21:42
|
pombase/canto
|
https://api.github.com/repos/pombase/canto
|
closed
|
Fix warning from etc/test_initialise.pl
|
low priority quick next internal
|
James Seager pointed this out:
Use of uninitialized value in string ne at /usr/local/share/perl/5.26.1/PomBase/Chobo/OntologyConf.pm line 84.
Use of uninitialized value in concatenation (.) or string at /usr/local/share/perl/5.26.1/PomBase/Chobo/OntologyConf.pm line 86.
"def:" line differ
versus:
X part_of Y if X is a subregion of Y. at /usr/local/share/perl/5.26.1/PomBase/Chobo/OntologyConf.pm line 86.
Creating database for 0_curs
Creating database for 1_curs
|
1.0
|
Fix warning from etc/test_initialise.pl - James Seager pointed this out:
Use of uninitialized value in string ne at /usr/local/share/perl/5.26.1/PomBase/Chobo/OntologyConf.pm line 84.
Use of uninitialized value in concatenation (.) or string at /usr/local/share/perl/5.26.1/PomBase/Chobo/OntologyConf.pm line 86.
"def:" line differ
versus:
X part_of Y if X is a subregion of Y. at /usr/local/share/perl/5.26.1/PomBase/Chobo/OntologyConf.pm line 86.
Creating database for 0_curs
Creating database for 1_curs
|
non_process
|
fix warning from etc test initialise pl james seager pointed this out use of uninitialized value in string ne at usr local share perl pombase chobo ontologyconf pm line use of uninitialized value in concatenation or string at usr local share perl pombase chobo ontologyconf pm line def line differ versus x part of y if x is a subregion of y at usr local share perl pombase chobo ontologyconf pm line creating database for curs creating database for curs
| 0
|
12,887
| 15,280,055,757
|
IssuesEvent
|
2021-02-23 05:30:51
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
reopened
|
Recommended Challenges: sub community filter does not work
|
P2 ShapeupProcess challenge- recommender-tool
|
When the recommended challenges toggle is on, the sub-community filter does not work
.
<img width="1440" alt="Screenshot 2021-02-19 at 5 47 04 PM" src="https://user-images.githubusercontent.com/58783823/108503537-8c775580-72da-11eb-974c-f4c2a9022131.png">
<img width="1440" alt="Screenshot 2021-02-19 at 5 47 11 PM" src="https://user-images.githubusercontent.com/58783823/108503547-913c0980-72da-11eb-9ac6-dce35f73bd30.png">
|
1.0
|
Recommended Challenges: sub community filter does not work - When the recommended challenges toggle is on, the sub-community filter does not work
.
<img width="1440" alt="Screenshot 2021-02-19 at 5 47 04 PM" src="https://user-images.githubusercontent.com/58783823/108503537-8c775580-72da-11eb-974c-f4c2a9022131.png">
<img width="1440" alt="Screenshot 2021-02-19 at 5 47 11 PM" src="https://user-images.githubusercontent.com/58783823/108503547-913c0980-72da-11eb-9ac6-dce35f73bd30.png">
|
process
|
recommended challenges sub community filter does not work when the recommended challenges toggle is on the sub community filter does not work img width alt screenshot at pm src img width alt screenshot at pm src
| 1
|
4,281
| 7,190,601,920
|
IssuesEvent
|
2018-02-02 17:50:24
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
grabABI: Pick up ABI files from ENS and/or Swarm (when ready)
|
apps-all status-inprocess type-enhancement
|
Future version of solidity will store ABI files in Swarm. Use that as source of ABIs instead of ehterscan.io. Also--what is natspec?
From https://github.com/Great-Hill-Corporation/ethslurp/issues/99
|
1.0
|
grabABI: Pick up ABI files from ENS and/or Swarm (when ready) - Future version of solidity will store ABI files in Swarm. Use that as source of ABIs instead of ehterscan.io. Also--what is natspec?
From https://github.com/Great-Hill-Corporation/ethslurp/issues/99
|
process
|
grababi pick up abi files from ens and or swarm when ready future version of solidity will store abi files in swarm use that as source of abis instead of ehterscan io also what is natspec from
| 1
|
9,189
| 12,228,777,226
|
IssuesEvent
|
2020-05-03 20:53:45
|
chfor183/data_science_articles
|
https://api.github.com/repos/chfor183/data_science_articles
|
opened
|
Exploratory Data Analysis (EDA)
|
Data Preprocessing EDA
|
## TL;DR
Yes !
## Key Takeaways
- 1
- 2
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
## Articles/Ressources
https://towardsdatascience.com/an-extensive-guide-to-exploratory-data-analysis-ddd99a03199e
|
1.0
|
Exploratory Data Analysis (EDA) - ## TL;DR
Yes !
## Key Takeaways
- 1
- 2
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
## Articles/Ressources
https://towardsdatascience.com/an-extensive-guide-to-exploratory-data-analysis-ddd99a03199e
|
process
|
exploratory data analysis eda tl dr yes key takeaways useful code snippets function test console log notice the blank line before this function articles ressources
| 1
|
2,536
| 8,657,436,740
|
IssuesEvent
|
2018-11-27 21:19:31
|
Kapeli/Dash-User-Contributions
|
https://api.github.com/repos/Kapeli/Dash-User-Contributions
|
closed
|
Redux Docset maintainer needed
|
needs maintainer
|
I can no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at [https://github.com/epitaphmike/redux-dash](https://github.com/epitaphmike/redux-dash). If this is something you are interested in helping with please reach out. Thank you.
|
True
|
Redux Docset maintainer needed - I can no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at [https://github.com/epitaphmike/redux-dash](https://github.com/epitaphmike/redux-dash). If this is something you are interested in helping with please reach out. Thank you.
|
non_process
|
redux docset maintainer needed i can no longer have time to maintain this docset and i am looking for additional contributors to assist my repo is located at if this is something you are interested in helping with please reach out thank you
| 0
|
21,080
| 28,029,896,210
|
IssuesEvent
|
2023-03-28 11:41:22
|
camunda/issues
|
https://api.github.com/repos/camunda/issues
|
opened
|
Support FEEL directly in Script Task
|
component:desktopModeler component:webModeler component:zeebe-process-automation public kind:epic version:8.2 version:8.2-alpha3
|
### Value Proposition Statement
Use FEEL in BPMN Script Tasks without writing a Job Worker
### User Problem
As a developer, I want to calculate an expression in my workflow and store the result in a process instance variable. The expression is simple and does not represent essential or complex business logic.
FEEL is very powerful and ideal for this.
At the moment as a developer I have to create a Script Task and then implement a Job Worker to calculate such an expression. An alternative are Input / Output Mappings, but sometimes the expression should be explicit in the process.
### User Stories
**Modeler**
As a developer I can define a FEEL expression in a Script Task using the Modeler.
**Zeebe**
As a developer I can deploy a process with such expression in a Script Task on Zeebe.
As a developer I can trust that Zeebe evaluates the expression in the Zeebe broker itself without having to provide an additional job worker.
**Documentation**
As a developer I can read about the new capability in the documentation and understand how to use it.
### Implementation Notes
<!-- Notes to consider for implementation, for example:
* In Cawemo we already have the capability to manage templates via the feature that we call “catalog”
* What we would build now is the ability to a) use this feature in the web modeler to create templates and b) when the context pad opens for defining the type of a task, the templates that decorate service tasks are shown
* We should clarify terminology (integrations vs. connectors vs. job workers vs. element templates.) Particularly “element templates” might not be a term that a user intuitively understands.
* See these high level wireframes to capture the idea -->
### Breakdown
**Zeebe**
- [x] https://github.com/camunda/zeebe/issues/4213
**Modeler**
- [x] https://github.com/camunda/camunda-modeler/issues/3321
- [x] Web Modeler dependency upgrade TBD
**Docs**
- [ ] https://github.com/camunda/camunda-platform-docs/issues/1548
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* No design necessary
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
- https://jira.camunda.com/browse/SUPPORT-15446
|
1.0
|
Support FEEL directly in Script Task - ### Value Proposition Statement
Use FEEL in BPMN Script Tasks without writing a Job Worker
### User Problem
As a developer, I want to calculate an expression in my workflow and store the result in a process instance variable. The expression is simple and does not represent essential or complex business logic.
FEEL is very powerful and ideal for this.
At the moment as a developer I have to create a Script Task and then implement a Job Worker to calculate such an expression. An alternative are Input / Output Mappings, but sometimes the expression should be explicit in the process.
### User Stories
**Modeler**
As a developer I can define a FEEL expression in a Script Task using the Modeler.
**Zeebe**
As a developer I can deploy a process with such expression in a Script Task on Zeebe.
As a developer I can trust that Zeebe evaluates the expression in the Zeebe broker itself without having to provide an additional job worker.
**Documentation**
As a developer I can read about the new capability in the documentation and understand how to use it.
### Implementation Notes
<!-- Notes to consider for implementation, for example:
* In Cawemo we already have the capability to manage templates via the feature that we call “catalog”
* What we would build now is the ability to a) use this feature in the web modeler to create templates and b) when the context pad opens for defining the type of a task, the templates that decorate service tasks are shown
* We should clarify terminology (integrations vs. connectors vs. job workers vs. element templates.) Particularly “element templates” might not be a term that a user intuitively understands.
* See these high level wireframes to capture the idea -->
### Breakdown
**Zeebe**
- [x] https://github.com/camunda/zeebe/issues/4213
**Modeler**
- [x] https://github.com/camunda/camunda-modeler/issues/3321
- [x] Web Modeler dependency upgrade TBD
**Docs**
- [ ] https://github.com/camunda/camunda-platform-docs/issues/1548
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* No design necessary
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
- https://jira.camunda.com/browse/SUPPORT-15446
|
process
|
support feel directly in script task value proposition statement use feel in bpmn script tasks without writing a job worker user problem as a developer i want to calculate an expression in my workflow and store the result in a process instance variable the expression is simple and does not represent essential or complex business logic feel is very powerful and ideal for this at the moment as a developer i have to create a script task and then implement a job worker to calculate such an expression an alternative are input output mappings but sometimes the expression should be explicit in the process user stories modeler as a developer i can define a feel expression in a script task using the modeler zeebe as a developer i can deploy a process with such expression in a script task on zeebe as a developer i can trust that zeebe evaluates the expression in the zeebe broker itself without having to provide an additional job worker documentation as a developer i can read about the new capability in the documentation and understand how to use it implementation notes notes to consider for implementation for example in cawemo we already have the capability to manage templates via the feature that we call “catalog” what we would build now is the ability to a use this feature in the web modeler to create templates and b when the context pad opens for defining the type of a task the templates that decorate service tasks are shown we should clarify terminology integrations vs connectors vs job workers vs element templates particularly “element templates” might not be a term that a user intuitively understands see these high level wireframes to capture the idea breakdown zeebe modeler web modeler dependency upgrade tbd docs discovery phase define phase design planning no design necessary documentation planning risk management risk class risk treatment implement phase validate phase links to additional collateral
| 1
|
10,480
| 13,252,907,752
|
IssuesEvent
|
2020-08-20 06:32:26
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
Full chunk-based computing in coprocessor
|
difficulty/hard sig/coprocessor type/enhancement
|
Using [TiDB Chunk format](https://github.com/pingcap/tidb/blob/9627132db25e5cb560ba6ed89c2e39c32edb267a/util/chunk/chunk.go) in the `coprocessor` framework instead of `VectorValue`.
Detail Design (Draft): https://github.com/tikv/rfcs/pull/43/
## Development Task
- [x] Implement type safe chunk format.
- [x] Modify `rpn_fn` macro.
- [x] Modify all builtin functions using new signature.
## Mentor
* @TennyZhuang
* @breeswish
Please contact with mentors before start working on the task.
|
1.0
|
Full chunk-based computing in coprocessor - Using [TiDB Chunk format](https://github.com/pingcap/tidb/blob/9627132db25e5cb560ba6ed89c2e39c32edb267a/util/chunk/chunk.go) in the `coprocessor` framework instead of `VectorValue`.
Detail Design (Draft): https://github.com/tikv/rfcs/pull/43/
## Development Task
- [x] Implement type safe chunk format.
- [x] Modify `rpn_fn` macro.
- [x] Modify all builtin functions using new signature.
## Mentor
* @TennyZhuang
* @breeswish
Please contact with mentors before start working on the task.
|
process
|
full chunk based computing in coprocessor using in the coprocessor framework instead of vectorvalue detail design draft development task implement type safe chunk format modify rpn fn macro modify all builtin functions using new signature mentor tennyzhuang breeswish please contact with mentors before start working on the task
| 1
|
25,637
| 12,266,083,161
|
IssuesEvent
|
2020-05-07 08:21:09
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Standard azurerm_eventhub_namespace defaulting to DENY firewall when created
|
bug service/event-hubs
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
```
Terraform v0.12.24
+ provider.azurerm v2.3.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_eventhub_namespace`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_eventhub_namespace" "eventhub_namespace" {
name = "testehn"
resource_group_name = "rgp-xxx-xx-xxx-xx"
location = "Eastus2"
sku = "standard"
capacity = 2
auto_inflate_enabled = true
maximum_throughput_units = 10
}
output "eventhub_namespace-object" {
value = azurerm_eventhub_namespace.eventhub_namespace
}
```
### Debug Output
https://gist.github.com/runecalico/fd55469ec16fc0f83af1788e91a66d32
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
Terraform should have created a Standard Eventhub Namespace without firewall as the docs say network_rulesets is optional, and no default value for deny_action is specified in the documentation.
### Actual Behavior
The eventhub is created with the firewall default_action of deny, which enables the IP firewall with no configuration. The only way to work around it is to explicitly set default_action to "Allow".
The output of the terraform resource clearly shows that it configured the default_action to deny for some reason ..
Is this a bug in Terraform? Is this the default behavior in Azure and the terraform azurerm documentation needs to be updated to reflect?
```
eventhub_namespace-object = {
"auto_inflate_enabled" = true
"capacity" = 2
"default_primary_connection_string" = "Endpoint=sb://testehn.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxx"
"default_primary_key" = "xxx"
"default_secondary_connection_string" = "Endpoint=sb://testehn.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xx"
"default_secondary_key" = "xx"
"id" = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.EventHub/namespaces/testehn"
"location" = "eastus2"
"maximum_throughput_units" = 10
"name" = "testehn"
"network_rulesets" = [
{
"default_action" = "Deny"
"ip_rule" = []
"virtual_network_rule" = []
},
]
"resource_group_name" = "xx"
"sku" = "Standard"
}
```
This output in the debug seems interesting .. (not that I know why it enabled it)
```
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: X-Ms-Routing-Request-Id: NORTHCENTRALUS:20200327T132136Z:72dbbda7-f411-4f99-9820-fcb1cf010ff0
27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe:
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: {"id":"/subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.EventHub/namespaces/testehn/networkRuleSets
/default","name":"default","type":"Microsoft.EventHub/Namespaces/NetworkRuleSets","location":"East US 2","properties":{"defaultAction":"Deny","virtualNetworkRules":[],"ipRules":[]}}
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: [DEBUG] AzureRM Request:
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: POST /subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.EventHub/namespaces/testehn/AuthorizationRule
s/RootManageSharedAccessKey/listKeys?api-version=2017-04-01 HTTP/1.1
```
### Steps to Reproduce
terraform apply --auto-approve
1. `terraform apply`
### Important Factoids
This is an "normal" Azure Cloud subscription.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
1.0
|
Standard azurerm_eventhub_namespace defaulting to DENY firewall when created - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
```
Terraform v0.12.24
+ provider.azurerm v2.3.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_eventhub_namespace`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_eventhub_namespace" "eventhub_namespace" {
name = "testehn"
resource_group_name = "rgp-xxx-xx-xxx-xx"
location = "Eastus2"
sku = "standard"
capacity = 2
auto_inflate_enabled = true
maximum_throughput_units = 10
}
output "eventhub_namespace-object" {
value = azurerm_eventhub_namespace.eventhub_namespace
}
```
### Debug Output
https://gist.github.com/runecalico/fd55469ec16fc0f83af1788e91a66d32
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
Terraform should have created a Standard Eventhub Namespace without firewall as the docs say network_rulesets is optional, and no default value for deny_action is specified in the documentation.
### Actual Behavior
The eventhub is created with the firewall default_action of deny, which enables the IP firewall with no configuration. The only way to work around it is to explicitly set default_action to "Allow".
The output of the terraform resource clearly shows that it configured the default_action to deny for some reason ..
Is this a bug in Terraform? Is this the default behavior in Azure and the terraform azurerm documentation needs to be updated to reflect?
```
eventhub_namespace-object = {
"auto_inflate_enabled" = true
"capacity" = 2
"default_primary_connection_string" = "Endpoint=sb://testehn.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxx"
"default_primary_key" = "xxx"
"default_secondary_connection_string" = "Endpoint=sb://testehn.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xx"
"default_secondary_key" = "xx"
"id" = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.EventHub/namespaces/testehn"
"location" = "eastus2"
"maximum_throughput_units" = 10
"name" = "testehn"
"network_rulesets" = [
{
"default_action" = "Deny"
"ip_rule" = []
"virtual_network_rule" = []
},
]
"resource_group_name" = "xx"
"sku" = "Standard"
}
```
This output in the debug seems interesting .. (not that I know why it enabled it)
```
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: X-Ms-Routing-Request-Id: NORTHCENTRALUS:20200327T132136Z:72dbbda7-f411-4f99-9820-fcb1cf010ff0
27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe:
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: {"id":"/subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.EventHub/namespaces/testehn/networkRuleSets
/default","name":"default","type":"Microsoft.EventHub/Namespaces/NetworkRuleSets","location":"East US 2","properties":{"defaultAction":"Deny","virtualNetworkRules":[],"ipRules":[]}}
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: [DEBUG] AzureRM Request:
2020-03-27T08:21:35.030-0500 [DEBUG] plugin.terraform-provider-azurerm_v2.3.0_x4.exe: POST /subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.EventHub/namespaces/testehn/AuthorizationRule
s/RootManageSharedAccessKey/listKeys?api-version=2017-04-01 HTTP/1.1
```
### Steps to Reproduce
terraform apply --auto-approve
1. `terraform apply`
### Important Factoids
This is an "normal" Azure Cloud subscription.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
non_process
|
standard azurerm eventhub namespace defaulting to deny firewall when created please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version terraform provider azurerm affected resource s azurerm eventhub namespace terraform configuration files hcl resource azurerm eventhub namespace eventhub namespace name testehn resource group name rgp xxx xx xxx xx location sku standard capacity auto inflate enabled true maximum throughput units output eventhub namespace object value azurerm eventhub namespace eventhub namespace debug output panic output expected behavior terraform should have created a standard eventhub namespace without firewall as the docs say network rulesets is optional and no default value for deny action is specified in the documentation actual behavior the eventhub is created with the firewall default action of deny which enables the ip firewall with no configuration the only way to work around it is to explicitly set default action to allow the output of the terraform resource clearly shows that it configured the default action to deny for some reason is this a bug in terraform is this the default behavior in azure and the terraform azurerm documentation needs to be updated to reflect eventhub namespace object auto inflate enabled true capacity default primary connection string endpoint sb testehn servicebus windows net sharedaccesskeyname rootmanagesharedaccesskey sharedaccesskey xxx default primary key xxx default secondary connection string endpoint sb testehn servicebus windows net sharedaccesskeyname rootmanagesharedaccesskey sharedaccesskey xx default secondary key xx id subscriptions xx resourcegroups xx providers microsoft eventhub namespaces testehn location maximum throughput units name testehn network rulesets default action deny ip rule virtual network rule resource group name xx sku standard this output in the debug seems interesting not that i know why it enabled it plugin terraform provider azurerm exe x ms routing request id northcentralus plugin terraform provider azurerm exe plugin terraform provider azurerm exe id subscriptions xxxxx resourcegroups xxxxx providers microsoft eventhub namespaces testehn networkrulesets default name default type microsoft eventhub namespaces networkrulesets location east us properties defaultaction deny virtualnetworkrules iprules plugin terraform provider azurerm exe azurerm request plugin terraform provider azurerm exe post subscriptions xxxxx resourcegroups xxxxx providers microsoft eventhub namespaces testehn authorizationrule s rootmanagesharedaccesskey listkeys api version http steps to reproduce terraform apply auto approve terraform apply important factoids this is an normal azure cloud subscription references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here such as vendor documentation
| 0
|
2,964
| 2,649,810,268
|
IssuesEvent
|
2015-03-15 09:40:41
|
sbt/sbt
|
https://api.github.com/repos/sbt/sbt
|
closed
|
SBT creating unnecessary empty directories
|
Needs Reproduction Test Case
|
When I compile my Play project, it creates directories like `target/scala-2.10/twirl/test` as empty directories. This is annoying because these empty directories get pulled into Eclipse and make it harder to find the meaningful directories. Can we avoid creating these directories until we know there's going to content in them?
|
1.0
|
SBT creating unnecessary empty directories - When I compile my Play project, it creates directories like `target/scala-2.10/twirl/test` as empty directories. This is annoying because these empty directories get pulled into Eclipse and make it harder to find the meaningful directories. Can we avoid creating these directories until we know there's going to content in them?
|
non_process
|
sbt creating unnecessary empty directories when i compile my play project it creates directories like target scala twirl test as empty directories this is annoying because these empty directories get pulled into eclipse and make it harder to find the meaningful directories can we avoid creating these directories until we know there s going to content in them
| 0
|
14,756
| 2,831,389,396
|
IssuesEvent
|
2015-05-24 15:54:16
|
nobodyguy/dslrdashboard
|
https://api.github.com/repos/nobodyguy/dslrdashboard
|
closed
|
D600 + Galaxy S3 no zooming on fingermoves on screen
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Simply connect D600 with OTG cable.
2. When touching the screen AF works perfectly, but no finger movement can
change the zoom of the lens.
3. You simply can not change the zoom of the lens. An icon for that would be
welcome.
What is the expected output? What do you see instead?
Nothing happens when fingers move toward each other or the other way around.
What version of the product are you using? On what operating system?
0.30.30
Please provide any additional information below.
Android 4.0.3
```
Original issue reported on code.google.com by `pjuha...@gmail.com` on 27 Dec 2013 at 9:07
|
1.0
|
D600 + Galaxy S3 no zooming on fingermoves on screen - ```
What steps will reproduce the problem?
1. Simply connect D600 with OTG cable.
2. When touching the screen AF works perfectly, but no finger movement can
change the zoom of the lens.
3. You simply can not change the zoom of the lens. An icon for that would be
welcome.
What is the expected output? What do you see instead?
Nothing happens when fingers move toward each other or the other way around.
What version of the product are you using? On what operating system?
0.30.30
Please provide any additional information below.
Android 4.0.3
```
Original issue reported on code.google.com by `pjuha...@gmail.com` on 27 Dec 2013 at 9:07
|
non_process
|
galaxy no zooming on fingermoves on screen what steps will reproduce the problem simply connect with otg cable when touching the screen af works perfectly but no finger movement can change the zoom of the lens you simply can not change the zoom of the lens an icon for that would be welcome what is the expected output what do you see instead nothing happens when fingers move toward each other or the other way around what version of the product are you using on what operating system please provide any additional information below android original issue reported on code google com by pjuha gmail com on dec at
| 0
|
59,477
| 24,792,225,160
|
IssuesEvent
|
2022-10-24 14:32:42
|
gradido/gradido
|
https://api.github.com/repos/gradido/gradido
|
closed
|
🐛 [Bug] Some user search queries fail in the admin interface
|
bug service: backend service: admin frontend
|
<!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🐛 Bugreport
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the bug is.-->
Some user search queries fail in the admin interface
The error is: `Cannot read properties of null (reading 'emailChecked')`

Can be tested with the query for `ulf` on production data
|
2.0
|
🐛 [Bug] Some user search queries fail in the admin interface - <!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🐛 Bugreport
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the bug is.-->
Some user search queries fail in the admin interface
The error is: `Cannot read properties of null (reading 'emailChecked')`

Can be tested with the query for `ulf` on production data
|
non_process
|
🐛 some user search queries fail in the admin interface 🐛 bugreport some user search queries fail in the admin interface the error is cannot read properties of null reading emailchecked can be tested with the query for ulf on production data
| 0
|
89,233
| 3,790,966,700
|
IssuesEvent
|
2016-03-21 23:47:33
|
himynameisdave/git-labelmaker
|
https://api.github.com/repos/himynameisdave/git-labelmaker
|
opened
|
Buttercup throwing error
|
Priority: High Status: Available Type: Bug
|
Not sure wtf is going on, but here's a log of the error:

Happened when trying to run it for the first time in like a month
|
1.0
|
Buttercup throwing error - Not sure wtf is going on, but here's a log of the error:

Happened when trying to run it for the first time in like a month
|
non_process
|
buttercup throwing error not sure wtf is going on but here s a log of the error happened when trying to run it for the first time in like a month
| 0
|
309,168
| 26,654,287,382
|
IssuesEvent
|
2023-01-25 15:46:03
|
nrwl/nx
|
https://api.github.com/repos/nrwl/nx
|
closed
|
Cypress E2E "Cannot read property 'baseUrl' of undefined" after upgrade to 15.6.1
|
type: bug scope: testing tools
|
### Current Behavior
When starting Cypress E2E tests for a React frontend, the process fails with:
```
TypeError: Cannot read property 'baseUrl' of undefined
at startDevServer_1 (/home/nick/dev/application-monorepo/node_modules/@nrwl/cypress/src/executors/cypress/cypress.impl.js:131:34)
at startDevServer_1.next (<anonymous>)
at resume (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:230:48)
at fulfill (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:232:35)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
```
This appears to be an error in the Cypress Executor accessing `output.info`
Cypress config is:
```ts
import { defineConfig } from 'cypress'
import { nxE2EPreset } from '@nrwl/cypress/plugins/cypress-preset'
const cypressJsonConfig: Parameters<typeof defineConfig>[number]['e2e'] = {
baseUrl: 'http://localhost:4202',
viewportHeight: 1080,
viewportWidth: 1920,
fileServerFolder: '.',
fixturesFolder: './src/fixtures',
video: true,
videosFolder: '../../../dist/cypress/apps/gss-toolkit/frontend-e2e/videos',
screenshotsFolder:
'../../../dist/cypress/apps/gss-toolkit/frontend-e2e/screenshots',
chromeWebSecurity: false,
defaultCommandTimeout: 60000,
requestTimeout: 60000,
retries: {
runMode: 2,
openMode: 0,
},
specPattern: 'src/e2e/**/*.cy.{js,jsx,ts,tsx}',
supportFile: 'src/support/e2e.ts',
}
export default defineConfig({
e2e: {
...nxE2EPreset(__dirname),
...cypressJsonConfig,
},
})
```
### Expected Behavior
Cypress starts
### Github Repo
_No response_
### Steps to Reproduce
1.
### Nx Report
```shell
Node : 14.21.2
OS : linux x64
yarn : 1.22.19
nx : 15.6.1
@nrwl/angular : Not Found
@nrwl/cypress : 15.6.1
@nrwl/detox : Not Found
@nrwl/devkit : 15.6.1
@nrwl/esbuild : Not Found
@nrwl/eslint-plugin-nx : 15.6.1
@nrwl/expo : Not Found
@nrwl/express : Not Found
@nrwl/jest : 15.6.1
@nrwl/js : 15.6.1
@nrwl/linter : 15.6.1
@nrwl/nest : 15.6.1
@nrwl/next : Not Found
@nrwl/node : 15.6.1
@nrwl/nx-cloud : Not Found
@nrwl/nx-plugin : 15.6.1
@nrwl/react : 15.6.1
@nrwl/react-native : Not Found
@nrwl/rollup : 15.6.1
@nrwl/schematics : Not Found
@nrwl/storybook : Not Found
@nrwl/web : 15.6.1
@nrwl/webpack : 15.6.1
@nrwl/workspace : 15.6.1
@nrwl/vite : Not Found
typescript : 4.8.4
---------------------------------------
Local workspace plugins:
---------------------------------------
Community plugins:
```
### Failure Logs
```shell
yarn run v1.22.19
$ yarn run nx e2e gss-toolkit-frontend-e2e --watch --browser=electron --verbose
$ nx e2e gss-toolkit-frontend-e2e --watch --browser=electron --verbose
> nx run gss-toolkit-frontend-e2e:e2e --watch --browser=electron
> nx run gss-toolkit-frontend:serve:development
> nx run gss-toolkit-api:serve
<i> [webpack-dev-server] Project is running at:
<i> [webpack-dev-server] Loopback: http://localhost:4202/, http://127.0.0.1:4202/
<i> [webpack-dev-server] 404s will fallback to '/index.html'
> NX Web Development Server is listening at http://localhost:4202/
asset main.js 1.11 MiB [emitted] [big] (name: main) 1 related asset
asset assets/.gitkeep 0 bytes [emitted] [from: apps/gss-toolkit/api/src/assets/.gitkeep] [copied]
cacheable modules 1.08 MiB
modules by path ./libs/common/ 134 KiB
modules by path ./libs/common/gss/src/ 110 KiB 47 modules
modules by path ./libs/common/gss-zod/src/ 24.5 KiB
./libs/common/gss-zod/src/index.ts 866 bytes [built] [code generated]
+ 13 modules
modules by path ./apps/gss-toolkit/api/src/ 972 KiB 44 modules
modules by path external "@aws-sdk/ 126 bytes
external "@aws-sdk/client-secrets-manager" 42 bytes [built] [code generated]
external "@aws-sdk/client-s3" 42 bytes [built] [code generated]
external "@aws-sdk/s3-request-presigner" 42 bytes [built] [code generated]
modules by path external "@trpc/ 84 bytes
external "@trpc/server/adapters/standalone" 42 bytes [built] [code generated]
external "@trpc/server" 42 bytes [built] [code generated]
+ 11 modules
webpack 5.75.0 compiled successfully in 3991 ms
Debugger listening on ws://localhost:9229/38dc5155-38e2-4598-8050-ceedc0cd9de4
For help, see: https://nodejs.org/en/docs/inspector
Type-checking in progress...
Listening on 3334
Playground serving on http://localhost:33342/
No errors found.
> NX Cannot read property 'baseUrl' of undefined
TypeError: Cannot read property 'baseUrl' of undefined
at startDevServer_1 (/home/nick/dev/application-monorepo/node_modules/@nrwl/cypress/src/executors/cypress/cypress.impl.js:131:34)
at startDevServer_1.next (<anonymous>)
at resume (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:230:48)
at fulfill (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:232:35)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
——————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————
> NX Running target e2e for project gss-toolkit-frontend-e2e failed
Failed tasks:
- gss-toolkit-frontend-e2e:e2e
Hint: run the command with --verbose for more details.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
### Additional Information
_No response_
|
1.0
|
Cypress E2E "Cannot read property 'baseUrl' of undefined" after upgrade to 15.6.1 - ### Current Behavior
When starting Cypress E2E tests for a React frontend, the process fails with:
```
TypeError: Cannot read property 'baseUrl' of undefined
at startDevServer_1 (/home/nick/dev/application-monorepo/node_modules/@nrwl/cypress/src/executors/cypress/cypress.impl.js:131:34)
at startDevServer_1.next (<anonymous>)
at resume (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:230:48)
at fulfill (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:232:35)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
```
This appears to be an error in the Cypress Executor accessing `output.info`
Cypress config is:
```ts
import { defineConfig } from 'cypress'
import { nxE2EPreset } from '@nrwl/cypress/plugins/cypress-preset'
const cypressJsonConfig: Parameters<typeof defineConfig>[number]['e2e'] = {
baseUrl: 'http://localhost:4202',
viewportHeight: 1080,
viewportWidth: 1920,
fileServerFolder: '.',
fixturesFolder: './src/fixtures',
video: true,
videosFolder: '../../../dist/cypress/apps/gss-toolkit/frontend-e2e/videos',
screenshotsFolder:
'../../../dist/cypress/apps/gss-toolkit/frontend-e2e/screenshots',
chromeWebSecurity: false,
defaultCommandTimeout: 60000,
requestTimeout: 60000,
retries: {
runMode: 2,
openMode: 0,
},
specPattern: 'src/e2e/**/*.cy.{js,jsx,ts,tsx}',
supportFile: 'src/support/e2e.ts',
}
export default defineConfig({
e2e: {
...nxE2EPreset(__dirname),
...cypressJsonConfig,
},
})
```
### Expected Behavior
Cypress starts
### Github Repo
_No response_
### Steps to Reproduce
1.
### Nx Report
```shell
Node : 14.21.2
OS : linux x64
yarn : 1.22.19
nx : 15.6.1
@nrwl/angular : Not Found
@nrwl/cypress : 15.6.1
@nrwl/detox : Not Found
@nrwl/devkit : 15.6.1
@nrwl/esbuild : Not Found
@nrwl/eslint-plugin-nx : 15.6.1
@nrwl/expo : Not Found
@nrwl/express : Not Found
@nrwl/jest : 15.6.1
@nrwl/js : 15.6.1
@nrwl/linter : 15.6.1
@nrwl/nest : 15.6.1
@nrwl/next : Not Found
@nrwl/node : 15.6.1
@nrwl/nx-cloud : Not Found
@nrwl/nx-plugin : 15.6.1
@nrwl/react : 15.6.1
@nrwl/react-native : Not Found
@nrwl/rollup : 15.6.1
@nrwl/schematics : Not Found
@nrwl/storybook : Not Found
@nrwl/web : 15.6.1
@nrwl/webpack : 15.6.1
@nrwl/workspace : 15.6.1
@nrwl/vite : Not Found
typescript : 4.8.4
---------------------------------------
Local workspace plugins:
---------------------------------------
Community plugins:
```
### Failure Logs
```shell
yarn run v1.22.19
$ yarn run nx e2e gss-toolkit-frontend-e2e --watch --browser=electron --verbose
$ nx e2e gss-toolkit-frontend-e2e --watch --browser=electron --verbose
> nx run gss-toolkit-frontend-e2e:e2e --watch --browser=electron
> nx run gss-toolkit-frontend:serve:development
> nx run gss-toolkit-api:serve
<i> [webpack-dev-server] Project is running at:
<i> [webpack-dev-server] Loopback: http://localhost:4202/, http://127.0.0.1:4202/
<i> [webpack-dev-server] 404s will fallback to '/index.html'
> NX Web Development Server is listening at http://localhost:4202/
asset main.js 1.11 MiB [emitted] [big] (name: main) 1 related asset
asset assets/.gitkeep 0 bytes [emitted] [from: apps/gss-toolkit/api/src/assets/.gitkeep] [copied]
cacheable modules 1.08 MiB
modules by path ./libs/common/ 134 KiB
modules by path ./libs/common/gss/src/ 110 KiB 47 modules
modules by path ./libs/common/gss-zod/src/ 24.5 KiB
./libs/common/gss-zod/src/index.ts 866 bytes [built] [code generated]
+ 13 modules
modules by path ./apps/gss-toolkit/api/src/ 972 KiB 44 modules
modules by path external "@aws-sdk/ 126 bytes
external "@aws-sdk/client-secrets-manager" 42 bytes [built] [code generated]
external "@aws-sdk/client-s3" 42 bytes [built] [code generated]
external "@aws-sdk/s3-request-presigner" 42 bytes [built] [code generated]
modules by path external "@trpc/ 84 bytes
external "@trpc/server/adapters/standalone" 42 bytes [built] [code generated]
external "@trpc/server" 42 bytes [built] [code generated]
+ 11 modules
webpack 5.75.0 compiled successfully in 3991 ms
Debugger listening on ws://localhost:9229/38dc5155-38e2-4598-8050-ceedc0cd9de4
For help, see: https://nodejs.org/en/docs/inspector
Type-checking in progress...
Listening on 3334
Playground serving on http://localhost:33342/
No errors found.
> NX Cannot read property 'baseUrl' of undefined
TypeError: Cannot read property 'baseUrl' of undefined
at startDevServer_1 (/home/nick/dev/application-monorepo/node_modules/@nrwl/cypress/src/executors/cypress/cypress.impl.js:131:34)
at startDevServer_1.next (<anonymous>)
at resume (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:230:48)
at fulfill (/home/nick/dev/application-monorepo/node_modules/tslib/tslib.js:232:35)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
——————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————
> NX Running target e2e for project gss-toolkit-frontend-e2e failed
Failed tasks:
- gss-toolkit-frontend-e2e:e2e
Hint: run the command with --verbose for more details.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
### Additional Information
_No response_
|
non_process
|
cypress cannot read property baseurl of undefined after upgrade to current behavior when starting cypress tests for a react frontend the process fails with typeerror cannot read property baseurl of undefined at startdevserver home nick dev application monorepo node modules nrwl cypress src executors cypress cypress impl js at startdevserver next at resume home nick dev application monorepo node modules tslib tslib js at fulfill home nick dev application monorepo node modules tslib tslib js at processticksandrejections internal process task queues js this appears to be an error in the cypress executor accessing output info cypress config is ts import defineconfig from cypress import from nrwl cypress plugins cypress preset const cypressjsonconfig parameters baseurl viewportheight viewportwidth fileserverfolder fixturesfolder src fixtures video true videosfolder dist cypress apps gss toolkit frontend videos screenshotsfolder dist cypress apps gss toolkit frontend screenshots chromewebsecurity false defaultcommandtimeout requesttimeout retries runmode openmode specpattern src cy js jsx ts tsx supportfile src support ts export default defineconfig dirname cypressjsonconfig expected behavior cypress starts github repo no response steps to reproduce nx report shell node os linux yarn nx nrwl angular not found nrwl cypress nrwl detox not found nrwl devkit nrwl esbuild not found nrwl eslint plugin nx nrwl expo not found nrwl express not found nrwl jest nrwl js nrwl linter nrwl nest nrwl next not found nrwl node nrwl nx cloud not found nrwl nx plugin nrwl react nrwl react native not found nrwl rollup nrwl schematics not found nrwl storybook not found nrwl web nrwl webpack nrwl workspace nrwl vite not found typescript local workspace plugins community plugins failure logs shell yarn run yarn run nx gss toolkit frontend watch browser electron verbose nx gss toolkit frontend watch browser electron verbose nx run gss toolkit frontend watch browser electron nx run gss toolkit frontend serve development nx run gss toolkit api serve project is running at loopback will fallback to index html nx web development server is listening at asset main js mib name main related asset asset assets gitkeep bytes cacheable modules mib modules by path libs common kib modules by path libs common gss src kib modules modules by path libs common gss zod src kib libs common gss zod src index ts bytes modules modules by path apps gss toolkit api src kib modules modules by path external aws sdk bytes external aws sdk client secrets manager bytes external aws sdk client bytes external aws sdk request presigner bytes modules by path external trpc bytes external trpc server adapters standalone bytes external trpc server bytes modules webpack compiled successfully in ms debugger listening on ws localhost for help see type checking in progress listening on playground serving on no errors found nx cannot read property baseurl of undefined typeerror cannot read property baseurl of undefined at startdevserver home nick dev application monorepo node modules nrwl cypress src executors cypress cypress impl js at startdevserver next at resume home nick dev application monorepo node modules tslib tslib js at fulfill home nick dev application monorepo node modules tslib tslib js at processticksandrejections internal process task queues js —————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— nx running target for project gss toolkit frontend failed failed tasks gss toolkit frontend hint run the command with verbose for more details error command failed with exit code info visit for documentation about this command error command failed with exit code info visit for documentation about this command additional information no response
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.