Dataset columns (name, dtype, value range or class count):

| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 7 – 112 |
| repo_url | stringlengths | 36 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 744 |
| labels | stringlengths | 4 – 574 |
| body | stringlengths | 9 – 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 – 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 188k |
| binary_label | int64 | 0 – 1 |
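The schema above can be sanity-checked in code. A minimal sketch, assuming plain-dict records; the `validate` helper and its rules are illustrative, derived only from the stated dtypes and lengths, not from any official dataset API:

```python
# Validate one record against the column schema listed above.
# Only three representative checks are shown.

def validate(rec: dict) -> list[str]:
    """Return a list of schema violations (empty means valid)."""
    errors = []
    if rec.get("binary_label") not in (0, 1):
        errors.append("binary_label must be int64 in {0, 1}")
    if len(rec.get("created_at", "")) != 19:  # 'YYYY-MM-DD HH:MM:SS'
        errors.append("created_at must be a 19-character timestamp")
    if rec.get("type") != "IssuesEvent":  # the schema lists one class
        errors.append("type must be 'IssuesEvent'")
    return errors

sample = {
    "type": "IssuesEvent",
    "created_at": "2023-03-15 02:00:07",
    "binary_label": 0,  # stand-in; the sample row's value is not shown
}
print(validate(sample))  # → []
```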
Sample row:

- Unnamed: 0: 20,894
- id: 27,725,765,621
- type: IssuesEvent
- created_at: 2023-03-15 02:00:07
- repo: lizhihao6/get-daily-arxiv-noti
- repo_url: https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
- action: opened
- title: New submissions for Wed, 15 Mar 23
- labels: event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
- body:
## Keyword: events
### Implicit and Explicit Commonsense for Multi-sentence Video Captioning
- **Authors:** Shih-Han Chou, James J. Little, Leonid Sigal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07545
- **Pdf link:** https://arxiv.org/pdf/2303.07545
- **Abstract**
Existing dense or paragraph video captioning approaches rely on holistic representations of videos, possibly coupled with learned object/action representations, to condition hierarchical language decoders. However, they fundamentally lack the commonsense knowledge of the world required to reason about progression of events, causality, and even function of certain objects within a scene. To address this limitation we propose a novel video captioning Transformer-based model, that takes into account both implicit (visuo-lingual and purely linguistic) and explicit (knowledge-base) commonsense knowledge. We show that these forms of knowledge, in isolation and in combination, enhance the quality of produced captions. Further, inspired by imitation learning, we propose a new task of instruction generation, where the goal is to produce a set of linguistic instructions from a video demonstration of its performance. We formalize the task using ALFRED dataset [52] generated using an AI2-THOR environment. While instruction generation is conceptually similar to paragraph captioning, it differs in the fact that it exhibits stronger object persistence, as well as spatially-aware and causal sentence structure. We show that our commonsense knowledge enhanced approach produces significant improvements on this task (up to 57% in METEOR and 8.5% in CIDEr), as well as the state-of-the-art result on more traditional video captioning in the ActivityNet Captions dataset [29].
### V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception
- **Authors:** Runsheng Xu, Xin Xia, Jinlong Li, Hanzhao Li, Shuo Zhang, Zhengzhong Tu, Zonglin Meng, Hao Xiang, Xiaoyu Dong, Rui Song, Hongkai Yu, Bolei Zhou, Jiaqi Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07601
- **Pdf link:** https://arxiv.org/pdf/2303.07601
- **Abstract**
Modern perception systems of autonomous vehicles are known to be sensitive to occlusions and lack the capability of long perceiving range. It has been one of the key bottlenecks that prevents Level 5 autonomy. Recent research has demonstrated that the Vehicle-to-Vehicle (V2V) cooperative perception system has great potential to revolutionize the autonomous driving industry. However, the lack of a real-world dataset hinders the progress of this field. To facilitate the development of cooperative perception, we present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception. The data is collected by two vehicles equipped with multi-modal sensors driving together through diverse scenarios. Our V2V4Real dataset covers a driving area of 410 km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HDMaps that cover all the driving routes. V2V4Real introduces three perception tasks, including cooperative 3D object detection, cooperative 3D object tracking, and Sim2Real domain adaptation for cooperative perception. We provide comprehensive benchmarks of recent cooperative perception algorithms on three tasks. The V2V4Real dataset and codebase can be found at https://github.com/ucla-mobility/V2V4Real.
### Training Robust Spiking Neural Networks with ViewPoint Transform and SpatioTemporal Stretching
- **Authors:** Haibo Shen, Juyu Xiao, Yihao Luo, Xiang Cao, Liangqi Zhang, Tianjiang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07609
- **Pdf link:** https://arxiv.org/pdf/2303.07609
- **Abstract**
Neuromorphic vision sensors (event cameras) simulate biological visual perception systems and have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range. Since both events and spikes are modeled from neural signals, event cameras are inherently suitable for spiking neural networks (SNNs), which are considered promising models for artificial intelligence (AI) and theoretical neuroscience. However, the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks. In this paper, we propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS). It improves the robustness of SNNs by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints. Furthermore, we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations. Notably, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4\% on the DVS-CIFAR10 dataset.
## Keyword: event camera
### Training Robust Spiking Neural Networks with ViewPoint Transform and SpatioTemporal Stretching
- **Authors:** Haibo Shen, Juyu Xiao, Yihao Luo, Xiang Cao, Liangqi Zhang, Tianjiang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07609
- **Pdf link:** https://arxiv.org/pdf/2303.07609
- **Abstract**
Neuromorphic vision sensors (event cameras) simulate biological visual perception systems and have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range. Since both events and spikes are modeled from neural signals, event cameras are inherently suitable for spiking neural networks (SNNs), which are considered promising models for artificial intelligence (AI) and theoretical neuroscience. However, the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks. In this paper, we propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS). It improves the robustness of SNNs by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints. Furthermore, we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations. Notably, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4\% on the DVS-CIFAR10 dataset.
### BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation
- **Authors:** Yijin Li, Zhaoyang Huang, Shuo Chen, Xiaoyu Shi, Hongsheng Li, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07716
- **Pdf link:** https://arxiv.org/pdf/2303.07716
- **Abstract**
Event cameras provide high temporal precision, low data rates, and high dynamic range visual perception, which are well-suited for optical flow estimation. While data-driven optical flow estimation has obtained great success in RGB cameras, its generalization performance is seriously hindered in event cameras mainly due to the limited and biased training data. In this paper, we present a novel simulator, BlinkSim, for the fast generation of large-scale data for event-based optical flow. BlinkSim consists of a configurable rendering engine and a flexible engine for event data simulation. By leveraging the wealth of current 3D assets, the rendering engine enables us to automatically build up thousands of scenes with different objects, textures, and motion patterns and render very high-frequency images for realistic event data simulation. Based on BlinkSim, we construct a large training dataset and evaluation benchmark BlinkFlow that contains sufficient, diversiform, and challenging event data with optical flow ground truth. Experiments show that BlinkFlow improves the generalization performance of state-of-the-art methods by more than 40% on average and up to 90%. Moreover, we further propose an Event optical Flow transFormer (E-FlowFormer) architecture. Powered by our BlinkFlow, E-FlowFormer outperforms the SOTA methods by up to 91% on MVSEC dataset and 14% on DSEC dataset and presents the best generalization performance.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### BoundaryCAM: A Boundary-based Refinement Framework for Weakly Supervised Semantic Segmentation of Medical Images
- **Authors:** Bharath Srinivas Prabakaran, Erik Ostrowski, Muhammad Shafique
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.07853
- **Pdf link:** https://arxiv.org/pdf/2303.07853
- **Abstract**
Weakly Supervised Semantic Segmentation (WSSS) with only image-level supervision is a promising approach to deal with the need for Segmentation networks, especially for generating a large number of pixel-wise masks in a given dataset. However, most state-of-the-art image-level WSSS techniques lack an understanding of the geometric features embedded in the images since the network cannot derive any object boundary information from just image-level labels. We define a boundary here as the line separating an object and its background, or two different objects. To address this drawback, we propose our novel BoundaryCAM framework, which deploys state-of-the-art class activation maps combined with various post-processing techniques in order to achieve fine-grained higher-accuracy segmentation masks. To achieve this, we investigate a state-of-the-art unsupervised semantic segmentation network that can be used to construct a boundary map, which enables BoundaryCAM to predict object locations with sharper boundaries. By applying our method to WSSS predictions, we were able to achieve up to 10% improvements even to the benefit of the current state-of-the-art WSSS methods for medical imaging. The framework is open-source and accessible online at https://github.com/bharathprabakaran/BoundaryCAM.
## Keyword: ISP
### LoG-CAN: local-global Class-aware Network for semantic segmentation of remote sensing images
- **Authors:** Xiaowen Ma, Mengting Ma, Chenlu Hu, Zhiyuan Song, Ziyan Zhao, Tian Feng, Wei Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07747
- **Pdf link:** https://arxiv.org/pdf/2303.07747
- **Abstract**
Remote sensing images are known for having complex backgrounds, high intra-class variance and large variation of scales, which bring challenges to semantic segmentation. We present LoG-CAN, a multi-scale semantic segmentation network with a global class-aware (GCA) module and local class-aware (LCA) modules for remote sensing images. Specifically, the GCA module captures the global representations of class-wise context modeling to circumvent background interference; the LCA modules generate local class representations as intermediate aware elements, indirectly associating pixels with global class representations to reduce variance within a class; and a multi-scale architecture with GCA and LCA modules yields effective segmentation of objects at different scales via cascaded refinement and fusion of features. Through the evaluation on the ISPRS Vaihingen dataset and the ISPRS Potsdam dataset, experimental results indicate that LoG-CAN outperforms the state-of-the-art methods for general semantic segmentation, while significantly reducing network parameters and computation. Code is available at~\href{https://github.com/xwmaxwma/rssegmentation}{https://github.com/xwmaxwma/rssegmentation}.
### Generation-Guided Multi-Level Unified Network for Video Grounding
- **Authors:** Xing Cheng, Xiangyu Wu, Dong Shen, Hezheng Lin, Fan Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.07748
- **Pdf link:** https://arxiv.org/pdf/2303.07748
- **Abstract**
Video grounding aims to locate the timestamps best matching the query description within an untrimmed video. Prevalent methods can be divided into moment-level and clip-level frameworks. Moment-level approaches directly predict the probability of each transient moment to be the boundary in a global perspective, and they usually perform better in coarse grounding. On the other hand, clip-level ones aggregate the moments in different time windows into proposals and then deduce the most similar one, leading to its advantage in fine-grained grounding. In this paper, we propose a multi-level unified framework to enhance performance by leveraging the merits of both moment-level and clip-level methods. Moreover, a novel generation-guided paradigm in both levels is adopted. It introduces a multi-modal generator to produce the implicit boundary feature and clip feature, later regarded as queries to calculate the boundary scores by a discriminator. The generation-guided solution enhances video grounding from a two-unique-modals' match task to a cross-modal attention task, which steps out of the previous framework and obtains notable gains. The proposed Generation-guided Multi-level Unified network (GMU) surpasses previous methods and reaches State-Of-The-Art on various benchmarks with disparate features, e.g., Charades-STA, ActivityNet captions.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav
- **Authors:** Karmesh Yadav, Arjun Majumdar, Ram Ramrakhya, Naoki Yokoyama, Alexei Baevski, Zsolt Kira, Oleksandr Maksymets, Dhruv Batra
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.07798
- **Pdf link:** https://arxiv.org/pdf/2303.07798
- **Abstract**
We present a single neural network architecture composed of task-agnostic components (ViTs, convolutions, and LSTMs) that achieves state-of-art results on both the ImageNav ("go to location in <this picture>") and ObjectNav ("find a chair") tasks without any task-specific modules like object detection, segmentation, mapping, or planning modules. Such general-purpose methods offer advantages of simplicity in design, positive scaling with available compute, and versatile applicability to multiple tasks. Our work builds upon the recent success of self-supervised learning (SSL) for pre-training vision transformers (ViT). However, while the training recipes for convolutional networks are mature and robust, the recipes for ViTs are contingent and brittle, and in the case of ViTs for visual navigation, yet to be fully discovered. Specifically, we find that vanilla ViTs do not outperform ResNets on visual navigation. We propose the use of a compression layer operating over ViT patch representations to preserve spatial information along with policy training improvements. These improvements allow us to demonstrate positive scaling laws for the first time in visual navigation tasks. Consequently, our model advances state-of-the-art performance on ImageNav from 54.2% to 82.0% success and performs competitively against concurrent state-of-art on ObjectNav with success rate of 64.0% vs. 65.0%. Overall, this work does not present a fundamentally new approach, but rather recommendations for training a general-purpose architecture that achieves state-of-art performance today and could serve as a strong baseline for future methods.
### Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images
- **Authors:** Zicheng Zhang, Wei Sun, Tao Wang, Wei Lu, Quan Zhou, Jun he, Qiyuan Wang, Xiongkuo Min, Guangtao Zhai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2303.08050
- **Pdf link:** https://arxiv.org/pdf/2303.08050
- **Abstract**
Computer graphics images (CGIs) are artificially generated by means of computer programs and are widely perceived under various scenarios, such as games, streaming media, etc. In practice, the quality of CGIs consistently suffers from poor rendering during the production and inevitable compression artifacts during the transmission of multimedia applications. However, few works have been dedicated to dealing with the challenge of computer graphics image quality assessment (CGIQA). Most image quality assessment (IQA) metrics are developed for natural scene images (NSIs) and validated on databases consisting of NSIs with synthetic distortions, which are not suitable for in-the-wild CGIs. To bridge the gap between evaluating the quality of NSIs and CGIs, we construct a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k) and carry out the subjective experiment in a well-controlled laboratory environment to obtain the accurate perceptual ratings of the CGIs. Then, we propose an effective deep learning-based no-reference (NR) IQA model by utilizing a multi-stage feature fusion strategy and a multi-stage channel attention mechanism. The major motivation of the proposed model is to make full use of inter-channel information from low-level to high-level since CGIs have apparent patterns as well as rich interactive semantic content. Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database and other CGIQA-related databases. The database along with the code will be released to facilitate further research.
## Keyword: RAW
### Deep Learning Approach for Classifying the Aggressive Comments on Social Media: Machine Translated Data Vs Real Life Data
- **Authors:** Mst Shapna Akter, Hossain Shahriar, Nova Ahmed, Alfredo Cuzzocrea
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07484
- **Pdf link:** https://arxiv.org/pdf/2303.07484
- **Abstract**
Aggressive comments on social media negatively impact human life. Such offensive contents are responsible for depression and suicide-related activities. Since online social networking is increasing day by day, the hate content is also increasing. Several investigations have been done on the domain of cyberbullying, cyberaggression, hate speech, etc. The majority of the inquiry has been done in the English language. Some languages (Hindi and Bangla) still lack proper investigations due to the lack of a dataset. This paper particularly worked on the Hindi, Bangla, and English datasets to detect aggressive comments and has shown a novel way of generating machine-translated data to resolve data unavailability issues. A fully machine-translated English dataset has been analyzed with models such as the Long Short-Term Memory model (LSTM), Bidirectional Long Short-Term Memory model (BiLSTM), LSTM-Autoencoder, word2vec, Bidirectional Encoder Representations from Transformers (BERT), and generative pre-trained transformer (GPT-2) to make an observation on how the models perform on a machine-translated noisy dataset. We have compared the performance of using the noisy data with two more datasets such as raw data, which does not contain any noise, and semi-noisy data, which contains a certain amount of noisy data. We have classified both the raw and semi-noisy data using the aforementioned models. To evaluate the performance of the models, we have used evaluation metrics such as F1-score, accuracy, precision, and recall. We have achieved the highest accuracy on raw data using the GPT-2 model, semi-noisy data using the BERT model, and fully machine-translated data using the BERT model. Since many languages do not have proper data availability, our approach will help researchers create machine-translated datasets for several analysis purposes.
### PlanarTrack: A Large-scale Challenging Benchmark for Planar Object Tracking
- **Authors:** Xinran Liu, Xiaoqiong Liu, Ziruo Yi, Xin Zhou, Thanh Le, Libo Zhang, Yan Huang, Qing Yang, Heng Fan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07625
- **Pdf link:** https://arxiv.org/pdf/2303.07625
- **Abstract**
Planar object tracking is a critical computer vision problem and has drawn increasing interest owing to its key roles in robotics, augmented reality, etc. Despite rapid progress, its further development, especially in the deep learning era, is largely hindered due to the lack of large-scale challenging benchmarks. Addressing this, we introduce PlanarTrack, a large-scale challenging planar tracking benchmark. Specifically, PlanarTrack consists of 1,000 videos with more than 490K images. All these videos are collected in complex unconstrained scenarios from the wild, which makes PlanarTrack, compared with existing benchmarks, more challenging but realistic for real-world applications. To ensure the high-quality annotation, each frame in PlanarTrack is manually labeled using four corners with multiple-round careful inspection and refinement. To our best knowledge, PlanarTrack, to date, is the largest and most challenging dataset dedicated to planar object tracking. In order to analyze the proposed PlanarTrack, we evaluate 10 planar trackers and conduct comprehensive comparisons and in-depth analysis. Our results, not surprisingly, demonstrate that current top-performing planar trackers degenerate significantly on the challenging PlanarTrack and more efforts are needed to improve planar tracking in the future. In addition, we further derive a variant named PlanarTrack$_{\mathbf{BB}}$ for generic object tracking from PlanarTrack. Our evaluation of 10 excellent generic trackers on PlanarTrack$_{\mathrm{BB}}$ manifests that, surprisingly, PlanarTrack$_{\mathrm{BB}}$ is even more challenging than several popular generic tracking benchmarks and more attention should be paid to handle such planar objects, though they are rigid. All benchmarks and evaluations will be released at the project webpage.
### Data-Free Sketch-Based Image Retrieval
- **Authors:** Abhra Chaudhuri, Ayan Kumar Bhunia, Yi-Zhe Song, Anjan Dutta
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07775
- **Pdf link:** https://arxiv.org/pdf/2303.07775
- **Abstract**
Rising concerns about privacy and anonymity preservation of deep learning models have facilitated research in data-free learning (DFL). For the first time, we identify that for data-scarce tasks like Sketch-Based Image Retrieval (SBIR), where the difficulty in acquiring paired photos and hand-drawn sketches limits data-dependent cross-modal learning algorithms, DFL can prove to be a much more practical paradigm. We thus propose Data-Free (DF)-SBIR, where, unlike existing DFL problems, pre-trained, single-modality classification models have to be leveraged to learn a cross-modal metric-space for retrieval without access to any training data. The widespread availability of pre-trained classification models, along with the difficulty in acquiring paired photo-sketch datasets for SBIR justify the practicality of this setting. We present a methodology for DF-SBIR, which can leverage knowledge from models independently trained to perform classification on photos and sketches. We evaluate our model on the Sketchy, TU-Berlin, and QuickDraw benchmarks, designing a variety of baselines based on state-of-the-art DFL literature, and observe that our method surpasses all of them by significant margins. Our method also achieves mAPs competitive with data-dependent approaches, all the while requiring no training data. Implementation is available at \url{https://github.com/abhrac/data-free-sbir}.
### Quaternion Orthogonal Transformer for Facial Expression Recognition in the Wild
- **Authors:** Yu Zhou, Liyuan Guo, Lianghai Jin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07831
- **Pdf link:** https://arxiv.org/pdf/2303.07831
- **Abstract**
Facial expression recognition (FER) is a challenging topic in artificial intelligence. Recently, many researchers have attempted to introduce Vision Transformer (ViT) to the FER task. However, ViT cannot fully utilize emotional features extracted from raw images and requires a lot of computing resources. To overcome these problems, we propose a quaternion orthogonal transformer (QOT) for FER. Firstly, to reduce redundancy among features extracted from pre-trained ResNet-50, we use the orthogonal loss to decompose and compact these features into three sets of orthogonal sub-features. Secondly, three orthogonal sub-features are integrated into a quaternion matrix, which maintains the correlations between different orthogonal components. Finally, we develop a quaternion vision transformer (Q-ViT) for feature classification. The Q-ViT adopts quaternion operations instead of the original operations in ViT, which improves the final accuracies with fewer parameters. Experimental results on three in-the-wild FER datasets show that the proposed QOT outperforms several state-of-the-art models and reduces the computations.
### BoundaryCAM: A Boundary-based Refinement Framework for Weakly Supervised Semantic Segmentation of Medical Images
- **Authors:** Bharath Srinivas Prabakaran, Erik Ostrowski, Muhammad Shafique
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.07853
- **Pdf link:** https://arxiv.org/pdf/2303.07853
- **Abstract**
Weakly Supervised Semantic Segmentation (WSSS) with only image-level supervision is a promising approach to deal with the need for Segmentation networks, especially for generating a large number of pixel-wise masks in a given dataset. However, most state-of-the-art image-level WSSS techniques lack an understanding of the geometric features embedded in the images since the network cannot derive any object boundary information from just image-level labels. We define a boundary here as the line separating an object and its background, or two different objects. To address this drawback, we propose our novel BoundaryCAM framework, which deploys state-of-the-art class activation maps combined with various post-processing techniques in order to achieve fine-grained higher-accuracy segmentation masks. To achieve this, we investigate a state-of-the-art unsupervised semantic segmentation network that can be used to construct a boundary map, which enables BoundaryCAM to predict object locations with sharper boundaries. By applying our method to WSSS predictions, we were able to achieve up to 10% improvements even to the benefit of the current state-of-the-art WSSS methods for medical imaging. The framework is open-source and accessible online at https://github.com/bharathprabakaran/BoundaryCAM.
### You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos
- **Authors:** Xiang Fang, Daizong Liu, Pan Zhou, Guoshun Nan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.07863
- **Pdf link:** https://arxiv.org/pdf/2303.07863
- **Abstract**
Given an untrimmed video, temporal sentence grounding (TSG) aims to locate a target moment semantically according to a sentence query. Although previous respectable works have achieved decent success, they only focus on high-level visual features extracted from the consecutive decoded frames and fail to handle the compressed videos for query modelling, suffering from insufficient representation capability and significant computational complexity during training and testing. In this paper, we pose a new setting, compressed-domain TSG, which directly utilizes compressed videos rather than fully-decompressed frames as the visual input. To handle the raw video bit-stream input, we propose a novel Three-branch Compressed-domain Spatial-temporal Fusion (TCSF) framework, which extracts and aggregates three kinds of low-level visual features (I-frame, motion vector and residual features) for effective and efficient grounding. Particularly, instead of encoding the whole decoded frames like previous works, we capture the appearance representation by only learning the I-frame feature to reduce delay or latency. Besides, we explore the motion information not only by learning the motion vector feature, but also by exploring the relations of neighboring frames via the residual feature. In this way, a three-branch spatial-temporal attention layer with an adaptive motion-appearance fusion module is further designed to extract and aggregate both appearance and motion information for the final grounding. Experiments on three challenging datasets show that our TCSF achieves better performance than other state-of-the-art methods with lower complexity.
### Interpretable ODE-style Generative Diffusion Model via Force Field Construction
- **Authors:** Weiyang Jin, Yongpei Zhu, Yuxi Peng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.08063
- **Pdf link:** https://arxiv.org/pdf/2303.08063
- **Abstract**
For a considerable time, researchers have focused on developing a method that establishes a deep connection between the generative diffusion model and mathematical physics. Despite previous efforts, progress has been limited to the pursuit of a single specialized method. In order to advance the interpretability of diffusion models and explore new research directions, it is essential to establish a unified ODE-style generative diffusion model. Such a model should draw inspiration from physical models and possess a clear geometric meaning. This paper aims to identify various physical models that are suitable for constructing ODE-style generative diffusion models accurately from a mathematical perspective. We then summarize these models into a unified method. Additionally, we perform a case study where we use the theoretical model identified by our method to develop a range of new diffusion model methods, and conduct experiments. Our experiments on CIFAR-10 demonstrate the effectiveness of our approach. We have constructed a computational framework that attains highly proficient results with regards to image generation speed, alongside an additional model that demonstrates exceptional performance in both Inception score and FID score. These results underscore the significance of our method in advancing the field of diffusion models.
## Keyword: raw image
### Quaternion Orthogonal Transformer for Facial Expression Recognition in the Wild
- **Authors:** Yu Zhou, Liyuan Guo, Lianghai Jin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07831
- **Pdf link:** https://arxiv.org/pdf/2303.07831
- **Abstract**
Facial expression recognition (FER) is a challenging topic in artificial intelligence. Recently, many researchers have attempted to introduce Vision Transformer (ViT) to the FER task. However, ViT cannot fully utilize emotional features extracted from raw images and requires a lot of computing resources. To overcome these problems, we propose a quaternion orthogonal transformer (QOT) for FER. Firstly, to reduce redundancy among features extracted from pre-trained ResNet-50, we use the orthogonal loss to decompose and compact these features into three sets of orthogonal sub-features. Secondly, three orthogonal sub-features are integrated into a quaternion matrix, which maintains the correlations between different orthogonal components. Finally, we develop a quaternion vision transformer (Q-ViT) for feature classification. The Q-ViT adopts quaternion operations instead of the original operations in ViT, which improves the final accuracies with fewer parameters. Experimental results on three in-the-wild FER datasets show that the proposed QOT outperforms several state-of-the-art models and reduces the computations.
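A minimal sketch of the kind of orthogonal loss the QOT abstract describes (illustrative assumption only — the paper's exact formulation may differ): penalize the squared pairwise inner products between the three sub-feature sets so that, when driven to zero, the sets become mutually orthogonal.

```python
import numpy as np

def orthogonal_loss(f1, f2, f3):
    """Toy orthogonal loss over three sub-feature sets (rows = samples).

    Sums the mean squared pairwise inner products; zero loss means the
    three sub-feature sets are mutually orthogonal, matching the
    decomposition step sketched in the abstract (names are assumed).
    """
    loss = 0.0
    for a, b in [(f1, f2), (f1, f3), (f2, f3)]:
        loss += float(((a * b).sum(axis=1) ** 2).mean())
    return loss

# Orthogonal sub-features give zero loss; identical ones do not.
e1, e2, e3 = np.eye(3)
print(orthogonal_loss(e1[None], e2[None], e3[None]))      # 0.0
print(orthogonal_loss(e1[None], e1[None], e1[None]) > 0)  # True
```

In the paper this penalty would be applied to features extracted from the pre-trained ResNet-50 before packing the three orthogonal components into a quaternion matrix.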
|
2.0
|
New submissions for Wed, 15 Mar 23 - ## Keyword: events
### Implicit and Explicit Commonsense for Multi-sentence Video Captioning
- **Authors:** Shih-Han Chou, James J. Little, Leonid Sigal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07545
- **Pdf link:** https://arxiv.org/pdf/2303.07545
- **Abstract**
Existing dense or paragraph video captioning approaches rely on holistic representations of videos, possibly coupled with learned object/action representations, to condition hierarchical language decoders. However, they fundamentally lack the commonsense knowledge of the world required to reason about progression of events, causality, and even function of certain objects within a scene. To address this limitation we propose a novel video captioning Transformer-based model, that takes into account both implicit (visuo-lingual and purely linguistic) and explicit (knowledge-base) commonsense knowledge. We show that these forms of knowledge, in isolation and in combination, enhance the quality of produced captions. Further, inspired by imitation learning, we propose a new task of instruction generation, where the goal is to produce a set of linguistic instructions from a video demonstration of its performance. We formalize the task using ALFRED dataset [52] generated using an AI2-THOR environment. While instruction generation is conceptually similar to paragraph captioning, it differs in the fact that it exhibits stronger object persistence, as well as spatially-aware and causal sentence structure. We show that our commonsense knowledge enhanced approach produces significant improvements on this task (up to 57% in METEOR and 8.5% in CIDEr), as well as the state-of-the-art result on more traditional video captioning in the ActivityNet Captions dataset [29].
### V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception
- **Authors:** Runsheng Xu, Xin Xia, Jinlong Li, Hanzhao Li, Shuo Zhang, Zhengzhong Tu, Zonglin Meng, Hao Xiang, Xiaoyu Dong, Rui Song, Hongkai Yu, Bolei Zhou, Jiaqi Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07601
- **Pdf link:** https://arxiv.org/pdf/2303.07601
- **Abstract**
Modern perception systems of autonomous vehicles are known to be sensitive to occlusions and lack the capability of long perceiving range. It has been one of the key bottlenecks that prevents Level 5 autonomy. Recent research has demonstrated that the Vehicle-to-Vehicle (V2V) cooperative perception system has great potential to revolutionize the autonomous driving industry. However, the lack of a real-world dataset hinders the progress of this field. To facilitate the development of cooperative perception, we present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception. The data is collected by two vehicles equipped with multi-modal sensors driving together through diverse scenarios. Our V2V4Real dataset covers a driving area of 410 km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HDMaps that cover all the driving routes. V2V4Real introduces three perception tasks, including cooperative 3D object detection, cooperative 3D object tracking, and Sim2Real domain adaptation for cooperative perception. We provide comprehensive benchmarks of recent cooperative perception algorithms on three tasks. The V2V4Real dataset and codebase can be found at https://github.com/ucla-mobility/V2V4Real.
### Training Robust Spiking Neural Networks with ViewPoint Transform and SpatioTemporal Stretching
- **Authors:** Haibo Shen, Juyu Xiao, Yihao Luo, Xiang Cao, Liangqi Zhang, Tianjiang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07609
- **Pdf link:** https://arxiv.org/pdf/2303.07609
- **Abstract**
Neuromorphic vision sensors (event cameras) simulate biological visual perception systems and have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range. Since both events and spikes are modeled from neural signals, event cameras are inherently suitable for spiking neural networks (SNNs), which are considered promising models for artificial intelligence (AI) and theoretical neuroscience. However, the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks. In this paper, we propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS). It improves the robustness of SNNs by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints. Furthermore, we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations. Notably, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4\% on the DVS-CIFAR10 dataset.
## Keyword: event camera
### Training Robust Spiking Neural Networks with ViewPoint Transform and SpatioTemporal Stretching
- **Authors:** Haibo Shen, Juyu Xiao, Yihao Luo, Xiang Cao, Liangqi Zhang, Tianjiang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07609
- **Pdf link:** https://arxiv.org/pdf/2303.07609
- **Abstract**
Neuromorphic vision sensors (event cameras) simulate biological visual perception systems and have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range. Since both events and spikes are modeled from neural signals, event cameras are inherently suitable for spiking neural networks (SNNs), which are considered promising models for artificial intelligence (AI) and theoretical neuroscience. However, the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks. In this paper, we propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS). It improves the robustness of SNNs by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints. Furthermore, we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations. Notably, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4\% on the DVS-CIFAR10 dataset.
### BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation
- **Authors:** Yijin Li, Zhaoyang Huang, Shuo Chen, Xiaoyu Shi, Hongsheng Li, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07716
- **Pdf link:** https://arxiv.org/pdf/2303.07716
- **Abstract**
Event cameras provide high temporal precision, low data rates, and high dynamic range visual perception, which are well-suited for optical flow estimation. While data-driven optical flow estimation has obtained great success in RGB cameras, its generalization performance is seriously hindered in event cameras mainly due to the limited and biased training data. In this paper, we present a novel simulator, BlinkSim, for the fast generation of large-scale data for event-based optical flow. BlinkSim consists of a configurable rendering engine and a flexible engine for event data simulation. By leveraging the wealth of current 3D assets, the rendering engine enables us to automatically build up thousands of scenes with different objects, textures, and motion patterns and render very high-frequency images for realistic event data simulation. Based on BlinkSim, we construct a large training dataset and evaluation benchmark BlinkFlow that contains sufficient, diversiform, and challenging event data with optical flow ground truth. Experiments show that BlinkFlow improves the generalization performance of state-of-the-art methods by more than 40% on average and up to 90%. Moreover, we further propose an Event optical Flow transFormer (E-FlowFormer) architecture. Powered by our BlinkFlow, E-FlowFormer outperforms the SOTA methods by up to 91% on MVSEC dataset and 14% on DSEC dataset and presents the best generalization performance.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### BoundaryCAM: A Boundary-based Refinement Framework for Weakly Supervised Semantic Segmentation of Medical Images
- **Authors:** Bharath Srinivas Prabakaran, Erik Ostrowski, Muhammad Shafique
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.07853
- **Pdf link:** https://arxiv.org/pdf/2303.07853
- **Abstract**
Weakly Supervised Semantic Segmentation (WSSS) with only image-level supervision is a promising approach to deal with the need for Segmentation networks, especially for generating a large number of pixel-wise masks in a given dataset. However, most state-of-the-art image-level WSSS techniques lack an understanding of the geometric features embedded in the images since the network cannot derive any object boundary information from just image-level labels. We define a boundary here as the line separating an object and its background, or two different objects. To address this drawback, we propose our novel BoundaryCAM framework, which deploys state-of-the-art class activation maps combined with various post-processing techniques in order to achieve fine-grained higher-accuracy segmentation masks. To achieve this, we investigate a state-of-the-art unsupervised semantic segmentation network that can be used to construct a boundary map, which enables BoundaryCAM to predict object locations with sharper boundaries. By applying our method to WSSS predictions, we were able to achieve up to 10% improvements even to the benefit of the current state-of-the-art WSSS methods for medical imaging. The framework is open-source and accessible online at https://github.com/bharathprabakaran/BoundaryCAM.
## Keyword: ISP
### LoG-CAN: local-global Class-aware Network for semantic segmentation of remote sensing images
- **Authors:** Xiaowen Ma, Mengting Ma, Chenlu Hu, Zhiyuan Song, Ziyan Zhao, Tian Feng, Wei Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07747
- **Pdf link:** https://arxiv.org/pdf/2303.07747
- **Abstract**
Remote sensing images are known of having complex backgrounds, high intra-class variance and large variation of scales, which bring challenge to semantic segmentation. We present LoG-CAN, a multi-scale semantic segmentation network with a global class-aware (GCA) module and local class-aware (LCA) modules to remote sensing images. Specifically, the GCA module captures the global representations of class-wise context modeling to circumvent background interference; the LCA modules generate local class representations as intermediate aware elements, indirectly associating pixels with global class representations to reduce variance within a class; and a multi-scale architecture with GCA and LCA modules yields effective segmentation of objects at different scales via cascaded refinement and fusion of features. Through the evaluation on the ISPRS Vaihingen dataset and the ISPRS Potsdam dataset, experimental results indicate that LoG-CAN outperforms the state-of-the-art methods for general semantic segmentation, while significantly reducing network parameters and computation. Code is available at~\href{https://github.com/xwmaxwma/rssegmentation}{https://github.com/xwmaxwma/rssegmentation}.
### Generation-Guided Multi-Level Unified Network for Video Grounding
- **Authors:** Xing Cheng, Xiangyu Wu, Dong Shen, Hezheng Lin, Fan Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.07748
- **Pdf link:** https://arxiv.org/pdf/2303.07748
- **Abstract**
Video grounding aims to locate the timestamps best matching the query description within an untrimmed video. Prevalent methods can be divided into moment-level and clip-level frameworks. Moment-level approaches directly predict the probability of each transient moment to be the boundary in a global perspective, and they usually perform better in coarse grounding. On the other hand, clip-level ones aggregate the moments in different time windows into proposals and then deduce the most similar one, leading to its advantage in fine-grained grounding. In this paper, we propose a multi-level unified framework to enhance performance by leveraging the merits of both moment-level and clip-level methods. Moreover, a novel generation-guided paradigm in both levels is adopted. It introduces a multi-modal generator to produce the implicit boundary feature and clip feature, later regarded as queries to calculate the boundary scores by a discriminator. The generation-guided solution enhances video grounding from a two-unique-modals' match task to a cross-modal attention task, which steps out of the previous framework and obtains notable gains. The proposed Generation-guided Multi-level Unified network (GMU) surpasses previous methods and reaches State-Of-The-Art on various benchmarks with disparate features, e.g., Charades-STA, ActivityNet captions.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav
- **Authors:** Karmesh Yadav, Arjun Majumdar, Ram Ramrakhya, Naoki Yokoyama, Alexei Baevski, Zsolt Kira, Oleksandr Maksymets, Dhruv Batra
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.07798
- **Pdf link:** https://arxiv.org/pdf/2303.07798
- **Abstract**
We present a single neural network architecture composed of task-agnostic components (ViTs, convolutions, and LSTMs) that achieves state-of-art results on both the ImageNav ("go to location in <this picture>") and ObjectNav ("find a chair") tasks without any task-specific modules like object detection, segmentation, mapping, or planning modules. Such general-purpose methods offer advantages of simplicity in design, positive scaling with available compute, and versatile applicability to multiple tasks. Our work builds upon the recent success of self-supervised learning (SSL) for pre-training vision transformers (ViT). However, while the training recipes for convolutional networks are mature and robust, the recipes for ViTs are contingent and brittle, and in the case of ViTs for visual navigation, yet to be fully discovered. Specifically, we find that vanilla ViTs do not outperform ResNets on visual navigation. We propose the use of a compression layer operating over ViT patch representations to preserve spatial information along with policy training improvements. These improvements allow us to demonstrate positive scaling laws for the first time in visual navigation tasks. Consequently, our model advances state-of-the-art performance on ImageNav from 54.2% to 82.0% success and performs competitively against concurrent state-of-art on ObjectNav with success rate of 64.0% vs. 65.0%. Overall, this work does not present a fundamentally new approach, but rather recommendations for training a general-purpose architecture that achieves state-of-art performance today and could serve as a strong baseline for future methods.
### Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images
- **Authors:** Zicheng Zhang, Wei Sun, Tao Wang, Wei Lu, Quan Zhou, Jun he, Qiyuan Wang, Xiongkuo Min, Guangtao Zhai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2303.08050
- **Pdf link:** https://arxiv.org/pdf/2303.08050
- **Abstract**
Computer graphics images (CGIs) are artificially generated by means of computer programs and are widely perceived under various scenarios, such as games, streaming media, etc. In practical, the quality of CGIs consistently suffers from poor rendering during the production and inevitable compression artifacts during the transmission of multimedia applications. However, few works have been dedicated to dealing with the challenge of computer graphics images quality assessment (CGIQA). Most image quality assessment (IQA) metrics are developed for natural scene images (NSIs) and validated on the databases consisting of NSIs with synthetic distortions, which are not suitable for in-the-wild CGIs. To bridge the gap between evaluating the quality of NSIs and CGIs, we construct a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k) and carry out the subjective experiment in a well-controlled laboratory environment to obtain the accurate perceptual ratings of the CGIs. Then, we propose an effective deep learning-based no-reference (NR) IQA model by utilizing multi-stage feature fusion strategy and multi-stage channel attention mechanism. The major motivation of the proposed model is to make full use of inter-channel information from low-level to high-level since CGIs have apparent patterns as well as rich interactive semantic content. Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database and other CGIQA-related databases. The database along with the code will be released to facilitate further research.
## Keyword: RAW
### Deep Learning Approach for Classifying the Aggressive Comments on Social Media: Machine Translated Data Vs Real Life Data
- **Authors:** Mst Shapna Akter, Hossain Shahriar, Nova Ahmed, Alfredo Cuzzocrea
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07484
- **Pdf link:** https://arxiv.org/pdf/2303.07484
- **Abstract**
Aggressive comments on social media negatively impact human life. Such offensive contents are responsible for depression and suicidal-related activities. Since online social networking is increasing day by day, the hate content is also increasing. Several investigations have been done on the domain of cyberbullying, cyberaggression, hate speech, etc. The majority of the inquiry has been done in the English language. Some languages (Hindi and Bangla) still lack proper investigations due to the lack of a dataset. This paper particularly worked on the Hindi, Bangla, and English datasets to detect aggressive comments and have shown a novel way of generating machine-translated data to resolve data unavailability issues. A fully machine-translated English dataset has been analyzed with the models such as the Long Short term memory model (LSTM), Bidirectional Long-short term memory model (BiLSTM), LSTM-Autoencoder, word2vec, Bidirectional Encoder Representations from Transformers (BERT), and generative pre-trained transformer (GPT-2) to make an observation on how the models perform on a machine-translated noisy dataset. We have compared the performance of using the noisy data with two more datasets such as raw data, which does not contain any noises, and semi-noisy data, which contains a certain amount of noisy data. We have classified both the raw and semi-noisy data using the aforementioned models. To evaluate the performance of the models, we have used evaluation metrics such as F1-score,accuracy, precision, and recall. We have achieved the highest accuracy on raw data using the gpt2 model, semi-noisy data using the BERT model, and fully machine-translated data using the BERT model. Since many languages do not have proper data availability, our approach will help researchers create machine-translated datasets for several analysis purposes.
### PlanarTrack: A Large-scale Challenging Benchmark for Planar Object Tracking
- **Authors:** Xinran Liu, Xiaoqiong Liu, Ziruo Yi, Xin Zhou, Thanh Le, Libo Zhang, Yan Huang, Qing Yang, Heng Fan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07625
- **Pdf link:** https://arxiv.org/pdf/2303.07625
- **Abstract**
Planar object tracking is a critical computer vision problem and has drawn increasing interest owing to its key roles in robotics, augmented reality, etc. Despite rapid progress, its further development, especially in the deep learning era, is largely hindered due to the lack of large-scale challenging benchmarks. Addressing this, we introduce PlanarTrack, a large-scale challenging planar tracking benchmark. Specifically, PlanarTrack consists of 1,000 videos with more than 490K images. All these videos are collected in complex unconstrained scenarios from the wild, which makes PlanarTrack, compared with existing benchmarks, more challenging but realistic for real-world applications. To ensure the high-quality annotation, each frame in PlanarTrack is manually labeled using four corners with multiple-round careful inspection and refinement. To our best knowledge, PlanarTrack, to date, is the largest and most challenging dataset dedicated to planar object tracking. In order to analyze the proposed PlanarTrack, we evaluate 10 planar trackers and conduct comprehensive comparisons and in-depth analysis. Our results, not surprisingly, demonstrate that current top-performing planar trackers degenerate significantly on the challenging PlanarTrack and more efforts are needed to improve planar tracking in the future. In addition, we further derive a variant named PlanarTrack$_{\mathbf{BB}}$ for generic object tracking from PlanarTrack. Our evaluation of 10 excellent generic trackers on PlanarTrack$_{\mathrm{BB}}$ manifests that, surprisingly, PlanarTrack$_{\mathrm{BB}}$ is even more challenging than several popular generic tracking benchmarks and more attention should be paid to handle such planar objects, though they are rigid. All benchmarks and evaluations will be released at the project webpage.
### Data-Free Sketch-Based Image Retrieval
- **Authors:** Abhra Chaudhuri, Ayan Kumar Bhunia, Yi-Zhe Song, Anjan Dutta
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07775
- **Pdf link:** https://arxiv.org/pdf/2303.07775
- **Abstract**
Rising concerns about privacy and anonymity preservation of deep learning models have facilitated research in data-free learning (DFL). For the first time, we identify that for data-scarce tasks like Sketch-Based Image Retrieval (SBIR), where the difficulty in acquiring paired photos and hand-drawn sketches limits data-dependent cross-modal learning algorithms, DFL can prove to be a much more practical paradigm. We thus propose Data-Free (DF)-SBIR, where, unlike existing DFL problems, pre-trained, single-modality classification models have to be leveraged to learn a cross-modal metric-space for retrieval without access to any training data. The widespread availability of pre-trained classification models, along with the difficulty in acquiring paired photo-sketch datasets for SBIR justify the practicality of this setting. We present a methodology for DF-SBIR, which can leverage knowledge from models independently trained to perform classification on photos and sketches. We evaluate our model on the Sketchy, TU-Berlin, and QuickDraw benchmarks, designing a variety of baselines based on state-of-the-art DFL literature, and observe that our method surpasses all of them by significant margins. Our method also achieves mAPs competitive with data-dependent approaches, all the while requiring no training data. Implementation is available at \url{https://github.com/abhrac/data-free-sbir}.
### Quaternion Orthogonal Transformer for Facial Expression Recognition in the Wild
- **Authors:** Yu Zhou, Liyuan Guo, Lianghai Jin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07831
- **Pdf link:** https://arxiv.org/pdf/2303.07831
- **Abstract**
Facial expression recognition (FER) is a challenging topic in artificial intelligence. Recently, many researchers have attempted to introduce Vision Transformer (ViT) to the FER task. However, ViT cannot fully utilize emotional features extracted from raw images and requires a lot of computing resources. To overcome these problems, we propose a quaternion orthogonal transformer (QOT) for FER. Firstly, to reduce redundancy among features extracted from pre-trained ResNet-50, we use the orthogonal loss to decompose and compact these features into three sets of orthogonal sub-features. Secondly, three orthogonal sub-features are integrated into a quaternion matrix, which maintains the correlations between different orthogonal components. Finally, we develop a quaternion vision transformer (Q-ViT) for feature classification. The Q-ViT adopts quaternion operations instead of the original operations in ViT, which improves the final accuracies with fewer parameters. Experimental results on three in-the-wild FER datasets show that the proposed QOT outperforms several state-of-the-art models and reduces the computations.
### BoundaryCAM: A Boundary-based Refinement Framework for Weakly Supervised Semantic Segmentation of Medical Images
- **Authors:** Bharath Srinivas Prabakaran, Erik Ostrowski, Muhammad Shafique
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.07853
- **Pdf link:** https://arxiv.org/pdf/2303.07853
- **Abstract**
Weakly Supervised Semantic Segmentation (WSSS) with only image-level supervision is a promising approach to deal with the need for Segmentation networks, especially for generating a large number of pixel-wise masks in a given dataset. However, most state-of-the-art image-level WSSS techniques lack an understanding of the geometric features embedded in the images since the network cannot derive any object boundary information from just image-level labels. We define a boundary here as the line separating an object and its background, or two different objects. To address this drawback, we propose our novel BoundaryCAM framework, which deploys state-of-the-art class activation maps combined with various post-processing techniques in order to achieve fine-grained higher-accuracy segmentation masks. To achieve this, we investigate a state-of-the-art unsupervised semantic segmentation network that can be used to construct a boundary map, which enables BoundaryCAM to predict object locations with sharper boundaries. By applying our method to WSSS predictions, we were able to achieve up to 10% improvements even to the benefit of the current state-of-the-art WSSS methods for medical imaging. The framework is open-source and accessible online at https://github.com/bharathprabakaran/BoundaryCAM.
### You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos
- **Authors:** Xiang Fang, Daizong Liu, Pan Zhou, Guoshun Nan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.07863
- **Pdf link:** https://arxiv.org/pdf/2303.07863
- **Abstract**
Given an untrimmed video, temporal sentence grounding (TSG) aims to locate a target moment semantically according to a sentence query. Although previous respectable works have made decent success, they only focus on high-level visual features extracted from the consecutive decoded frames and fail to handle the compressed videos for query modelling, suffering from insufficient representation capability and significant computational complexity during training and testing. In this paper, we pose a new setting, compressed-domain TSG, which directly utilizes compressed videos rather than fully-decompressed frames as the visual input. To handle the raw video bit-stream input, we propose a novel Three-branch Compressed-domain Spatial-temporal Fusion (TCSF) framework, which extracts and aggregates three kinds of low-level visual features (I-frame, motion vector and residual features) for effective and efficient grounding. Particularly, instead of encoding the whole decoded frames like previous works, we capture the appearance representation by only learning the I-frame feature to reduce delay or latency. Besides, we explore the motion information not only by learning the motion vector feature, but also by exploring the relations of neighboring frames via the residual feature. In this way, a three-branch spatial-temporal attention layer with an adaptive motion-appearance fusion module is further designed to extract and aggregate both appearance and motion information for the final grounding. Experiments on three challenging datasets shows that our TCSF achieves better performance than other state-of-the-art methods with lower complexity.
### Interpretable ODE-style Generative Diffusion Model via Force Field Construction
- **Authors:** Weiyang Jin, Yongpei Zhu, Yuxi Peng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.08063
- **Pdf link:** https://arxiv.org/pdf/2303.08063
- **Abstract**
For a considerable time, researchers have focused on developing a method that establishes a deep connection between the generative diffusion model and mathematical physics. Despite previous efforts, progress has been limited to the pursuit of a single specialized method. In order to advance the interpretability of diffusion models and explore new research directions, it is essential to establish a unified ODE-style generative diffusion model. Such a model should draw inspiration from physical models and possess a clear geometric meaning. This paper aims to identify various physical models that are suitable for constructing ODE-style generative diffusion models accurately from a mathematical perspective. We then summarize these models into a unified method. Additionally, we perform a case study where we use the theoretical model identified by our method to develop a range of new diffusion model methods, and conduct experiments. Our experiments on CIFAR-10 demonstrate the effectiveness of our approach. We have constructed a computational framework that attains highly proficient results with regards to image generation speed, alongside an additional model that demonstrates exceptional performance in both Inception score and FID score. These results underscore the significance of our method in advancing the field of diffusion models.
## Keyword: raw image
### Quaternion Orthogonal Transformer for Facial Expression Recognition in the Wild
- **Authors:** Yu Zhou, Liyuan Guo, Lianghai Jin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07831
- **Pdf link:** https://arxiv.org/pdf/2303.07831
- **Abstract**
Facial expression recognition (FER) is a challenging topic in artificial intelligence. Recently, many researchers have attempted to introduce Vision Transformer (ViT) to the FER task. However, ViT cannot fully utilize emotional features extracted from raw images and requires a lot of computing resources. To overcome these problems, we propose a quaternion orthogonal transformer (QOT) for FER. Firstly, to reduce redundancy among features extracted from pre-trained ResNet-50, we use the orthogonal loss to decompose and compact these features into three sets of orthogonal sub-features. Secondly, three orthogonal sub-features are integrated into a quaternion matrix, which maintains the correlations between different orthogonal components. Finally, we develop a quaternion vision transformer (Q-ViT) for feature classification. The Q-ViT adopts quaternion operations instead of the original operations in ViT, which improves the final accuracies with fewer parameters. Experimental results on three in-the-wild FER datasets show that the proposed QOT outperforms several state-of-the-art models and reduces the computations.
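The orthogonal loss mentioned in the abstract can be sketched with plain vectors: a penalty that is zero exactly when the sub-features are mutually orthogonal. This is a loose illustration under stated assumptions; the paper applies such a loss to feature sets from a pre-trained ResNet-50, not to raw Python lists, and the exact formulation may differ.

```python
def orthogonal_loss(u, v, w):
    """Sum of squared pairwise dot products of three sub-feature vectors.

    Hypothetical sketch of an orthogonality penalty: it vanishes iff
    u, v, w are mutually orthogonal, so minimizing it decorrelates
    the three sub-feature sets.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return dot(u, v) ** 2 + dot(v, w) ** 2 + dot(u, w) ** 2
```

For instance, an orthogonal pair like `[1, 0]` and `[0, 1]` contributes nothing, while two identical directions contribute the squared norm of their inner product.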
|
process
|
new submissions for wed mar keyword events implicit and explicit commonsense for multi sentence video captioning authors shih han chou james j little leonid sigal subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract existing dense or paragraph video captioning approaches rely on holistic representations of videos possibly coupled with learned object action representations to condition hierarchical language decoders however they fundamentally lack the commonsense knowledge of the world required to reason about progression of events causality and even function of certain objects within a scene to address this limitation we propose a novel video captioning transformer based model that takes into account both implicit visuo lingual and purely linguistic and explicit knowledge base commonsense knowledge we show that these forms of knowledge in isolation and in combination enhance the quality of produced captions further inspired by imitation learning we propose a new task of instruction generation where the goal is to produce a set of linguistic instructions from a video demonstration of its performance we formalize the task using alfred dataset generated using an thor environment while instruction generation is conceptually similar to paragraph captioning it differs in the fact that it exhibits stronger object persistence as well as spatially aware and causal sentence structure we show that our commonsense knowledge enhanced approach produces significant improvements on this task up to in meteor and in cider as well as the state of the art result on more traditional video captioning in the activitynet captions dataset a real world large scale dataset for vehicle to vehicle cooperative perception authors runsheng xu xin xia jinlong li hanzhao li shuo zhang zhengzhong tu zonglin meng hao xiang xiaoyu dong rui song hongkai yu bolei zhou jiaqi ma subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract modern 
perception systems of autonomous vehicles are known to be sensitive to occlusions and lack the capability of long perceiving range it has been one of the key bottlenecks that prevents level autonomy recent research has demonstrated that the vehicle to vehicle cooperative perception system has great potential to revolutionize the autonomous driving industry however the lack of a real world dataset hinders the progress of this field to facilitate the development of cooperative perception we present the first large scale real world multi modal dataset for perception the data is collected by two vehicles equipped with multi modal sensors driving together through diverse scenarios our dataset covers a driving area of km comprising lidar frames rgb frames annotated bounding boxes for classes and hdmaps that cover all the driving routes introduces three perception tasks including cooperative object detection cooperative object tracking and domain adaptation for cooperative perception we provide comprehensive benchmarks of recent cooperative perception algorithms on three tasks the dataset and codebase can be found at training robust spiking neural networks with viewpoint transform and spatiotemporal stretching authors haibo shen juyu xiao yihao luo xiang cao liangqi zhang tianjiang wang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract neuromorphic vision sensors event cameras simulate biological visual perception systems and have the advantages of high temporal resolution less data redundancy low power consumption and large dynamic range since both events and spikes are modeled from neural signals event cameras are inherently suitable for spiking neural networks snns which are considered promising models for artificial intelligence ai and theoretical neuroscience however the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks in this paper we propose a novel data augmentation 
method viewpoint transform and spatiotemporal stretching vpt sts it improves the robustness of snns by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints furthermore we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation extensive experiments on prevailing neuromorphic datasets demonstrate that vpt sts is broadly effective on multi event representations and significantly outperforms pure spatial geometric transformations notably the snns model with vpt sts achieves a state of the art accuracy of on the dvs dataset keyword event camera training robust spiking neural networks with viewpoint transform and spatiotemporal stretching authors haibo shen juyu xiao yihao luo xiang cao liangqi zhang tianjiang wang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract neuromorphic vision sensors event cameras simulate biological visual perception systems and have the advantages of high temporal resolution less data redundancy low power consumption and large dynamic range since both events and spikes are modeled from neural signals event cameras are inherently suitable for spiking neural networks snns which are considered promising models for artificial intelligence ai and theoretical neuroscience however the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks in this paper we propose a novel data augmentation method viewpoint transform and spatiotemporal stretching vpt sts it improves the robustness of snns by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints furthermore we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation extensive experiments on prevailing neuromorphic datasets demonstrate that vpt sts is broadly effective on multi event 
representations and significantly outperforms pure spatial geometric transformations notably the snns model with vpt sts achieves a state of the art accuracy of on the dvs dataset blinkflow a dataset to push the limits of event based optical flow estimation authors yijin li zhaoyang huang shuo chen xiaoyu shi hongsheng li hujun bao zhaopeng cui guofeng zhang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract event cameras provide high temporal precision low data rates and high dynamic range visual perception which are well suited for optical flow estimation while data driven optical flow estimation has obtained great success in rgb cameras its generalization performance is seriously hindered in event cameras mainly due to the limited and biased training data in this paper we present a novel simulator blinksim for the fast generation of large scale data for event based optical flow blinksim consists of a configurable rendering engine and a flexible engine for event data simulation by leveraging the wealth of current assets the rendering engine enables us to automatically build up thousands of scenes with different objects textures and motion patterns and render very high frequency images for realistic event data simulation based on blinksim we construct a large training dataset and evaluation benchmark blinkflow that contains sufficient diversiform and challenging event data with optical flow ground truth experiments show that blinkflow improves the generalization performance of state of the art methods by more than on average and up to moreover we further propose an event optical flow transformer e flowformer architecture powered by our blinkflow e flowformer outperforms the sota methods by up to on mvsec dataset and on dsec dataset and presents the best generalization performance keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb boundarycam a 
boundary based refinement framework for weakly supervised semantic segmentation of medical images authors bharath srinivas prabakaran erik ostrowski muhammad shafique subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract weakly supervised semantic segmentation wsss with only image level supervision is a promising approach to deal with the need for segmentation networks especially for generating a large number of pixel wise masks in a given dataset however most state of the art image level wsss techniques lack an understanding of the geometric features embedded in the images since the network cannot derive any object boundary information from just image level labels we define a boundary here as the line separating an object and its background or two different objects to address this drawback we propose our novel boundarycam framework which deploys state of the art class activation maps combined with various post processing techniques in order to achieve fine grained higher accuracy segmentation masks to achieve this we investigate a state of the art unsupervised semantic segmentation network that can be used to construct a boundary map which enables boundarycam to predict object locations with sharper boundaries by applying our method to wsss predictions we were able to achieve up to improvements even to the benefit of the current state of the art wsss methods for medical imaging the framework is open source and accessible online at keyword isp log can local global class aware network for semantic segmentation of remote sensing images authors xiaowen ma mengting ma chenlu hu zhiyuan song ziyan zhao tian feng wei zhang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract remote sensing images are known of having complex backgrounds high intra class variance and large variation of scales which bring challenge to semantic segmentation we present log can a multi scale semantic 
segmentation network with a global class aware gca module and local class aware lca modules to remote sensing images specifically the gca module captures the global representations of class wise context modeling to circumvent background interference the lca modules generate local class representations as intermediate aware elements indirectly associating pixels with global class representations to reduce variance within a class and a multi scale architecture with gca and lca modules yields effective segmentation of objects at different scales via cascaded refinement and fusion of features through the evaluation on the isprs vaihingen dataset and the isprs potsdam dataset experimental results indicate that log can outperforms the state of the art methods for general semantic segmentation while significantly reducing network parameters and computation code is available at href generation guided multi level unified network for video grounding authors xing cheng xiangyu wu dong shen hezheng lin fan yang subjects computer vision and pattern recognition cs cv multimedia cs mm arxiv link pdf link abstract video grounding aims to locate the timestamps best matching the query description within an untrimmed video prevalent methods can be divided into moment level and clip level frameworks moment level approaches directly predict the probability of each transient moment to be the boundary in a global perspective and they usually perform better in coarse grounding on the other hand clip level ones aggregate the moments in different time windows into proposals and then deduce the most similar one leading to its advantage in fine grained grounding in this paper we propose a multi level unified framework to enhance performance by leveraging the merits of both moment level and clip level methods moreover a novel generation guided paradigm in both levels is adopted it introduces a multi modal generator to produce the implicit boundary feature and clip feature later regarded as 
queries to calculate the boundary scores by a discriminator the generation guided solution enhances video grounding from a two unique modals match task to a cross modal attention task which steps out of the previous framework and obtains notable gains the proposed generation guided multi level unified network gmu surpasses previous methods and reaches state of the art on various benchmarks with disparate features e g charades sta activitynet captions keyword image signal processing there is no result keyword image signal process there is no result keyword compression ovrl a simple state of art baseline for imagenav and objectnav authors karmesh yadav arjun majumdar ram ramrakhya naoki yokoyama alexei baevski zsolt kira oleksandr maksymets dhruv batra subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract we present a single neural network architecture composed of task agnostic components vits convolutions and lstms that achieves state of art results on both the imagenav go to location in and objectnav find a chair tasks without any task specific modules like object detection segmentation mapping or planning modules such general purpose methods offer advantages of simplicity in design positive scaling with available compute and versatile applicability to multiple tasks our work builds upon the recent success of self supervised learning ssl for pre training vision transformers vit however while the training recipes for convolutional networks are mature and robust the recipes for vits are contingent and brittle and in the case of vits for visual navigation yet to be fully discovered specifically we find that vanilla vits do not outperform resnets on visual navigation we propose the use of a compression layer operating over vit patch representations to preserve spatial information along with policy training improvements these improvements allow us to demonstrate positive scaling laws for the first time in 
visual navigation tasks consequently our model advances state of the art performance on imagenav from to success and performs competitively against concurrent state of art on objectnav with success rate of vs overall this work does not present a fundamentally new approach but rather recommendations for training a general purpose architecture that achieves state of art performance today and could serve as a strong baseline for future methods subjective and objective quality assessment for in the wild computer graphics images authors zicheng zhang wei sun tao wang wei lu quan zhou jun he qiyuan wang xiongkuo min guangtao zhai subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract computer graphics images cgis are artificially generated by means of computer programs and are widely perceived under various scenarios such as games streaming media etc in practical the quality of cgis consistently suffers from poor rendering during the production and inevitable compression artifacts during the transmission of multimedia applications however few works have been dedicated to dealing with the challenge of computer graphics images quality assessment cgiqa most image quality assessment iqa metrics are developed for natural scene images nsis and validated on the databases consisting of nsis with synthetic distortions which are not suitable for in the wild cgis to bridge the gap between evaluating the quality of nsis and cgis we construct a large scale in the wild cgiqa database consisting of cgis cgiqa and carry out the subjective experiment in a well controlled laboratory environment to obtain the accurate perceptual ratings of the cgis then we propose an effective deep learning based no reference nr iqa model by utilizing multi stage feature fusion strategy and multi stage channel attention mechanism the major motivation of the proposed model is to make full use of inter channel information from low level to high 
level since cgis have apparent patterns as well as rich interactive semantic content experimental results show that the proposed method outperforms all other state of the art nr iqa methods on the constructed cgiqa database and other cgiqa related databases the database along with the code will be released to facilitate further research keyword raw deep learning approach for classifying the aggressive comments on social media machine translated data vs real life data authors mst shapna akter hossain shahriar nova ahmed alfredo cuzzocrea subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract aggressive comments on social media negatively impact human life such offensive contents are responsible for depression and suicidal related activities since online social networking is increasing day by day the hate content is also increasing several investigations have been done on the domain of cyberbullying cyberaggression hate speech etc the majority of the inquiry has been done in the english language some languages hindi and bangla still lack proper investigations due to the lack of a dataset this paper particularly worked on the hindi bangla and english datasets to detect aggressive comments and have shown a novel way of generating machine translated data to resolve data unavailability issues a fully machine translated english dataset has been analyzed with the models such as the long short term memory model lstm bidirectional long short term memory model bilstm lstm autoencoder bidirectional encoder representations from transformers bert and generative pre trained transformer gpt to make an observation on how the models perform on a machine translated noisy dataset we have compared the performance of using the noisy data with two more datasets such as raw data which does not contain any noises and semi noisy data which contains a certain amount of noisy data we have classified both the raw and semi noisy data using the aforementioned models 
to evaluate the performance of the models we have used evaluation metrics such as score accuracy precision and recall we have achieved the highest accuracy on raw data using the model semi noisy data using the bert model and fully machine translated data using the bert model since many languages do not have proper data availability our approach will help researchers create machine translated datasets for several analysis purposes planartrack a large scale challenging benchmark for planar object tracking authors xinran liu xiaoqiong liu ziruo yi xin zhou thanh le libo zhang yan huang qing yang heng fan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract planar object tracking is a critical computer vision problem and has drawn increasing interest owing to its key roles in robotics augmented reality etc despite rapid progress its further development especially in the deep learning era is largely hindered due to the lack of large scale challenging benchmarks addressing this we introduce planartrack a large scale challenging planar tracking benchmark specifically planartrack consists of videos with more than images all these videos are collected in complex unconstrained scenarios from the wild which makes planartrack compared with existing benchmarks more challenging but realistic for real world applications to ensure the high quality annotation each frame in planartrack is manually labeled using four corners with multiple round careful inspection and refinement to our best knowledge planartrack to date is the largest and most challenging dataset dedicated to planar object tracking in order to analyze the proposed planartrack we evaluate planar trackers and conduct comprehensive comparisons and in depth analysis our results not surprisingly demonstrate that current top performing planar trackers degenerate significantly on the challenging planartrack and more efforts are needed to improve planar tracking in the future in addition we 
further derive a variant named planartrack mathbf bb for generic object tracking from planartrack our evaluation of excellent generic trackers on planartrack mathrm bb manifests that surprisingly planartrack mathrm bb is even more challenging than several popular generic tracking benchmarks and more attention should be paid to handle such planar objects though they are rigid all benchmarks and evaluations will be released at the project webpage data free sketch based image retrieval authors abhra chaudhuri ayan kumar bhunia yi zhe song anjan dutta subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract rising concerns about privacy and anonymity preservation of deep learning models have facilitated research in data free learning dfl for the first time we identify that for data scarce tasks like sketch based image retrieval sbir where the difficulty in acquiring paired photos and hand drawn sketches limits data dependent cross modal learning algorithms dfl can prove to be a much more practical paradigm we thus propose data free df sbir where unlike existing dfl problems pre trained single modality classification models have to be leveraged to learn a cross modal metric space for retrieval without access to any training data the widespread availability of pre trained classification models along with the difficulty in acquiring paired photo sketch datasets for sbir justify the practicality of this setting we present a methodology for df sbir which can leverage knowledge from models independently trained to perform classification on photos and sketches we evaluate our model on the sketchy tu berlin and quickdraw benchmarks designing a variety of baselines based on state of the art dfl literature and observe that our method surpasses all of them by significant margins our method also achieves maps competitive with data dependent approaches all the while requiring no training data implementation is available at url quaternion orthogonal 
transformer for facial expression recognition in the wild authors yu zhou liyuan guo lianghai jin subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract facial expression recognition fer is a challenging topic in artificial intelligence recently many researchers have attempted to introduce vision transformer vit to the fer task however vit cannot fully utilize emotional features extracted from raw images and requires a lot of computing resources to overcome these problems we propose a quaternion orthogonal transformer qot for fer firstly to reduce redundancy among features extracted from pre trained resnet we use the orthogonal loss to decompose and compact these features into three sets of orthogonal sub features secondly three orthogonal sub features are integrated into a quaternion matrix which maintains the correlations between different orthogonal components finally we develop a quaternion vision transformer q vit for feature classification the q vit adopts quaternion operations instead of the original operations in vit which improves the final accuracies with fewer parameters experimental results on three in the wild fer datasets show that the proposed qot outperforms several state of the art models and reduces the computations boundarycam a boundary based refinement framework for weakly supervised semantic segmentation of medical images authors bharath srinivas prabakaran erik ostrowski muhammad shafique subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract weakly supervised semantic segmentation wsss with only image level supervision is a promising approach to deal with the need for segmentation networks especially for generating a large number of pixel wise masks in a given dataset however most state of the art image level wsss techniques lack an understanding of the geometric features embedded in the images since the network cannot derive any object boundary information from 
just image level labels we define a boundary here as the line separating an object and its background or two different objects to address this drawback we propose our novel boundarycam framework which deploys state of the art class activation maps combined with various post processing techniques in order to achieve fine grained higher accuracy segmentation masks to achieve this we investigate a state of the art unsupervised semantic segmentation network that can be used to construct a boundary map which enables boundarycam to predict object locations with sharper boundaries by applying our method to wsss predictions we were able to achieve up to improvements even to the benefit of the current state of the art wsss methods for medical imaging the framework is open source and accessible online at you can ground earlier than see an effective and efficient pipeline for temporal sentence grounding in compressed videos authors xiang fang daizong liu pan zhou guoshun nan subjects computer vision and pattern recognition cs cv artificial intelligence cs ai multimedia cs mm arxiv link pdf link abstract given an untrimmed video temporal sentence grounding tsg aims to locate a target moment semantically according to a sentence query although previous respectable works have made decent success they only focus on high level visual features extracted from the consecutive decoded frames and fail to handle the compressed videos for query modelling suffering from insufficient representation capability and significant computational complexity during training and testing in this paper we pose a new setting compressed domain tsg which directly utilizes compressed videos rather than fully decompressed frames as the visual input to handle the raw video bit stream input we propose a novel three branch compressed domain spatial temporal fusion tcsf framework which extracts and aggregates three kinds of low level visual features i frame motion vector and residual features for effective and 
efficient grounding particularly instead of encoding the whole decoded frames like previous works we capture the appearance representation by only learning the i frame feature to reduce delay or latency besides we explore the motion information not only by learning the motion vector feature but also by exploring the relations of neighboring frames via the residual feature in this way a three branch spatial temporal attention layer with an adaptive motion appearance fusion module is further designed to extract and aggregate both appearance and motion information for the final grounding experiments on three challenging datasets shows that our tcsf achieves better performance than other state of the art methods with lower complexity interpretable ode style generative diffusion model via force field construction authors weiyang jin yongpei zhu yuxi peng subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract for a considerable time researchers have focused on developing a method that establishes a deep connection between the generative diffusion model and mathematical physics despite previous efforts progress has been limited to the pursuit of a single specialized method in order to advance the interpretability of diffusion models and explore new research directions it is essential to establish a unified ode style generative diffusion model such a model should draw inspiration from physical models and possess a clear geometric meaning this paper aims to identify various physical models that are suitable for constructing ode style generative diffusion models accurately from a mathematical perspective we then summarize these models into a unified method additionally we perform a case study where we use the theoretical model identified by our method to develop a range of new diffusion model methods and conduct experiments our experiments on cifar demonstrate the effectiveness of our approach we have constructed a computational framework that 
attains highly proficient results with regards to image generation speed alongside an additional model that demonstrates exceptional performance in both inception score and fid score these results underscore the significance of our method in advancing the field of diffusion models keyword raw image quaternion orthogonal transformer for facial expression recognition in the wild authors yu zhou liyuan guo lianghai jin subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract facial expression recognition fer is a challenging topic in artificial intelligence recently many researchers have attempted to introduce vision transformer vit to the fer task however vit cannot fully utilize emotional features extracted from raw images and requires a lot of computing resources to overcome these problems we propose a quaternion orthogonal transformer qot for fer firstly to reduce redundancy among features extracted from pre trained resnet we use the orthogonal loss to decompose and compact these features into three sets of orthogonal sub features secondly three orthogonal sub features are integrated into a quaternion matrix which maintains the correlations between different orthogonal components finally we develop a quaternion vision transformer q vit for feature classification the q vit adopts quaternion operations instead of the original operations in vit which improves the final accuracies with fewer parameters experimental results on three in the wild fer datasets show that the proposed qot outperforms several state of the art models and reduces the computations
| 1
|
263,297
| 23,047,467,522
|
IssuesEvent
|
2022-07-24 05:26:14
|
namhyung/uftrace
|
https://api.github.com/repos/namhyung/uftrace
|
closed
|
add an option -k to runtest.py to keep the compiled binaries
|
good first issue tests
|
`runtest.py` builds target binaries and runs them with uftrace for testing, then it removes the compiled binaries right after testing.
https://github.com/namhyung/uftrace/blob/v0.12/tests/runtest.py#L621-L622
We sometimes need to keep those binaries for further testing, but there is no way to keep them for now.
It'd be better to provide an option `-k`/`--keep` to keep the binaries.
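A minimal sketch of how such a flag could be wired up with argparse. The option names follow the issue's suggestion, but the `cleanup` helper and its signature are assumptions for illustration; the actual runtest.py option handling may look different.

```python
import argparse

def parse_args(argv):
    """Parse runner options; -k/--keep preserves compiled test binaries."""
    parser = argparse.ArgumentParser(description="uftrace test runner (sketch)")
    parser.add_argument("-k", "--keep", action="store_true",
                        help="keep the compiled binaries after testing")
    return parser.parse_args(argv)

def cleanup(binaries, keep):
    """Remove compiled test binaries unless --keep was given."""
    if keep:
        return binaries  # leave everything on disk for further testing
    # os.remove(path) for each binary would go here in the real runner
    return []
```

Usage would then be `./runtest.py -k` to inspect the binaries after a failing case.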
|
1.0
|
add an option -k to runtest.py to keep the compiled binaries - `runtest.py` builds target binaries and runs them with uftrace for testing, then it removes the compiled binaries right after testing.
https://github.com/namhyung/uftrace/blob/v0.12/tests/runtest.py#L621-L622
We sometimes need to keep those binaries for further testing, but there is no way to keep them for now.
It'd be better to provide an option `-k`/`--keep` to keep the binaries.
|
non_process
|
add an option k to runtest py to keep the compiled binaries runtest py builds target binaries and runs them with uftrace for testing then it removes the compiled binaries right after testing we sometimes need to keep those binaries for further testing but there is no way to keep them for now it d be better to provide an option k keep to keep the binaries
| 0
|
21,897
| 30,345,431,453
|
IssuesEvent
|
2023-07-11 15:08:17
|
ukri-excalibur/excalibur-tests
|
https://api.github.com/repos/ukri-excalibur/excalibur-tests
|
opened
|
Plot x axis as numeric
|
postprocessing
|
Currently, the x axis of the (generic) plots is categorical. In order to accommodate scaling (and other) types of plots, this has to be numerical. As categorical generic plots will keep being useful, add a field in the config for the user to choose between the two.
|
1.0
|
Plot x axis as numeric - Currently, the x axis of the (generic) plots is categorical. In order to accommodate scaling (and other) types of plots, this has to be numerical. As categorical generic plots will keep being useful, add a field in the config for the user to choose between the two.
|
process
|
plot x axis as numeric currently the x axis of the generic plots is categorical in order to accommodate scaling and other types of plots this has to be numerical as categorical generic plots will keep being useful add a field in the config for the user to choose between the two
| 1
|
511,286
| 14,857,657,333
|
IssuesEvent
|
2021-01-18 15:43:03
|
prometheus/prometheus
|
https://api.github.com/repos/prometheus/prometheus
|
closed
|
write error , out-of-order series added with label set
|
component/tsdb kind/bug priority/P3
|
1、Prometheus 2.6.0 ( in docker), remote_write + influxdb( in docker).
2、Prometheus always OOM,use 【count by (__name__)({__name__=~".+"}) >10000】got metric 【node_cpu_seconds_total】
3、Stop prometheus ,login influxDB,【drop measurement node_cpu_seconds_total】,In prometheus.yaml add
`metric_relabel_configs:
- source_labels: [__name__]
regex: 'node_cpu_seconds_total'
action: drop`
4、start Prometheus,see:
prometheus | level=error ts=2019-05-31T08:36:52.501Z caller=db.go:363 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
5、update Prometheus to version 2.10.0,Still the same mistake。


|
1.0
|
write error , out-of-order series added with label set - 1、Prometheus 2.6.0 ( in docker), remote_write + influxdb( in docker).
2、Prometheus always OOM,use 【count by (__name__)({__name__=~".+"}) >10000】got metric 【node_cpu_seconds_total】
3、Stop prometheus ,login influxDB,【drop measurement node_cpu_seconds_total】,In prometheus.yaml add
`metric_relabel_configs:
- source_labels: [__name__]
regex: 'node_cpu_seconds_total'
action: drop`
4、start Prometheus,see:
prometheus | level=error ts=2019-05-31T08:36:52.501Z caller=db.go:363 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
5、update Prometheus to version 2.10.0,Still the same mistake。


|
non_process
|
write error out of order series added with label set 、prometheus ( in docker), remote write influxdb( in docker) 、prometheus always oom,use 【count by name name 】got metric 【node cpu seconds total】 、stop prometheus ,login influxdb,【drop measurement node cpu seconds total】 in prometheus yaml add metric relabel configs source labels regex node cpu seconds total action drop 、start prometheus,see: prometheus level error ts caller db go component tsdb msg compaction failed err persist head block write compaction add series out of order series added with label set 、update prometheus to version ,still the same mistake。
| 0
|
17,524
| 23,332,954,684
|
IssuesEvent
|
2022-08-09 07:27:26
|
goblint/analyzer
|
https://api.github.com/repos/goblint/analyzer
|
closed
|
Server mode command for just preprocessing to get files list
|
feature usability preprocessing
|
In #606 a command was added to get the list of involved files in the analysis. However, currently this is only available in the server after doing a complete analysis.
For https://github.com/goblint/GobPie/issues/17 and https://github.com/goblint/GobPie/issues/31, it would be useful to get the list of files already on startup (so GobPie knows which files should even trigger analysis and for which server instance) without having to wait for the entire analysis to complete.
This would be possible because getting the list of files only requires preprocessing and parsing (not even merging),
|
1.0
|
Server mode command for just preprocessing to get files list - In #606 a command was added to get the list of involved files in the analysis. However, currently this is only available in the server after doing a complete analysis.
For https://github.com/goblint/GobPie/issues/17 and https://github.com/goblint/GobPie/issues/31, it would be useful to get the list of files already on startup (so GobPie knows which files should even trigger analysis and for which server instance) without having to wait for the entire analysis to complete.
This would be possible because getting the list of files only requires preprocessing and parsing (not even merging),
|
process
|
server mode command for just preprocessing to get files list in a command was added to get the list of involved files in the analysis however currently this is only available in the server after doing a complete analysis for and it would be useful to get the list of files already on startup so gobpie knows which files should even trigger analysis and for which server instance without having to wait for the entire analysis to complete this would be possible because getting the list of files only requires preprocessing and parsing not even merging
| 1
|
774,763
| 27,209,008,798
|
IssuesEvent
|
2023-02-20 15:09:06
|
Luka85/FindAdogBreed
|
https://api.github.com/repos/Luka85/FindAdogBreed
|
closed
|
Bugfix: styling fix
|
high priority
|
<img width="736" alt="Screenshot 2022-11-18 at 07 47 40" src="https://user-images.githubusercontent.com/3824010/202639149-7720c46a-b833-48a1-97d9-8629de6cd825.png">
On hover, when I want to trigger search, there is an inconsistency with bottom border.
|
1.0
|
Bugfix: styling fix - <img width="736" alt="Screenshot 2022-11-18 at 07 47 40" src="https://user-images.githubusercontent.com/3824010/202639149-7720c46a-b833-48a1-97d9-8629de6cd825.png">
On hover, when I want to trigger search, there is an inconsistency with bottom border.
|
non_process
|
bugfix styling fix img width alt screenshot at src on hover when i want to trigger search there is an inconsistency with bottom border
| 0
|
68,481
| 21,664,940,423
|
IssuesEvent
|
2022-05-07 03:05:13
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
saml auth only works with the default homeserver
|
T-Defect A-Login A-SSO
|
It forgets what HS its supposed be using by the time it gets redirected back to after sso login
|
1.0
|
saml auth only works with the default homeserver - It forgets what HS its supposed be using by the time it gets redirected back to after sso login
|
non_process
|
saml auth only works with the default homeserver it forgets what hs its supposed be using by the time it gets redirected back to after sso login
| 0
|
1,505
| 4,087,370,819
|
IssuesEvent
|
2016-06-01 09:48:56
|
ryankeefe92/Episodes
|
https://api.github.com/repos/ryankeefe92/Episodes
|
closed
|
Show list should be in sync with iTunes
|
feature frontend/UI preventative process:
|
This way, the app is never searching for an episode that has already been downloaded and added to iTunes (unless it is searching for a higher quality version).
|
1.0
|
Show list should be in sync with iTunes - This way, the app is never searching for an episode that has already been downloaded and added to iTunes (unless it is searching for a higher quality version).
|
process
|
show list should be in sync with itunes this way the app is never searching for an episode that has already been downloaded and added to itunes unless it is searching for a higher quality version
| 1
|
7,938
| 11,136,512,630
|
IssuesEvent
|
2019-12-20 16:46:31
|
koalaverse/homlr
|
https://api.github.com/repos/koalaverse/homlr
|
opened
|
Proposed Ch. 2 exercises
|
02 Modeling Process exercises
|
1. Load the Boston Housing data set from the pdp package. This data comes from a classic paper that analyzed the relationship between several characteristics (i.e. crime rate, average rooms per dwelling, property tax value) and the median home value (i.e. `cmedv`).
- What are the dimensions of this dataset?
- Perform some exploratory data analysis on this dataset.
- Assess the distribution of the target variable (`cmedv`)
2. Split the Boston Housing data into a training and test set with a 70-30% split.
- How many observations are in the training and test set?
- Compare the distribution of `cmedv` between the training and test set.
3. Load the spam dataset from the kernlab package.
- What is the distribution of the target variable (`type`) across the entire dataset?
- Create a 70-30 training and test split stratified by the target variable.
- Compare the distribution of the target variable between the training and test set.
4. Using the Boston Housing training dataset, create three linear regression models that use all available features to predict `cmedv`.
- Create a model with `lm()`, `glm()`, and `caret::train()`.
- How do the coefficients compare across these models?
- How does the MSE/RMSE compare across these models?
- Which method is `caret::train()` using to compute the linear regression model?
5. Using the Boston Housing training dataset, perform a 10-fold cross validated linear regression model, repeated 5 times, that uses all available features to predict `cmedv`.
- What is the average RMSE across all 50 model iterations?
- Plot the distribution of the RMSE across all 50 model iterations.
- Provide an explanation of the results.
- Repeat this process for the kernlab spam data that predicts the `type` variable.
6. Repeat exercise 5; however, instead of a linear regression model, use a K-nearest neighbor model that executes a hyperparameter grid search where _k_ ranges from 2-20. How does this model's results compare to the linear regression results?
|
1.0
|
Proposed Ch. 2 exercises - 1. Load the Boston Housing data set from the pdp package. This data comes from a classic paper that analyzed the relationship between several characteristics (i.e. crime rate, average rooms per dwelling, property tax value) and the median home value (i.e. `cmedv`).
- What are the dimensions of this dataset?
- Perform some exploratory data analysis on this dataset.
- Assess the distribution of the target variable (`cmedv`)
2. Split the Boston Housing data into a training and test set with a 70-30% split.
- How many observations are in the training and test set?
- Compare the distribution of `cmedv` between the training and test set.
3. Load the spam dataset from the kernlab package.
- What is the distribution of the target variable (`type`) across the entire dataset?
- Create a 70-30 training and test split stratified by the target variable.
- Compare the distribution of the target variable between the training and test set.
4. Using the Boston Housing training dataset, create three linear regression models that use all available features to predict `cmedv`.
- Create a model with `lm()`, `glm()`, and `caret::train()`.
- How do the coefficients compare across these models?
- How does the MSE/RMSE compare across these models?
- Which method is `caret::train()` using to compute the linear regression model?
5. Using the Boston Housing training dataset, perform a 10-fold cross validated linear regression model, repeated 5 times, that uses all available features to predict `cmedv`.
- What is the average RMSE across all 50 model iterations?
- Plot the distribution of the RMSE across all 50 model iterations.
- Provide an explanation of the results.
- Repeat this process for the kernlab spam data that predicts the `type` variable.
6. Repeat exercise 5; however, instead of a linear regression model, use a K-nearest neighbor model that executes a hyperparameter grid search where _k_ ranges from 2-20. How does this model's results compare to the linear regression results?
|
process
|
proposed ch exercises load the boston housing data set from the pdp package this data comes from a classic paper that analyzed the relationship between several characteristics i e crime rate average rooms per dwelling property tax value and the median home value i e cmedv what are the dimensions of this dataset perform some exploratory data analysis on this dataset assess the distribution of the target variable cmedv split the boston housing data into a training and test set with a split how many observations are in the training and test set compare the distribution of cmedv between the training and test set load the spam dataset from the kernlab package what is the distribution of the target variable type across the entire dataset create a training and test split stratified by the target variable compare the distribution of the target variable between the training and test set using the boston housing training dataset create three linear regression models that use all available features to predict cmedv create a model with lm glm and caret train how do the coefficients compare across these models how does the mse rmse compare across these models which method is caret train using to compute the linear regression model using the boston housing training dataset perform a fold cross validated linear regression model repeated times that uses all available features to predict cmedv what is the average rmse across all model iterations plot the distribution of the rmse across all model iterations provide an explanation of the results repeat this process for the kernlab spam data that predicts the type variable repeat exercise however instead of a linear regression model use a k nearest neighbor model that executes a hyperparameter grid search where k ranges from how does this model s results compare to the linear regression results
| 1
|
15,130
| 18,873,430,028
|
IssuesEvent
|
2021-11-13 15:53:42
|
kwongustj/project_Silbi
|
https://api.github.com/repos/kwongustj/project_Silbi
|
opened
|
2차 카운슬링 결과에 대한 적용
|
project_process
|
안녕하세요 11/9 (화)에 멘토님과 2차 멘토링이 있었고,**결과보고서에 추가**했으면 좋은 것들을 알려주셨습니다.
이에 관련 내용에 대해 얘기를 나누고자 이슈를 만들었습니다.
정리한 내용은 밑에 다가 다시 올려드리겠습니다. 참고하여 적용할 부분을 알려주시면 감사하겠습니다.
|
1.0
|
2차 카운슬링 결과에 대한 적용 - 안녕하세요 11/9 (화)에 멘토님과 2차 멘토링이 있었고,**결과보고서에 추가**했으면 좋은 것들을 알려주셨습니다.
이에 관련 내용에 대해 얘기를 나누고자 이슈를 만들었습니다.
정리한 내용은 밑에 다가 다시 올려드리겠습니다. 참고하여 적용할 부분을 알려주시면 감사하겠습니다.
|
process
|
카운슬링 결과에 대한 적용 안녕하세요 화 에 멘토님과 멘토링이 있었고 결과보고서에 추가 했으면 좋은 것들을 알려주셨습니다 이에 관련 내용에 대해 얘기를 나누고자 이슈를 만들었습니다 정리한 내용은 밑에 다가 다시 올려드리겠습니다 참고하여 적용할 부분을 알려주시면 감사하겠습니다
| 1
|
73,924
| 9,740,526,937
|
IssuesEvent
|
2019-06-01 21:24:52
|
novoid/lazyblorg
|
https://api.github.com/repos/novoid/lazyblorg
|
closed
|
pypandoc not found
|
documentation question
|
Hi,
When I issue:
./example_invocation.sh
I get:
Warning: MEMACS_FILE_WITH_IMAGE_FILE_INDEX is not empty but contains no existing file. Please fill it with an existing filename containing a Memacs file index or set either MEMACS_FILE_WITH_IMAGE_FILE_INDEX or CUSTOMIZED_IMAGE_LINK_KEY to an empty string.
Could not find Python module "pypandoc".
Please install it, e.g., with "sudo pip install pypandoc".
But, pypandoc is installed and the other stuff as well:
Installing:
sudo pip install memacs
sudo apt install python3-werkzeug
sudo apt install pandoc
sudo pip install python3-pypandoc
sudo apt install python3-opencv
sudo npm install -g sass
git clone https://github.com/novoid/lazyblorg.git
cd ~/Downloads/lazyblorg/
./example_invocation.sh
How do I fix those errors?
Thx a lot...
|
1.0
|
pypandoc not found - Hi,
When I issue:
./example_invocation.sh
I get:
Warning: MEMACS_FILE_WITH_IMAGE_FILE_INDEX is not empty but contains no existing file. Please fill it with an existing filename containing a Memacs file index or set either MEMACS_FILE_WITH_IMAGE_FILE_INDEX or CUSTOMIZED_IMAGE_LINK_KEY to an empty string.
Could not find Python module "pypandoc".
Please install it, e.g., with "sudo pip install pypandoc".
But, pypandoc is installed and the other stuff as well:
Installing:
sudo pip install memacs
sudo apt install python3-werkzeug
sudo apt install pandoc
sudo pip install python3-pypandoc
sudo apt install python3-opencv
sudo npm install -g sass
git clone https://github.com/novoid/lazyblorg.git
cd ~/Downloads/lazyblorg/
./example_invocation.sh
How do I fix those errors?
Thx a lot...
|
non_process
|
pypandoc not found hi when i issue example invocation sh i get warning memacs file with image file index is not empty but contains no existing file please fill it with an existing filename containing a memacs file index or set either memacs file with image file index or customized image link key to an empty string could not find python module pypandoc please install it e g with sudo pip install pypandoc but pypandoc is installed and the other stuff as well installing sudo pip install memacs sudo apt install werkzeug sudo apt install pandoc sudo pip install pypandoc sudo apt install opencv sudo npm install g sass git clone cd downloads lazyblorg example invocation sh how do i fix those errors thx a lot
| 0
|
14,784
| 18,060,092,386
|
IssuesEvent
|
2021-09-20 13:07:41
|
LOVDnl/LOVD3
|
https://api.github.com/repos/LOVDnl/LOVD3
|
opened
|
Make genome builds configurable
|
cat: API cat: core cat: installer cat: interface cat: submission process feature request cat: Variant Validator integration
|
[Intro]
- [ ] Design a Genome Build table (id, name, column_suffix, created_by, created_date)
- [ ] Create Genome Build table for:
- [ ] For new installs
- [ ] For upgraded installs
- [ ] Automatically fill with the standard genome build that has been selected
- [ ] For new installs
- [ ] For upgraded installs
- [ ] Remove `refseq_build` from `TABLE_CONFIG`.
- [ ] Fix dependencies.
- [ ] Remove data duplication from `inc-init.php` and `inc-sql-chromosomes.php`.
- [ ] Fix dependencies.
- [ ] Allow user to activate a genome build.
- [ ] Entry must be created in the Genome Build table.
- [ ] VOG table must be altered (adding start, end, DNA field).
- [ ] Custom DNA column must be created.
- [ ] Transcripts table must be altered (adding start, end fields).
- [ ] Allow user to deactivate a genome build.
- [ ] Entry must be deleted in the Genome Build table.
- [ ] VOG table must be altered (removing start, end, DNA field).
- [ ] Custom DNA column must be deactivated.
- [ ] Transcripts table must be altered (removing start, end fields).
- [ ] Fix dependencies.
- [ ] Add details from multiple genome builds to the API.
- [ ] Allow submission starting any genome build.
- [ ] Submission form
- [ ] Submission API
- [ ] Allow import using any genome build
|
1.0
|
Make genome builds configurable - [Intro]
- [ ] Design a Genome Build table (id, name, column_suffix, created_by, created_date)
- [ ] Create Genome Build table for:
- [ ] For new installs
- [ ] For upgraded installs
- [ ] Automatically fill with the standard genome build that has been selected
- [ ] For new installs
- [ ] For upgraded installs
- [ ] Remove `refseq_build` from `TABLE_CONFIG`.
- [ ] Fix dependencies.
- [ ] Remove data duplication from `inc-init.php` and `inc-sql-chromosomes.php`.
- [ ] Fix dependencies.
- [ ] Allow user to activate a genome build.
- [ ] Entry must be created in the Genome Build table.
- [ ] VOG table must be altered (adding start, end, DNA field).
- [ ] Custom DNA column must be created.
- [ ] Transcripts table must be altered (adding start, end fields).
- [ ] Allow user to deactivate a genome build.
- [ ] Entry must be deleted in the Genome Build table.
- [ ] VOG table must be altered (removing start, end, DNA field).
- [ ] Custom DNA column must be deactivated.
- [ ] Transcripts table must be altered (removing start, end fields).
- [ ] Fix dependencies.
- [ ] Add details from multiple genome builds to the API.
- [ ] Allow submission starting any genome build.
- [ ] Submission form
- [ ] Submission API
- [ ] Allow import using any genome build
|
process
|
make genome builds configurable design a genome build table id name column suffix created by created date create genome build table for for new installs for upgraded installs automatically fill with the standard genome build that has been selected for new installs for upgraded installs remove refseq build from table config fix dependencies remove data duplication from inc init php and inc sql chromosomes php fix dependencies allow user to activate a genome build entry must be created in the genome build table vog table must be altered adding start end dna field custom dna column must be created transcripts table must be altered adding start end fields allow user to deactivate a genome build entry must be deleted in the genome build table vog table must be altered removing start end dna field custom dna column must be deactivated transcripts table must be altered removing start end fields fix dependencies add details from multiple genome builds to the api allow submission starting any genome build submission form submission api allow import using any genome build
| 1
|
27,295
| 6,828,350,025
|
IssuesEvent
|
2017-11-08 20:05:59
|
fabric8-ui/fabric8-ux
|
https://api.github.com/repos/fabric8-ui/fabric8-ux
|
closed
|
Dev: Update User Profile to Match Visuals (simple)
|
area/platform team/westford work-type/code
|
Update the User Profile section to match the following pieces from visual design story https://github.com/fabric8-ui/fabric8-ux/issues/621.
- [x] User Profile dashboard
- [x] New Cards
- [x] My Work Items
- [x] My Spaces
- [x] Connected Accounts
|
1.0
|
Dev: Update User Profile to Match Visuals (simple) - Update the User Profile section to match the following pieces from visual design story https://github.com/fabric8-ui/fabric8-ux/issues/621.
- [x] User Profile dashboard
- [x] New Cards
- [x] My Work Items
- [x] My Spaces
- [x] Connected Accounts
|
non_process
|
dev update user profile to match visuals simple update the user profile section to match the following pieces from visual design story user profile dashboard new cards my work items my spaces connected accounts
| 0
|
126,772
| 17,106,611,948
|
IssuesEvent
|
2021-07-09 18:51:27
|
hydroshare/hydroshare
|
https://api.github.com/repos/hydroshare/hydroshare
|
closed
|
File Operations Slow, Buggy, and Inconsistent
|
File System Performance Resource Landing Page bug design phase needed
|
There have now been multiple issues reported by multiple users about weird things happening with file uploads to resources. See #2253, #2224, and #2226. For relatively small numbers of files, the drag and drop upload seems to work OK. But, with larger numbers of files, the performance seems to be unpredictable. This also includes the unzipping procedure, which I have found to also be unpredictable. I've not been able to fully test this because each operation is so slow, but I did some work tonight using this resource and testing in Chrome on my Mac:
https://www.hydroshare.org/resource/11655ab5cc584f54bd6c2c7223cc64d3/
In this resource, there is (was) an 8.6 MB zip file that contains a single folder that has inside of it 478 individual CSV files, each of which is on the order of 65 KB. So, while the number of files is relatively large, this is not a large volume of data.
Here's some things that I have observed. These all likely need to be tested more systematically and perhaps divided up into multiple issues to fix.
1. The zip file uploads to a new resource fine. No problems there.
2. Unzipping the file takes multiple minutes. This seems really slow. But, the first time I tried it, it did work.
3. Once unzipped, it takes many seconds to minutes to open the folder of files created by the unzip operation just to view its contents.
4. It takes many seconds to minutes before I can initiate a download of any of the files (e.g., this is the time that elapses with a spinning wheel from the time that I right click and choose download file to when the download actually happens). The actual download itself appears to be happening quickly. I'm not sure what is happening in the time leading up to the actual download.
5. Deletion of the folder resulted in a "Bad Gateway" error of some sort. After getting this error, it appeared as though the folder had been removed, but I don't think it was done cleanly, and this is where the real issues started.
6. After deleting the folder, at one point the entire resource appeared empty - i.e., the files just disappeared. After exiting edit mode, the zip file reappeared. I don't have the exact sequence that caused this.
7. After deleting the folder and then unzipping the original file again, the folder reappears, but when I double click on it to view its contents it takes multiple minutes to show me the folder contents - and it's empty. The unzip created the folder, but the files do not appear to be there. Then, when I back out to the "contents" folder, the original zip file and the folder are gone (the content directory looks empty). Navigating up one folder to the "contents" directory also booted me out of edit mode. When I went back into edit mode, the zip file and folder magically appeared again.
8. After fiddling with this several times now, it appears as though the zip file is there, and the folder is there, but if I try to download the zip file, I get an error saying that it "Failed - Server problem"
9. The resource is now sufficiently broken that Bag Creation fails. I get an error that just says "Sorry. Bag creation failed." I suspect that there is now a complete mismatch between what files are on disk (if any are left on disk) and what files are listed in the Django database.
Given that we have resource types like composite and model instance that may well have hundreds of files in them within a folder hierarchy, we need to make this all a bit more robust.
Since the resource I was testing on now seems to be corrupt, I have shared the original zip file here:
https://drive.google.com/open?id=0B0mUuf2-qdlTX3JVWlZXUkdkUDQ
|
1.0
|
File Operations Slow, Buggy, and Inconsistent - There have now been multiple issues reported by multiple users about weird things happening with file uploads to resources. See #2253, #2224, and #2226. For relatively small numbers of files, the drag and drop upload seems to work OK. But, with larger numbers of files, the performance seems to be unpredictable. This also includes the unzipping procedure, which I have found to also be unpredictable. I've not been able to fully test this because each operation is so slow, but I did some work tonight using this resource and testing in Chrome on my Mac:
https://www.hydroshare.org/resource/11655ab5cc584f54bd6c2c7223cc64d3/
In this resource, there is (was) an 8.6 MB zip file that contains a single folder that has inside of it 478 individual CSV files, each of which is on the order of 65 KB. So, while the number of files is relatively large, this is not a large volume of data.
Here's some things that I have observed. These all likely need to be tested more systematically and perhaps divided up into multiple issues to fix.
1. The zip file uploads to a new resource fine. No problems there.
2. Unzipping the file takes multiple minutes. This seems really slow. But, the first time I tried it, it did work.
3. Once unzipped, it takes many seconds to minutes to open the folder of files created by the unzip operation just to view its contents.
4. It takes many seconds to minutes before I can initiate a download of any of the files (e.g., this is the time that elapses with a spinning wheel from the time that I right click and choose download file to when the download actually happens). The actual download itself appears to be happening quickly. I'm not sure what is happening in the time leading up to the actual download.
5. Deletion of the folder resulted in a "Bad Gateway" error of some sort. After getting this error, it appeared as though the folder had been removed, but I don't think it was done cleanly, and this is where the real issues started.
6. After deleting the folder, at one point the entire resource appeared empty - i.e., the files just disappeared. After exiting edit mode, the zip file reappeared. I don't have the exact sequence that caused this.
7. After deleting the folder and then unzipping the original file again, the folder reappears, but when I double click on it to view its contents it takes multiple minutes to show me the folder contents - and it's empty. The unzip created the folder, but the files do not appear to be there. Then, when I back out to the "contents" folder, the original zip file and the folder are gone (the content directory looks empty). Navigating up one folder to the "contents" directory also booted me out of edit mode. When I went back into edit mode, the zip file and folder magically appeared again.
8. After fiddling with this several times now, it appears as though the zip file is there, and the folder is there, but if I try to download the zip file, I get an error saying that it "Failed - Server problem"
9. The resource is now sufficiently broken that Bag Creation fails. I get an error that just says "Sorry. Bag creation failed." I suspect that there is now a complete mismatch between what files are on disk (if any are left on disk) and what files are listed in the Django database.
Given that we have resource types like composite and model instance that may well have hundreds of files in them within a folder hierarchy, we need to make this all a bit more robust.
Since the resource I was testing on now seems to be corrupt, I have shared the original zip file here:
https://drive.google.com/open?id=0B0mUuf2-qdlTX3JVWlZXUkdkUDQ
|
non_process
|
file operations slow buggy and inconsistent there have now been multiple issues reported by multiple users about weird things happening with file uploads to resources see and for relatively small numbers of files the drag and drop upload seems to work ok but with larger numbers of files the performance seems to be unpredictable this also includes the unzipping procedure which i have found to also be unpredictable i ve not been able to fully test this because each operation is so slow but i did some work tonight using this resource and testing in chrome on my mac in this resource there is was an mb zip file that contains a single folder that has inside of it individual csv files each of which is on the order of kb so while the number of files is relatively large this is not a large volume of data here s some things that i have observed these all likely need to be tested more systematically and perhaps divided up into multiple issues to fix the zip file uploads to a new resource fine no problems there unzipping the file takes multiple minutes this seems really slow but the first time i tried it it did work once unzipped it takes many seconds to minutes to open the folder of files created by the unzip operation just to view its contents it takes many seconds to minutes before i can initiate a download of any of the files e g this is the time that elapses with a spinning wheel from the time that i right click and choose download file to when the download actually happens the actual download itself appears to be happening quickly i m not sure what is happening in the time leading up to the actual download deletion of the folder resulted in a bad gateway error of some sort after getting this error it appeared as though the folder had been removed but i don t think it was done cleanly and this is where the real issues started after deleting the folder at one point the entire resource appeared empty i e the files just disappeared after exiting edit mode the zip file reappeared i don t have the exact sequence that caused this after deleting the folder and then unzipping the original file again the folder reappears but when i double click on it to view its contents it takes multiple minutes to show me the folder contents and it s empty the unzip created the folder but the files do not appear to be there then when i back out to the contents folder the original zip file and the folder are gone the content directory looks empty navigating up one folder to the contents directory also booted me out of edit mode when i went back into edit mode the zip file and folder magically appeared again after fiddling with this several times now it appears as though the zip file is there and the folder is there but if i try to download the zip file i get an error saying that it failed server problem the resource is now sufficiently broken that bag creation fails i get an error that just says sorry bag creation failed i suspect that there is now a complete mismatch between what files are on disk if any are left on disk and what files are listed in the django database given that we have resource types like composite and model instance that may well have hundreds of files in them within a folder hierarchy we need to make this all a bit more robust since the resource i was testing on now seems to be corrupt i have shared the original zip file here
| 0
|
57,841
| 16,100,979,469
|
IssuesEvent
|
2021-04-27 09:14:55
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Invites(?) cause console to flood with "Room does not have an m.room.create event" warnings
|
T-Defect
|
With spaces enabled on today's nightly, my console has nothing but this in it:
```
10:11:57.373 rageshake.js?432e:65 Room !whereverZflxV:matrix.org does not have an m.room.create event
consoleObj.<computed> @ rageshake.js?432e:65
eval @ logger.ts?6b0b:50
Room.getType @ room.js?6146:1966
Room.isSpaceRoom @ room.js?6146:1977
isRoomVisible @ VisibilityProvider.ts?84e8:53
get globalState @ RoomNotificationStateStore.ts?54ff:51
updateStatusIndicator @ MatrixChat.tsx?b3f7:1899
eval @ MatrixChat.tsx?b3f7:1407
emit @ events.js?faa1:158
SyncApi._updateSyncState @ sync.js?cdb6:1695
SyncApi._sync @ sync.js?cdb6:803
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:744
SyncApi._sync @ sync.js?cdb6:820
async function (async)
SyncApi._sync @ sync.js?cdb6:762
|
1.0
|
|
non_process
|
| 0
|
301,221
| 9,217,750,230
|
IssuesEvent
|
2019-03-11 11:35:27
|
project-koku/koku-ui
|
https://api.github.com/repos/project-koku/koku-ui
|
opened
|
Historical tool tips cut off by modal size
|
bug priority - low
|
**Describe the bug**
Historical tool tips cut off by modal size
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'details page'
2. Click on 'historical data'
3. Show first day tool tip
**Expected behavior**
Tool tip data should be readable
**Screenshots**


**Desktop (please complete the following information):**
- OS: Fedora
- Browser Firefox
- Version 65
**Additional context**
Testing against CI insightsbeta
|
1.0
|
|
non_process
|
| 0
|
6,716
| 9,821,052,142
|
IssuesEvent
|
2019-06-14 05:48:46
|
antonvsdata/amp
|
https://api.github.com/repos/antonvsdata/amp
|
closed
|
Drift correction functions
|
Preprocessing
|
Separate functions for actual drift correction and checking the results
|
1.0
|
|
process
|
| 1
|
22,085
| 30,608,236,930
|
IssuesEvent
|
2023-07-23 09:29:51
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
ke-fe-cli 0.1.26 has 25 guarddog issues
|
npm-install-script shady-links npm-silent-process-execution
|
```{"npm-install-script":[{"code":" \"prepare\": \"yarn run build\",","location":"package/dist/server/node_modules/@sqltools/formatter/package.json:80","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"node scripts/check-node-version.js\",","location":"package/dist/server/node_modules/aws-sdk/package.json:169","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/dist/server/node_modules/colorette/package.json:70","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"node -e \\\"try{require('./postinstall')}catch(e){}\\\"\",","location":"package/dist/server/node_modules/core-js/package.json:102","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"node ./postinstall.js\",","location":"package/dist/server/node_modules/ejs/package.json:68","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"install\": \"node install.js\",","location":"package/dist/server/node_modules/fsevents/package.json:65","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc\",","location":"package/dist/server/node_modules/get-caller-file/package.json:68","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/dist/server/node_modules/koa/package.json:108","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/dist/server/node_modules/path-to-regexp/package.json:103","message":"The package.json has a script automatically running when the package is 
installed"},{"code":" \"postinstall\": \"lerna bootstrap\",","location":"package/dist/server/node_modules/resolve/test/resolver/multirepo/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build-nopack\",","location":"package/dist/server/node_modules/ts-node/package.json:121","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cross-env NODE_ENV=production npm run build\",","location":"package/dist/server/node_modules/type-detect/package.json:162","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/dist/server/node_modules/typeorm/node_modules/cliui/package.json:106","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/dist/server/node_modules/typeorm/node_modules/yargs/package.json:140","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"gulp build-eslint-rules\",","location":"package/dist/server/node_modules/typescript/package.json:155","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"runmd --output=README.md README_js.md\",","location":"package/dist/server/node_modules/uuid/package.json:94","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/dist/server/node_modules/y18n/package.json:93","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/dist/server/node_modules/yargs-parser/package.json:109","message":"The package.json has a script 
automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/dist/server/node_modules/zen-observable-ts/package.json:64","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"\t\tspawn(process.execPath, [path.join(__dirname, 'check.js'), JSON.stringify(this.options)], {\n\t\t\tdetached: true,\n\t\t\tstdio: 'ignore'\n\t\t}).unref();","location":"package/dist/server/node_modules/update-notifier/index.js:96","message":"This package is silently executing another executable"}],"shady-links":[{"code":"Website: https://www.tcl.tk/about/language.html","location":"package/dist/server/node_modules/highlight.js/lib/languages/tcl.js:38","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" `The option \\`${optionName}\\` is incompatible with the unified topology, please read more by visiting http://bit.ly/2D8WfT6`,","location":"package/dist/server/node_modules/mongodb/lib/core/sdam/topology.js:130","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" `The \\`${eventName}\\` event is no longer supported by the unified topology, please read more by visiting http://bit.ly/2D8WfT6`,","location":"package/dist/server/node_modules/mongodb/lib/operations/connect.js:480","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" 'http://➡.ws/➡' : {","location":"package/dist/server/node_modules/url/test.js:508","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" 'href': 'http://xn--hgi.ws/➡',","location":"package/dist/server/node_modules/url/test.js:509","message":"This package contains an URL to a domain with a suspicious extension"}]}```
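The report above is a single JSON object keyed by guarddog rule name, each value being a list of `{code, location, message}` findings. A minimal sketch of tallying findings per rule (the miniature report below is invented for illustration, not taken from the package):

```python
import json

# Hypothetical miniature report in the same shape as the guarddog output
# above: rule name -> list of {code, location, message} findings.
report_json = '''
{
  "npm-install-script": [
    {"code": "\\"postinstall\\": \\"node evil.js\\"",
     "location": "package/package.json:10",
     "message": "The package.json has a script automatically running when the package is installed"}
  ],
  "shady-links": [
    {"code": "http://example.zip/x",
     "location": "package/lib/a.js:3",
     "message": "This package contains an URL to a domain with a suspicious extension"}
  ]
}
'''

def summarize(raw: str) -> dict:
    """Return {rule_name: finding_count} for a guarddog-style JSON report."""
    findings = json.loads(raw)
    return {rule: len(items) for rule, items in findings.items()}

print(summarize(report_json))
```

Summing the counts reproduces the total in an issue title like the one above.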
|
1.0
|
|
process
|
| 1
|
403,659
| 11,844,463,889
|
IssuesEvent
|
2020-03-24 05:52:27
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
opened
|
Typo found in WSO2 API Manager 3.0 documentation under "Publish to Multiple External Developer Portals" section
|
Docs/Has Impact Priority/Normal Type/Docs
|
### Description:
Incorrect Developer portal URL (http://localhost:9464/devportal) for the second WSO2 API Manager instance in Step-7; it should be http://localhost:9444/devportal, as the port offset is already incremented by 1 for the second instance in Step-2.
### Content Positioning in Documentation:
https://apim.docs.wso2.com/en/latest/learn/design-api/publish-api/publish-to-multiple-external-api-stores/
Refer Step-7
|
1.0
|
|
non_process
|
| 0
|
14,023
| 16,823,885,292
|
IssuesEvent
|
2021-06-17 15:58:52
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
PANIC in query-engine/connectors/mongodb-query-connector/src/root_queries/read.rs:101:74 called `Option::unwrap()` on a `None` value
|
bug/1-repro-available kind/bug process/candidate team/developer-productivity topic: mongodb
|
I am using mongoDB early-access and have noticed that the prisma client is not working. I was able to replicate the issue in a little demo app. Please find the autogenerated report and my code snippets below.
@matthewmueller, you asked me to create a feature request for one-directional relationships and to report a bug regarding my prisma client issue. Please see below my prisma schema and my thoughts on it in the comments. If prisma client works, I really don't mind how prisma is handling the relationships internally, and I am fine with the two-directional relationship in the prisma.schema. But again, from a mongoose / mongoDB perspective, the pointer back from the other relationship (Ingredient) is kind of unnecessary. I hope that thought process makes sense! But also, it really doesn't matter as long as it's not shown in the DB and prisma client works! 💯 Thanks for your support! 🙏
Hi Prisma Team! My Prisma Client just crashed. This is the report:
## Versions
| Name | Version |
|-----------------|--------------------|
| Node | v14.15.4 |
| OS | undefined|
| Prisma Client | in-memory |
| Query Engine | query-engine c838e79f39885bc8e1611849b1eb28b5bb5bc922|
| Database | undefined|
## Query
```
query {
findManyIngredient(
take: 5
skip: 5
) {
id
name
type
createdAt
updatedAt
defaultIn {
id
name
createdAt
updatedAt
ingredientIds
substitutionIds
}
alternativeFor {
id
name
createdAt
updatedAt
ingredientIds
substitutionIds
}
}
}
```
## Logs
```
raf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
prisma:engine { flags: [ '--enable-raw-queries', '--port', '60387' ] }
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
prisma:engine { flags: [ '--enable-raw-queries', '--port', '60388' ] }
prisma:engine stdout Started http server on http://127.0.0.1:60387
prisma:engine stdout Started http server on http://127.0.0.1:60388
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
prisma:engine Client Version: in-memory
prisma:engine Engine Version: query-engine c838e79f39885bc8e1611849b1eb28b5bb5bc922
prisma:engine Active provider: mongodb
prisma:engine Client Version: in-memory
prisma:engine Engine Version: query-engine c838e79f39885bc8e1611849b1eb28b5bb5bc922
prisma:engine Active provider: mongodb
prisma:engine stdout PANIC in query-engine/connectors/mongodb-query-connector/src/root_queries/read.rs:101:74
called `Option::unwrap()` on a `None` value
prisma:engine {
prisma:engine error: SocketError: other side closed
prisma:engine at Socket.onSocketEnd (/Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/prisma-client/runtime/index.js:25736:24)
prisma:engine at Socket.emit (events.js:327:22)
prisma:engine at Socket.EventEmitter.emit (domain.js:467:12)
prisma:engine at endReadableNT (internal/streams/readable.js:1327:12)
prisma:engine at processTicksAndRejections (internal/process/task_queues.js:80:21) {
prisma:engine code: 'UND_ERR_SOCKET'
prisma:engine }
prisma:engine }
prisma:engine {
prisma:engine error: Error: connect ECONNREFUSED 127.0.0.1:60388
prisma:engine at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) {
prisma:engine errno: -61,
prisma:engine code: 'ECONNREFUSED',
prisma:engine syscall: 'connect',
prisma:engine address: '127.0.0.1',
prisma:engine port: 60388
prisma:engine }
prisma:engine }
prisma:engine { cwd: '/Users/andrelandgraf/workspaces/demo/prisma' }
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
prisma:engine { flags: [ '--enable-raw-queries', '--port', '60400' ] }
prisma:engine Stopping Prisma engine4
prisma:engine Waiting for start promise
prisma:engine Done waiting for start promise
prisma:engine stdout Started http server on http://127.0.0.1:60400
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
prisma:engine Client Version: in-memory
prisma:engine Engine Version: query-engine c838e79f39885bc8e1611849b1eb28b5bb5bc922
prisma:engine Active provider: mongodb
prisma:engine stdout PANIC in query-engine/connectors/mongodb-query-connector/src/root_queries/read.rs:101:74
called `Option::unwrap()` on a `None` value
prisma:engine {
prisma:engine error: SocketError: other side closed
prisma:engine at Socket.onSocketEnd (/Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/prisma-client/runtime/index.js:25736:24)
prisma:engine at Socket.emit (events.js:327:22)
prisma:engine at Socket.EventEmitter.emit (domain.js:467:12)
prisma:engine at endReadableNT (internal/streams/readable.js:1327:12)
prisma:engine at processTicksAndRejections (internal/process/task_queues.js:80:21) {
prisma:engine code: 'UND_ERR_SOCKET'
prisma:engine }
prisma:engine }
```
## Client Snippet
```ts
import { Ingredient, Recipe, FoodType } from '@prisma/client'
import prisma from '../db.server';
// in my case that I try to simplify here, there would be separate forms, so lets make separate queries
const ingredient = await prisma.ingredient.create({ data: {
name: ingredientName,
type: FoodType.dairy // we only support dairy for now...
} });
const alternative = await prisma.ingredient.create({ data: {
name: alternativeName,
type: FoodType.dairy
}})
// recipe will be created sometime later. Ingredients have to be able to "live" without them!
// const recipe = await prisma.recipe.create({ data: {
// name,
// ingredientIds: [ingredient.id],
// substitutionIds: [alternative.id]
// }})
```
## Schema
```prisma
datasource db {
provider = "mongodb"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
previewFeatures = ["mongodb"]
}
enum FoodType {
veggie
fruit
meat
dairy
}
model Ingredient {
id String @id @default(dbgenerated()) @map("_id") @db.ObjectId
name String @unique
type FoodType
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
// fields below would not be necessary in mongoose / NoSQL-land
// but I guess they don't hurt either
// maybe it even provides some nice basis for advanced analytics
defaultIn Recipe[] @relation("DefaultIntegredients")
alternativeFor Recipe[] @relation("HealthierIntegredients")
}
model Recipe {
id String @id @default(dbgenerated()) @map("_id") @db.ObjectId
name String @unique
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
// the default ingredients for this recipe
ingredients Ingredient[] @relation("DefaultIntegredients", fields: [ingredientIds], references: [id])
ingredientIds String[] @db.ObjectId
substitutions Ingredient[] @relation("HealthierIntegredients", fields: [ingredientIds], references: [id])
substitutionIds String[] @db.ObjectId
}
```
|
1.0
|
called `Option::unwrap()` on a `None` value
prisma:engine {
prisma:engine error: SocketError: other side closed
prisma:engine at Socket.onSocketEnd (/Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/prisma-client/runtime/index.js:25736:24)
prisma:engine at Socket.emit (events.js:327:22)
prisma:engine at Socket.EventEmitter.emit (domain.js:467:12)
prisma:engine at endReadableNT (internal/streams/readable.js:1327:12)
prisma:engine at processTicksAndRejections (internal/process/task_queues.js:80:21) {
prisma:engine code: 'UND_ERR_SOCKET'
prisma:engine }
prisma:engine }
prisma:engine {
prisma:engine error: Error: connect ECONNREFUSED 127.0.0.1:60388
prisma:engine at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) {
prisma:engine errno: -61,
prisma:engine code: 'ECONNREFUSED',
prisma:engine syscall: 'connect',
prisma:engine address: '127.0.0.1',
prisma:engine port: 60388
prisma:engine }
prisma:engine }
prisma:engine { cwd: '/Users/andrelandgraf/workspaces/demo/prisma' }
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
prisma:engine { flags: [ '--enable-raw-queries', '--port', '60400' ] }
prisma:engine Stopping Prisma engine4
prisma:engine Waiting for start promise
prisma:engine Done waiting for start promise
prisma:engine stdout Started http server on http://127.0.0.1:60400
plusX Execution permissions of /Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/engines/c838e79f39885bc8e1611849b1eb28b5bb5bc922/query-engine-darwin are fine
prisma:engine Client Version: in-memory
prisma:engine Engine Version: query-engine c838e79f39885bc8e1611849b1eb28b5bb5bc922
prisma:engine Active provider: mongodb
prisma:engine stdout PANIC in query-engine/connectors/mongodb-query-connector/src/root_queries/read.rs:101:74
called `Option::unwrap()` on a `None` value
prisma:engine {
prisma:engine error: SocketError: other side closed
prisma:engine at Socket.onSocketEnd (/Users/andrelandgraf/.npm/_npx/2778af9cee32ff87/node_modules/prisma/prisma-client/runtime/index.js:25736:24)
prisma:engine at Socket.emit (events.js:327:22)
prisma:engine at Socket.EventEmitter.emit (domain.js:467:12)
prisma:engine at endReadableNT (internal/streams/readable.js:1327:12)
prisma:engine at processTicksAndRejections (internal/process/task_queues.js:80:21) {
prisma:engine code: 'UND_ERR_SOCKET'
prisma:engine }
prisma:engine }
```
## Client Snippet
```ts
import { Ingredient, Recipe, FoodType } from '@prisma/client'
import prisma from '../db.server';
// in my case that I try to simplify here, there would be separate forms, so lets make separate queries
const ingredient = await prisma.ingredient.create({ data: {
name: ingredientName,
type: FoodType.dairy // we only support dairy for now...
} });
const alternative = await prisma.ingredient.create({ data: {
name: alternativeName,
type: FoodType.dairy
}})
// recipe will be created somewhen later. Ingredients have to be able to "live" without them!
// const recipe = await prisma.recipe.create({ data: {
// name,
// ingredientIds: [ingredient.id],
// substitutionIds: [alternative.id]
// }})
```
## Schema
```prisma
datasource db {
provider = "mongodb"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
previewFeatures = ["mongodb"]
}
enum FoodType {
veggie
fruit
meat
dairy
}
model Ingredient {
id String @id @default(dbgenerated()) @map("_id") @db.ObjectId
name String @unique
type FoodType
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
// fields below would not be necessary in mongoose / NoSQL-land
// but I guess they don't hurt either
// maybe it even provides some nice basis for advanced analytics
defaultIn Recipe[] @relation("DefaultIntegredients")
alternativeFor Recipe[] @relation("HealthierIntegredients")
}
model Recipe {
id String @id @default(dbgenerated()) @map("_id") @db.ObjectId
name String @unique
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
// the default ingredients for this recipe
ingredients Ingredient[] @relation("DefaultIntegredients", fields: [ingredientIds], references: [id])
ingredientIds String[] @db.ObjectId
substitutions Ingredient[] @relation("HealthierIntegredients", fields: [ingredientIds], references: [id])
substitutionIds String[] @db.ObjectId
}
```
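For context on the panic in the logs above: `Option::unwrap()` aborts a Rust program whenever the value is `None`, which matches the `called Option::unwrap() on a None value` line from read.rs. The sketch below is illustrative only — it is not Prisma's engine code — and shows the failure pattern alongside the usual non-panicking alternative for a field lookup that may legitimately be absent:

```rust
use std::collections::HashMap;

// Generic sketch of the reported failure mode: unwrapping a lookup that
// can be missing panics, while `unwrap_or` degrades gracefully.
fn field_or_default(doc: &HashMap<&str, i64>, key: &str) -> i64 {
    // `get` returns Option<&i64>; calling `.unwrap()` on it would panic
    // for a missing key — the symptom seen in the engine log. Using
    // `unwrap_or` substitutes a default instead.
    doc.get(key).copied().unwrap_or(0)
}

fn main() {
    let mut doc = HashMap::new();
    doc.insert("ingredientIds", 2);
    assert_eq!(field_or_default(&doc, "ingredientIds"), 2);
    // A missing field no longer aborts the process:
    assert_eq!(field_or_default(&doc, "substitutionIds"), 0);
}
```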
|
process
|
panic in query engine connectors mongodb query connector src root queries read rs option unwrap on a none value i am using mongodb early access and have noticed that the prisma client is not working i was able to replicate the issue in a little demo app please find the autogenerated report and my code snippets below matthewmueller you asked me to create a feature request for one directional relationships and to report a bug regarding my prisma client issue please see below my prisma schema and my thoughts on it in the comments if prisma client works i really don t mind how prisma is handling the relationships internally and i am fine with the two directional relationship in the prisma schema but again from a mongoose mongodb perspective the pointer back from the other relationship ingredient is kind of unnecessary i hope that thought process makes sense but also it really doesn t matter as long as its not shown in the db and prisma client works 💯 thanks for your support 🙏 hi prisma team my prisma client just crashed this is the report versions name version node os undefined prisma client in memory query engine query engine database undefined query query findmanyingredient take skip id name type createdat updatedat defaultin id name createdat updatedat ingredientids substitutionids alternativefor id name createdat updatedat ingredientids substitutionids logs raf npm npx node modules prisma engines query engine darwin are fine prisma engine flags plusx execution permissions of users andrelandgraf npm npx node modules prisma engines query engine darwin are fine plusx execution permissions of users andrelandgraf npm npx node modules prisma engines query engine darwin are fine prisma engine flags prisma engine stdout started http server on prisma engine stdout started http server on plusx execution permissions of users andrelandgraf npm npx node modules prisma engines query engine darwin are fine plusx execution permissions of users andrelandgraf npm npx node modules 
prisma engines query engine darwin are fine prisma engine client version in memory prisma engine engine version query engine prisma engine active provider mongodb prisma engine client version in memory prisma engine engine version query engine prisma engine active provider mongodb prisma engine stdout panic in query engine connectors mongodb query connector src root queries read rs called option unwrap on a none value prisma engine prisma engine error socketerror other side closed prisma engine at socket onsocketend users andrelandgraf npm npx node modules prisma prisma client runtime index js prisma engine at socket emit events js prisma engine at socket eventemitter emit domain js prisma engine at endreadablent internal streams readable js prisma engine at processticksandrejections internal process task queues js prisma engine code und err socket prisma engine prisma engine prisma engine prisma engine error error connect econnrefused prisma engine at tcpconnectwrap afterconnect net js prisma engine errno prisma engine code econnrefused prisma engine syscall connect prisma engine address prisma engine port prisma engine prisma engine prisma engine cwd users andrelandgraf workspaces demo prisma plusx execution permissions of users andrelandgraf npm npx node modules prisma engines query engine darwin are fine prisma engine flags prisma engine stopping prisma prisma engine waiting for start promise prisma engine done waiting for start promise prisma engine stdout started http server on plusx execution permissions of users andrelandgraf npm npx node modules prisma engines query engine darwin are fine prisma engine client version in memory prisma engine engine version query engine prisma engine active provider mongodb prisma engine stdout panic in query engine connectors mongodb query connector src root queries read rs called option unwrap on a none value prisma engine prisma engine error socketerror other side closed prisma engine at socket onsocketend users 
andrelandgraf npm npx node modules prisma prisma client runtime index js prisma engine at socket emit events js prisma engine at socket eventemitter emit domain js prisma engine at endreadablent internal streams readable js prisma engine at processticksandrejections internal process task queues js prisma engine code und err socket prisma engine prisma engine client snippet ts import ingredient recipe foodtype from prisma client import prisma from db server in my case that i try to simplify here there would be separate forms so lets make separate queries const ingredient await prisma ingredient create data name ingredientname type foodtype dairy we only support dairy for now const alternative await prisma ingredient create data name alternativename type foodtype dairy recipe will be created somewhen later ingredients have to be able to live without them const recipe await prisma recipe create data name ingredientids substitutionids schema prisma datasource db provider mongodb url env database url generator client provider prisma client js previewfeatures enum foodtype veggie fruit meat dairy model ingredient id string id default dbgenerated map id db objectid name string unique type foodtype createdat datetime default now updatedat datetime updatedat fields below would not be necessary in mongoose nosql land but i guess they don t hurt either maybe it even provides some nice basic for advanced analytics defaultin recipe relation defaultintegredients alternativefor recipe relation healthierintegredients model recipe id string id default dbgenerated map id db objectid name string unique createdat datetime default now updatedat datetime updatedat the default ingredients for this recipe ingredients ingredient relation defaultintegredients fields references ingredientids string db objectid substitutions ingredient relation healthierintegredients fields references substitutionids string db objectid
| 1
|
17,750
| 23,664,537,842
|
IssuesEvent
|
2022-08-26 19:14:36
|
googleapis/google-resumable-media-python
|
https://api.github.com/repos/googleapis/google-resumable-media-python
|
closed
|
Dependency Dashboard
|
type: process api: storage
|
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/distlib-0.x -->[chore(deps): update dependency distlib to v0.3.6](../pull/352)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/click-8.x -->[chore(deps): update dependency click to v8.1.3](../pull/348)
- [ ] <!-- recreate-branch=renovate/setuptools-65.x -->[chore(deps): update dependency setuptools to v65.3.0](../pull/349)
- [ ] <!-- recreate-branch=renovate/protobuf-4.x -->[chore(deps): update dependency protobuf to v4](../pull/351)
## Detected dependencies
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>.kokoro/docker/docs/Dockerfile</summary>
- `ubuntu 22.04`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/docs.yml</summary>
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/checkout v3`
- `actions/setup-python v4`
</details>
<details><summary>.github/workflows/lint.yml</summary>
- `actions/checkout v3`
- `actions/setup-python v4`
</details>
<details><summary>.github/workflows/unittest.yml</summary>
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/upload-artifact v3`
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/download-artifact v3`
</details>
</blockquote>
</details>
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>.kokoro/requirements.txt</summary>
- `argcomplete ==2.0.0`
- `attrs ==22.1.0`
- `bleach ==5.0.1`
- `cachetools ==5.2.0`
- `certifi ==2022.6.15`
- `cffi ==1.15.1`
- `charset-normalizer ==2.1.1`
- `click ==8.0.4`
- `colorlog ==6.6.0`
- `commonmark ==0.9.1`
- `cryptography ==37.0.4`
- `distlib ==0.3.5`
- `docutils ==0.19`
- `filelock ==3.8.0`
- `gcp-docuploader ==0.6.3`
- `gcp-releasetool ==1.8.6`
- `google-api-core ==2.8.2`
- `google-auth ==2.11.0`
- `google-cloud-core ==2.3.2`
- `google-cloud-storage ==2.5.0`
- `google-crc32c ==1.3.0`
- `google-resumable-media ==2.3.3`
- `googleapis-common-protos ==1.56.4`
- `idna ==3.3`
- `importlib-metadata ==4.12.0`
- `jeepney ==0.8.0`
- `jinja2 ==3.1.2`
- `keyring ==23.8.2`
- `markupsafe ==2.1.1`
- `nox ==2022.8.7`
- `packaging ==21.3`
- `pkginfo ==1.8.3`
- `platformdirs ==2.5.2`
- `protobuf ==3.20.1`
- `py ==1.11.0`
- `pyasn1 ==0.4.8`
- `pyasn1-modules ==0.2.8`
- `pycparser ==2.21`
- `pygments ==2.13.0`
- `pyjwt ==2.4.0`
- `pyparsing ==3.0.9`
- `pyperclip ==1.8.2`
- `python-dateutil ==2.8.2`
- `readme-renderer ==37.0`
- `requests ==2.28.1`
- `requests-toolbelt ==0.9.1`
- `rfc3986 ==2.0.0`
- `rich ==12.5.1`
- `rsa ==4.9`
- `secretstorage ==3.3.3`
- `six ==1.16.0`
- `twine ==4.0.1`
- `typing-extensions ==4.3.0`
- `urllib3 ==1.26.12`
- `virtualenv ==20.16.3`
- `webencodings ==0.5.1`
- `wheel ==0.37.1`
- `zipp ==3.8.1`
- `setuptools ==65.2.0`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>setup.py</summary>
- `google-crc32c >= 1.0, < 2.0dev`
- `requests >= 2.18.0, < 3.0.0dev`
- `aiohttp >= 3.6.2, < 4.0.0dev`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/distlib-0.x -->[chore(deps): update dependency distlib to v0.3.6](../pull/352)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/click-8.x -->[chore(deps): update dependency click to v8.1.3](../pull/348)
- [ ] <!-- recreate-branch=renovate/setuptools-65.x -->[chore(deps): update dependency setuptools to v65.3.0](../pull/349)
- [ ] <!-- recreate-branch=renovate/protobuf-4.x -->[chore(deps): update dependency protobuf to v4](../pull/351)
## Detected dependencies
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>.kokoro/docker/docs/Dockerfile</summary>
- `ubuntu 22.04`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/docs.yml</summary>
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/checkout v3`
- `actions/setup-python v4`
</details>
<details><summary>.github/workflows/lint.yml</summary>
- `actions/checkout v3`
- `actions/setup-python v4`
</details>
<details><summary>.github/workflows/unittest.yml</summary>
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/upload-artifact v3`
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/download-artifact v3`
</details>
</blockquote>
</details>
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>.kokoro/requirements.txt</summary>
- `argcomplete ==2.0.0`
- `attrs ==22.1.0`
- `bleach ==5.0.1`
- `cachetools ==5.2.0`
- `certifi ==2022.6.15`
- `cffi ==1.15.1`
- `charset-normalizer ==2.1.1`
- `click ==8.0.4`
- `colorlog ==6.6.0`
- `commonmark ==0.9.1`
- `cryptography ==37.0.4`
- `distlib ==0.3.5`
- `docutils ==0.19`
- `filelock ==3.8.0`
- `gcp-docuploader ==0.6.3`
- `gcp-releasetool ==1.8.6`
- `google-api-core ==2.8.2`
- `google-auth ==2.11.0`
- `google-cloud-core ==2.3.2`
- `google-cloud-storage ==2.5.0`
- `google-crc32c ==1.3.0`
- `google-resumable-media ==2.3.3`
- `googleapis-common-protos ==1.56.4`
- `idna ==3.3`
- `importlib-metadata ==4.12.0`
- `jeepney ==0.8.0`
- `jinja2 ==3.1.2`
- `keyring ==23.8.2`
- `markupsafe ==2.1.1`
- `nox ==2022.8.7`
- `packaging ==21.3`
- `pkginfo ==1.8.3`
- `platformdirs ==2.5.2`
- `protobuf ==3.20.1`
- `py ==1.11.0`
- `pyasn1 ==0.4.8`
- `pyasn1-modules ==0.2.8`
- `pycparser ==2.21`
- `pygments ==2.13.0`
- `pyjwt ==2.4.0`
- `pyparsing ==3.0.9`
- `pyperclip ==1.8.2`
- `python-dateutil ==2.8.2`
- `readme-renderer ==37.0`
- `requests ==2.28.1`
- `requests-toolbelt ==0.9.1`
- `rfc3986 ==2.0.0`
- `rich ==12.5.1`
- `rsa ==4.9`
- `secretstorage ==3.3.3`
- `six ==1.16.0`
- `twine ==4.0.1`
- `typing-extensions ==4.3.0`
- `urllib3 ==1.26.12`
- `virtualenv ==20.16.3`
- `webencodings ==0.5.1`
- `wheel ==0.37.1`
- `zipp ==3.8.1`
- `setuptools ==65.2.0`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>setup.py</summary>
- `google-crc32c >= 1.0, < 2.0dev`
- `requests >= 2.18.0, < 3.0.0dev`
- `aiohttp >= 3.6.2, < 4.0.0dev`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more edited blocked these updates have been manually edited so renovate will no longer make changes to discard all commits and start over click on a checkbox pull ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull pull detected dependencies dockerfile kokoro docker docs dockerfile ubuntu github actions github workflows docs yml actions checkout actions setup python actions checkout actions setup python github workflows lint yml actions checkout actions setup python github workflows unittest yml actions checkout actions setup python actions upload artifact actions checkout actions setup python actions download artifact pip requirements kokoro requirements txt argcomplete attrs bleach cachetools certifi cffi charset normalizer click colorlog commonmark cryptography distlib docutils filelock gcp docuploader gcp releasetool google api core google auth google cloud core google cloud storage google google resumable media googleapis common protos idna importlib metadata jeepney keyring markupsafe nox packaging pkginfo platformdirs protobuf py modules pycparser pygments pyjwt pyparsing pyperclip python dateutil readme renderer requests requests toolbelt rich rsa secretstorage six twine typing extensions virtualenv webencodings wheel zipp setuptools pip setup setup py google requests aiohttp check this box to trigger a request for renovate to run again on this repository
| 1
|
5,693
| 8,561,093,554
|
IssuesEvent
|
2018-11-09 04:54:02
|
dklinges9/Myanmar-forest-loss
|
https://api.github.com/repos/dklinges9/Myanmar-forest-loss
|
opened
|
Calculate fragmentation for last 4 townships
|
data-processing
|
I've already done this, I just need to add in here. Will do so this weekend.
|
1.0
|
Calculate fragmentation for last 4 townships - I've already done this, I just need to add in here. Will do so this weekend.
|
process
|
calculate fragmentation for last townships i ve already done this i just need to add in here will do so this weekend
| 1
|
54,461
| 6,388,626,220
|
IssuesEvent
|
2017-08-03 15:55:28
|
ntop/ntopng
|
https://api.github.com/repos/ntop/ntopng
|
closed
|
ntopng 100% high CPU load
|
testing needed
|
The CPU load is **_# 100%_** after two days of installation. I have 20 users in my LAN.
|
1.0
|
ntopng 100% high CPU load -
The CPU load is **_# 100%_** after two days of installation. I have 20 users in my LAN.
|
non_process
|
ntopng high cpu load the cpu load is after two days of installation i have users in my lan
| 0
|
353,422
| 25,117,543,811
|
IssuesEvent
|
2022-11-09 04:13:29
|
UBCSailbot/docs
|
https://api.github.com/repos/UBCSailbot/docs
|
opened
|
Update docs page Reference/GitHub/Workflow/issues.md
|
documentation update-page
|
### Purpose
<!-- Why is this page being updated? -->
We recently started using Task Lists which falls under the category of GitHub issues, but is slightly different than typical issues. We want a section on task lists to explain the use case and their differences.
### Changes
<!-- What changes will be made to the existing docs page and why? -->
- Add a section on task lists
- What are task lists and why do we use them
- How do we create a task list
Be sure to include screenshots and add any photos to [this directory](https://github.com/UBCSailbot/docs/tree/main/docs/assets/images/github/workflow)!
### Resources
<!-- Link to any extra resources that might help and describe the relevance if not obvious. -->
- https://github.com/UBCSailbot/docs/issues/24 -- example task list
- [Task List Docs](https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists)
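For reference, GitHub task lists are plain Markdown checkboxes inside an issue or comment body; a minimal illustrative sketch (the items below are hypothetical, not from the actual docs page):

```markdown
- [ ] Write the "What are task lists" subsection
- [x] Collect screenshots for docs/assets/images/github/workflow
- [ ] Link the example task list issue
```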
|
1.0
|
Update docs page Reference/GitHub/Workflow/issues.md - ### Purpose
<!-- Why is this page being updated? -->
We recently started using Task Lists which falls under the category of GitHub issues, but is slightly different than typical issues. We want a section on task lists to explain the use case and their differences.
### Changes
<!-- What changes will be made to the existing docs page and why? -->
- Add a section on task lists
- What are task lists and why do we use them
- How do we create a task list
Be sure to include screenshots and add any photos to [this directory](https://github.com/UBCSailbot/docs/tree/main/docs/assets/images/github/workflow)!
### Resources
<!-- Link to any extra resources that might help and describe the relevance if not obvious. -->
- https://github.com/UBCSailbot/docs/issues/24 -- example task list
- [Task List Docs](https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists)
|
non_process
|
update docs page reference github workflow issues md purpose we recently started using task lists which falls under the category of github issues but is slightly different than typical issues we want a section on task lists to explain the use case and their differences changes add a section on task lists what are task lists and why do we use them how do we create a task list be sure to include screenshots and add any photos to resources example task list
| 0
|
653
| 3,122,655,676
|
IssuesEvent
|
2015-09-06 18:53:52
|
K0zka/kerub
|
https://api.github.com/repos/K0zka/kerub
|
closed
|
make expectation id's optional (or maybe remove?)
|
cleanup component:data processing priority: normal
|
Having an ID for each expectation was not the brightest one, it does not have any use.
|
1.0
|
make expectation id's optional (or maybe remove?) - Having an ID for each expectation was not the brightest one, it does not have any use.
|
process
|
make expectation id s optional or maybe remove having an id for each expectation was not the brightest one it does not have any use
| 1
|
12,053
| 14,739,177,206
|
IssuesEvent
|
2021-01-07 06:39:32
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
SAB - Manual Invoices 7/11/18
|
anc-process anp-1.5 ant-support
|
In GitLab by @kdjstudios on Sep 4, 2018, 10:29
**Submitted by:** Kyle Johnson
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/5243724
**Server:** All
**Client/Site:** NA
**Account:** NA
**Issue:**
I just wanted to reach out to you to insure we have the correct procedure in place for manual invoices on active accounts. As Kim brought to my attention a valid concern earlier today. Currently on active accounts when you generate a manual invoice it will add the staged fees, but also add any recurring ‘base’ fees too the draft invoice. Kim mentioned to me that this had in the past lead to additional (unwanted) charges on accounts and also would appear to be effecting why some users are anxious of using the manual invoice feature. Both Kim and I believe that the ‘base’ rates should not be applied to manual invoices, but only to the invoices generated during a billing cycle. Would you confirm if the ‘base’ fees should be added to manual invoices. If so, could you please elaborate on why they need to be added to better our understanding.
NOTE: Terminated accounts do not charge the base rate, as when the account is terminated it will deactivate all activities and fees on the account.
|
1.0
|
SAB - Manual Invoices 7/11/18 - In GitLab by @kdjstudios on Sep 4, 2018, 10:29
**Submitted by:** Kyle Johnson
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/5243724
**Server:** All
**Client/Site:** NA
**Account:** NA
**Issue:**
I just wanted to reach out to you to insure we have the correct procedure in place for manual invoices on active accounts. As Kim brought to my attention a valid concern earlier today. Currently on active accounts when you generate a manual invoice it will add the staged fees, but also add any recurring ‘base’ fees too the draft invoice. Kim mentioned to me that this had in the past lead to additional (unwanted) charges on accounts and also would appear to be effecting why some users are anxious of using the manual invoice feature. Both Kim and I believe that the ‘base’ rates should not be applied to manual invoices, but only to the invoices generated during a billing cycle. Would you confirm if the ‘base’ fees should be added to manual invoices. If so, could you please elaborate on why they need to be added to better our understanding.
NOTE: Terminated accounts do not charge the base rate, as when the account is terminated it will deactivate all activities and fees on the account.
|
process
|
sab manual invoices in gitlab by kdjstudios on sep submitted by kyle johnson helpdesk server all client site na account na issue i just wanted to reach out to you to insure we have the correct procedure in place for manual invoices on active accounts as kim brought to my attention a valid concern earlier today currently on active accounts when you generate a manual invoice it will add the staged fees but also add any recurring ‘base’ fees too the draft invoice kim mentioned to me that this had in the past lead to additional unwanted charges on accounts and also would appear to be effecting why some users are anxious of using the manual invoice feature both kim and i believe that the ‘base’ rates should not be applied to manual invoices but only to the invoices generated during a billing cycle would you confirm if the ‘base’ fees should be added to manual invoices if so could you please elaborate on why they need to be added to better our understanding note terminated accounts do not charge the base rate as when the account is terminated it will deactivate all activities and fees on the account
| 1
|
401,614
| 11,795,228,054
|
IssuesEvent
|
2020-03-18 08:31:54
|
thaliawww/concrexit
|
https://api.github.com/repos/thaliawww/concrexit
|
opened
|
Event looks like registered for though on waiting list
|
bug priority: medium
|
In GitLab by @JobDoesburg on Feb 17, 2020, 15:49
### One-sentence description
When on the waiting list for an event, it still looks like you are registered for that event.
### Current behaviour / Reproducing the bug
- In the calendar stream, the event does appear in your calendar
- Cancelling your registration does show the warning that you have to pay a fine, if after the unregistration deadline
- In the event overview calendar, the dot in front of the event is colored as if you are registered.
### Expected behaviour
- Do not appear in the calendar stream
- Do not show a message that you cannot unregister without having to pay a fine, but something different more fit to the situation
- In the event overview, show some other, pending-ish, colored dot.
|
1.0
|
Event looks like registered for though on waiting list - In GitLab by @JobDoesburg on Feb 17, 2020, 15:49
### One-sentence description
When on the waiting list for an event, it still looks like you are registered for that event.
### Current behaviour / Reproducing the bug
- In the calendar stream, the event does appear in your calendar
- Cancelling your registration does show the warning that you have to pay a fine, if after the unregistration deadline
- In the event overview calendar, the dot in front of the event is colored as if you are registered.
### Expected behaviour
- Do not appear in the calendar stream
- Do not show a message that you cannot unregister without having to pay a fine, but something different more fit to the situation
- In the event overview, show some other, pending-ish, colored dot.
|
non_process
|
event looks like registered for though on waiting list in gitlab by jobdoesburg on feb one sentence description when on the waiting list for an event it still looks like you are registered for that event current behaviour reproducing the bug in the calendar stream the event does appear in your calendar cancelling your registration does show the warning that you have to pay a fine if after the unregistration deadline in the event overview calendar the dot in front of the event is colored as if you are registered expected behaviour do not appear in the calendar stream do not show a message that you cannot unregister without having to pay a fine but something different more fit to the situation in the event overview show some other pending ish colored dot
| 0
|
12,152
| 14,741,418,985
|
IssuesEvent
|
2021-01-07 10:35:32
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
SA Billing - Chicago - Invalid Late Fees
|
anc-process anp-1 ant-bug
|
In GitLab by @kdjstudios on Jan 17, 2019, 16:24
**Submitted by:** "Alina King" <alina.king@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-15-86542
**Server:** Internal
**Client/Site:** Chicago
**Account:** NA
**Issue:**
I thought that I replied to both Allentown and Chicago. I have reviewed the accounts for Chicago and you are approved to add the credits.
|
1.0
|
SA Billing - Chicago - Invalid Late Fees - In GitLab by @kdjstudios on Jan 17, 2019, 16:24
**Submitted by:** "Alina King" <alina.king@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-15-86542
**Server:** Internal
**Client/Site:** Chicago
**Account:** NA
**Issue:**
I thought that I replied to both Allentown and Chicago. I have reviewed the accounts for Chicago and you are approved to add the credits.
|
process
|
sa billing chicago invalid late fees in gitlab by kdjstudios on jan submitted by alina king helpdesk server internal client site chicago account na issue i thought that i replied to both allentown and chicago i have reviewed the accounts for chicago and you are approved to add the credits
| 1
|
2,501
| 5,272,572,833
|
IssuesEvent
|
2017-02-06 13:20:41
|
itsyouonline/identityserver
|
https://api.github.com/repos/itsyouonline/identityserver
|
closed
|
Question: which organizations are shown?
|
process_duplicate state_verification
|
As you can see in the last/bottom screenshot I'm owner of the greenitglobe/environments organization...
Why can't I see this in the top organizations view, while I can see greenitglobe/environments/uk-g8-1? See first screenshot.
In order to access greenitglobe/environments I currently need to click through greenitglobe/environments and then use the breadcrumb navigation to get to the greenitglobe/environments; see second/middle screen shot
Is this by design? Why? Wouldn't it be better to see the top level organizations where I'm owner?



|
1.0
|
Question: which organizations are shown? - As you can see in the last/bottom screenshot I'm owner of the greenitglobe/environments organization...
Why can't I see this in the top organizations view, while I can see greenitglobe/environments/uk-g8-1? See first screenshot.
In order to access greenitglobe/environments I currently need to click through greenitglobe/environments and then use the breadcrumb navigation to get to the greenitglobe/environments; see second/middle screen shot
Is this by design? Why? Wouldn't it be better to see the top level organizations where I'm owner?



|
process
|
question which organizations are shown as you can see in the last bottom screenshot i m owner of the greenitglobe environments organization why can t i see this in the top organizations view while i can see greenitglobe environments uk see first screenshot in order to access greenitglobe environments i currently need to click through greenitglobe environments and then use the breadcrumb navigation to get to the greenitglobe environments see second middle screen shot is this by design why wouldn t it be better to see the top level organizations where i m owner
| 1
|
13,576
| 16,109,917,577
|
IssuesEvent
|
2021-04-27 19:41:06
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Kubernetes: TLS with DNS
|
P2 enhancement process
|
**Problem**
We have insecure ports 80 and 5600 and a secure port 443 that uses a self-signed certificate. We need a way to provide the customer with a secured endpoint. We'd like to consolidate the ports so that we can deprecate 5600 and only expose 443 (with 80 redirecting to 443). Traefik does not [support](https://github.com/containous/traefik/issues/2542) Let's Encrypt with the community edition when there's more than one replica, thus it cannot be used.
**Solution**
- Configure certificates manually or use wildcard certificates
- Associate ingress with a domain name in format `<environment>.mirrornode.hedera.com`
- Enable TLS on ingresses
- Redirect port 80 to 443 at the entrypoint level
- Note: GRPC does not support redirects
**Alternatives**
- Wait for Traefik to add support to store certs in Kubernetes secrets
- Add [cert-manager](https://cert-manager.io/) chart
**Additional Context**
Traefik does not have documentation on using Cert-Manager with it. The best I could find:
- https://www.reddit.com/r/Traefik/comments/d36iry/traefik_20_with_certmanager/
- https://www.cerenit.fr/blog/kubernetes-ovh-traefik2-cert-manager-secrets/
Will need to be tested in a GKE cluster as I wasn't able to expose a port within a minikube VM via my router and access it remotely to work with Let's Encrypt.
|
1.0
|
Kubernetes: TLS with DNS - **Problem**
We have insecure ports 80 and 5600 and a secure port 443 that uses a self-signed certificate. We need a way to provide the customer with a secured endpoint. We'd like to consolidate the ports so that we can deprecate 5600 and only expose 443 (with 80 redirecting to 443). Traefik does not [support](https://github.com/containous/traefik/issues/2542) Let's Encrypt with the community edition when there's more than one replica, thus it cannot be used.
**Solution**
- Configure certificates manually or use wildcard certificates
- Associate ingress with a domain name in format `<environment>.mirrornode.hedera.com`
- Enable TLS on ingresses
- Redirect port 80 to 443 at the entrypoint level
- Note: GRPC does not support redirects
**Alternatives**
- Wait for Traefik to add support to store certs in Kubernetes secrets
- Add [cert-manager](https://cert-manager.io/) chart
**Additional Context**
Traefik does not have documentation on using Cert-Manager with it. The best I could find:
- https://www.reddit.com/r/Traefik/comments/d36iry/traefik_20_with_certmanager/
- https://www.cerenit.fr/blog/kubernetes-ovh-traefik2-cert-manager-secrets/
Will need to be tested in a GKE cluster as I wasn't able to expose a port within a minikube VM via my router and access it remotely to work with Let's Encrypt.
|
process
|
kubernetes tls with dns problem we have insecure ports and and a secure port that uses a self signed certificate we need a way to provide the customer with a secured endpoint we d like to consolidate the ports so that we can deprecate and only expose with redirecting to traefik does not let s encrypt with the community edition when there s more than one replica thus it cannot be used solution configure certificates manually or use wildcard certificates associate ingress with a domain name in format mirrornode hedera com enable tls on ingresses redirect port to at the entrypoint level note grpc does not support redirects alternatives wait for traefik to add support to store certs in kubernetes secrets add chart additional context traefik does not have documentation on using cert manager with best i could find will need to be tested in a gke cluster as i wasn t able to expose a port within a minikube vm via my router and access it remotely to work with let s encrypt
| 1
|
15,830
| 20,020,976,914
|
IssuesEvent
|
2022-02-01 16:22:46
|
ethereumclassic/ECIPs
|
https://api.github.com/repos/ethereumclassic/ECIPs
|
reopened
|
ETC Core Devs Call 22: Proposed Rejection of ECIP-1049
|
status:8 rejected status:1 draft status:0 wip meta:5 call meta:1 governance meta:3 process
|
Agenda:
Discuss fate of ECIP-1049 after three years. (ECIP-1000 clause).
Decision to be made:
+ Move to `Rejected` status, or;
+ Set a Block and Push Contentious Fork with a `Chain Split`
ECIP-1000 Clause: "ECIPs should be changed from Draft or Last Call status, to Rejected, upon request by any person, if they have not made progress in three years. Such a ECIP may be changed to Draft status if the champion provides revisions that meaningfully address public criticism of the proposal, or to Last Call if it meets the criteria required as described in the previous paragraph."
https://ecips.ethereumclassic.org/ECIPs/ecip-1000
Follow Up from: https://github.com/ethereumclassic/ECIPs/issues/382
Formal Proposed Rejection: https://github.com/ethereumclassic/ECIPs/issues/394#issuecomment-1022909537
Documented Github Opposition: https://github.com/ethereumclassic/ECIPs/issues/394#issuecomment-828160552
If time permits: review ECIP-1094 and ECIP-1096 for activity. Newer proposals but appear to be abandoned by the authors as well. Should these be `Withdrawn`?
How to join:
Topic: ETC Core Devs Call 22: Proposed Rejection of ECIP-1049
Time: February 21, 2022
Time: 17:00 UTC
Meeting Link: https://ethereumclassic.org/discord
Channel: community-calls
@ethereumclassic/all-hands
|
1.0
|
ETC Core Devs Call 22: Proposed Rejection of ECIP-1049 - Agenda:
Discuss fate of ECIP-1049 after three years. (ECIP-1000 clause).
Decision to be made:
+ Move to `Rejected` status, or;
+ Set a Block and Push Contentious Fork with a `Chain Split`
ECIP-1000 Clause: "ECIPs should be changed from Draft or Last Call status, to Rejected, upon request by any person, if they have not made progress in three years. Such a ECIP may be changed to Draft status if the champion provides revisions that meaningfully address public criticism of the proposal, or to Last Call if it meets the criteria required as described in the previous paragraph."
https://ecips.ethereumclassic.org/ECIPs/ecip-1000
Follow Up from: https://github.com/ethereumclassic/ECIPs/issues/382
Formal Proposed Rejection: https://github.com/ethereumclassic/ECIPs/issues/394#issuecomment-1022909537
Documented Github Opposition: https://github.com/ethereumclassic/ECIPs/issues/394#issuecomment-828160552
If time permits: review ECIP-1094 and ECIP-1096 for activity. Newer proposals but appear to be abandoned by the authors as well. Should these be `Withdrawn`?
How to join:
Topic: ETC Core Devs Call 22: Proposed Rejection of ECIP-1049
Time: February 21, 2022
Time: 17:00 UTC
Meeting Link: https://ethereumclassic.org/discord
Channel: community-calls
@ethereumclassic/all-hands
|
process
|
etc core devs call proposed rejection of ecip agenda discuss fate of ecip after three years ecip clause decision to be made move to rejected status or set a block and push contentious fork with a chain split ecip clause ecips should be changed from draft or last call status to rejected upon request by any person if they have not made progress in three years such a ecip may be changed to draft status if the champion provides revisions that meaningfully address public criticism of the proposal or to last call if it meets the criteria required as described in the previous paragraph follow up from formal proposed rejection documented github opposition if time permits review ecip and ecip for activity newer proposals but appear to be abandoned by the authors as well should these be withdrawn how to join topic etc core devs call proposed rejection of ecip time february time utc meeting link channel community calls ethereumclassic all hands
| 1
|
503,598
| 14,595,108,641
|
IssuesEvent
|
2020-12-20 09:47:44
|
kovitikus/hecate
|
https://api.github.com/repos/kovitikus/hecate
|
opened
|
Wardrobe
|
Enhancement Medium Priority
|
Equipment sets and cosmetic sets that allow characters to swap between items with ~30s roundtime (possible upgrades that reduce the roundtime).
Items must be in inventory to wear them. Items in a set, but not in the inventory are marked as missing.
EvMenu to manage sets, but simple commands to swap between sets.
Command `wardrobe` and alias `robe`.
`robe edit` for menu.
`robe #` for equipping a set by the number of the set.
Sets can be named as well, such as `robe plate` to don the set labeled as such, before going into combat.
|
1.0
|
Wardrobe - Equipment sets and cosmetic sets that allow characters to swap between items with ~30s roundtime (possible upgrades that reduce the roundtime).
Items must be in inventory to wear them. Items in a set, but not in the inventory are marked as missing.
EvMenu to manage sets, but simple commands to swap between sets.
Command `wardrobe` and alias `robe`.
`robe edit` for menu.
`robe #` for equipping a set by the number of the set.
Sets can be named as well, such as `robe plate` to don the set labeled as such, before going into combat.
|
non_process
|
wardrobe equipment sets and cosmetic sets that allow characters to swap between items with roundtime possible upgrades that reduce the roundtime items must be in inventory to wear them items in a set but not in the inventory are marked as missing evmenu to manage sets but simple commands to swap between sets command wardrobe and alias robe robe edit for menu robe for equipping a set by the number of the set sets can be named as well such as robe plate to don the set labeled as such before going into combat
| 0
|
59,571
| 3,114,421,460
|
IssuesEvent
|
2015-09-03 08:40:46
|
UnknownShadow200/ClassicalSharp
|
https://api.github.com/repos/UnknownShadow200/ClassicalSharp
|
closed
|
Chat Glitch
|
bug high priority
|
The insert marker appears to be placed far to the left of the chat box.

This happens with the OpenGL and Direct3D9 builds.
|
1.0
|
Chat Glitch - The insert marker appears to be placed far to the left of the chat box.

This happens with the OpenGL and Direct3D9 builds.
|
non_process
|
chat glitch the insert marker appears to be placed far to the left of the chat box this happens with the opengl and builds
| 0
|
511,733
| 14,880,455,890
|
IssuesEvent
|
2021-01-20 09:12:39
|
unep-grid/map-x-mgl
|
https://api.github.com/repos/unep-grid/map-x-mgl
|
closed
|
UI: Modules in wrong order
|
bug done inconsistent behaviour priority 2
|
In some cases, the modules are not in the right order.
It has been experienced that
- [x] Login module is hidden by views panel when opening a private project without credentials (see https://app.staging.mapx.org?project=MX-GWE-HNB-O5P-NXA-YH9&views=MX-X5TZ7-NF8Y1-ND3L6&viewsOpen=MX-X5TZ7-NF8Y1-ND3L6&language=en&)
- [x] Views' panel is hidden by the dashboard (not reproducible)
example first point

Version 1.8.0-alpha using Chrome 87.0.4280.88 on Windows 10
|
1.0
|
UI: Modules in wrong order - In some cases, the modules are not in the right order.
It has been experienced that
- [x] Login module is hidden by views panel when opening a private project without credentials (see https://app.staging.mapx.org?project=MX-GWE-HNB-O5P-NXA-YH9&views=MX-X5TZ7-NF8Y1-ND3L6&viewsOpen=MX-X5TZ7-NF8Y1-ND3L6&language=en&)
- [x] Views' panel is hidden by the dashboard (not reproducible)
example first point

Version 1.8.0-alpha using Chrome 87.0.4280.88 on Windows 10
|
non_process
|
ui modules in wrong order in some cases the modules are not in the right order it has been experienced that login module is hidden by views panel when opening a private project without credentials see views panel is hidden by the dashboard not reproducible example first point version alpha using chrome on windows
| 0
|
399,956
| 27,263,367,888
|
IssuesEvent
|
2023-02-22 16:17:46
|
nest-desktop/nest-desktop
|
https://api.github.com/repos/nest-desktop/nest-desktop
|
opened
|
Errors while building the documentation: `Undefined substitution referenced`
|
bug documentation
|
**Describe the bug**
Errors while building the documentation:
```
nest-desktop/docs/user/setup/appimage.rst:2: ERROR: Undefined substitution referenced: "linux"
[...]
```
(The same error is thrown at multiple places.)
**To Reproduce**
Steps to reproduce the behavior:
1. Build the documentation
2. See the errors
**Expected behavior**
The building process should not throw errors by default.
**Screenshots**
n/a
**Desktop (please complete the following information):**
OS-independent
**Smartphone (please complete the following information):**
OS-independent
**Additional context**
n/a
|
1.0
|
Errors while building the documentation: `Undefined substitution referenced` - **Describe the bug**
Errors while building the documentation:
```
nest-desktop/docs/user/setup/appimage.rst:2: ERROR: Undefined substitution referenced: "linux"
[...]
```
(The same error is thrown at multiple places.)
**To Reproduce**
Steps to reproduce the behavior:
1. Build the documentation
2. See the errors
**Expected behavior**
The building process should not throw errors by default.
**Screenshots**
n/a
**Desktop (please complete the following information):**
OS-independent
**Smartphone (please complete the following information):**
OS-independent
**Additional context**
n/a
|
non_process
|
errors while building the documentation undefined substitution referenced describe the bug errors while building the documentation nest desktop docs user setup appimage rst error undefined substitution referenced linux the same error is thrown at multiple places to reproduce steps to reproduce the behavior build the documentation see the errors expected behavior the building process should not throw errors by default screenshots n a desktop please complete the following information os independent smartphone please complete the following information os independent additional context n a
| 0
|
12,430
| 14,927,942,347
|
IssuesEvent
|
2021-01-24 17:19:41
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Dashboard > Stats are not updated instantly unless user logs out and logs in
|
Bug P1 Process: Fixed Process: Tested dev iOS
|
Steps:
1. Add an activity with stats enabled
2. Submit the response from iOS mobile
3. Navigate to dashboard
4. Observe the stats
Actual: Stats are not updated instantly unless user logs out and logs in
Expected: Stats should be updated instantly
|
2.0
|
[iOS] Dashboard > Stats are not updated instantly unless user logs out and logs in - Steps:
1. Add an activity with stats enabled
2. Submit the response from iOS mobile
3. Navigate to dashboard
4. Observe the stats
Actual: Stats are not updated instantly unless user logs out and logs in
Expected: Stats should be updated instantly
|
process
|
dashboard stats are not updated instantly unless user logs out and logs in steps add an activity with stats enabled submit the response from ios mobile navigate to dashboard observe the stats actual stats are not updated instantly unless user logs out and logs in expected stats should be updated instantly
| 1
|
20,316
| 26,960,394,763
|
IssuesEvent
|
2023-02-08 17:43:24
|
googleapis/java-shared-dependencies
|
https://api.github.com/repos/googleapis/java-shared-dependencies
|
closed
|
Dependency Dashboard
|
type: process priority: p4
|
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Repository problems
These problems occurred while renovating this repository.
- WARN: RepoCacheS3.getCacheFolder() - appending missing trailing slash to pathname
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/jackson.version -->[deps: update dependency com.fasterxml.jackson:jackson-bom to v2.14.2](../pull/989)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-3.x -->[chore(deps): update dependency com.google.cloud:google-cloud-shared-dependencies to v3.2.0](../pull/998)
- [ ] <!-- rebase-branch=renovate/google.api-client.version -->[deps: update dependency com.google.api-client:google-api-client-bom to v2.2.0](../pull/980)
- [ ] <!-- rebase-branch=renovate/gapic-generator-java-bom.version -->[deps: update dependency com.google.api:gapic-generator-java-bom to v2.15.0](../pull/993)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-first-party-dependencies-3.x -->[deps: update dependency com.google.cloud:first-party-dependencies to v3.2.0](../pull/995)
- [ ] <!-- rebase-branch=renovate/grpc-gcp.version -->[deps: update dependency com.google.cloud:grpc-gcp to v1.4.1](../pull/987)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-third-party-dependencies-3.x -->[deps: update dependency com.google.cloud:third-party-dependencies to v3.2.0](../pull/996)
- [ ] <!-- rebase-branch=renovate/checker-qual.version -->[deps: update dependency org.checkerframework:checker-qual to v3.30.0](../pull/994)
- [ ] <!-- rebase-branch=renovate/google.core.version -->[deps: update google.core.version to v2.10.0](../pull/997) (`com.google.cloud:google-cloud-core`, `com.google.cloud:google-cloud-core-bom`)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-checkout-3.x -->[deps: update actions/checkout action to v3](../pull/617)
- [ ] <!-- recreate-branch=renovate/actions-setup-java-3.x -->[deps: update actions/setup-java action to v3](../pull/615)
## Detected dependencies
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/approve-readme.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/ci.yaml</summary>
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
</details>
<details><summary>.github/workflows/downstream-native-image.yaml</summary>
- `actions/checkout v2`
- `stCarolas/setup-maven v4`
- `ayltai/setup-graalvm v1`
</details>
<details><summary>.github/workflows/downstream.yaml</summary>
- `actions/checkout v2`
- `actions/setup-java v3`
</details>
<details><summary>.github/workflows/version-check.yaml</summary>
- `actions/checkout v2`
- `stCarolas/setup-maven v4`
- `actions/setup-java v1`
- `actions/checkout v2`
- `stCarolas/setup-maven v4`
- `actions/setup-java v1`
</details>
</blockquote>
</details>
<details><summary>maven</summary>
<blockquote>
<details><summary>dependency-convergence-check/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-dependencies 3.1.3-SNAPSHOT`
- `com.google.guava:guava 31.0.1-jre`
- `com.google.cloud.tools:dependencies 1.5.13`
- `junit:junit 4.13.2`
</details>
<details><summary>first-party-dependencies/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.api:gapic-generator-java-bom 2.14.0`
- `com.google.cloud:grpc-gcp 1.3.2`
- `com.google.code.gson:gson 2.10.1`
- `com.google.cloud:google-cloud-core-bom 2.9.4`
- `com.google.http-client:google-http-client-bom 1.42.3`
- `com.google.oauth-client:google-oauth-client-bom 1.34.1`
- `com.google.api-client:google-api-client-bom 2.1.2`
- `com.google.cloud:google-cloud-core 2.9.4`
- `com.google.cloud:google-cloud-core 2.9.4`
</details>
<details><summary>pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:first-party-dependencies 3.1.3-SNAPSHOT`
- `com.google.cloud:third-party-dependencies 3.1.3-SNAPSHOT`
</details>
<details><summary>third-party-dependencies/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `org.apache.httpcomponents:httpcore 4.4.16`
- `org.apache.httpcomponents:httpclient 4.5.14`
- `org.threeten:threetenbp 1.6.5`
- `javax.annotation:javax.annotation-api 1.3.2`
- `org.codehaus.mojo:animal-sniffer-annotations 1.22`
- `com.google.code.findbugs:jsr305 3.0.2`
- `com.google.errorprone:error_prone_annotations 2.18.0`
- `com.fasterxml.jackson:jackson-bom 2.14.1`
- `commons-codec:commons-codec 1.15`
- `io.opencensus:opencensus-api 0.31.1`
- `io.opencensus:opencensus-contrib-grpc-util 0.31.1`
- `io.opencensus:opencensus-contrib-http-util 0.31.1`
- `io.opencensus:opencensus-contrib-zpages 0.31.1`
- `io.opencensus:opencensus-exporter-stats-stackdriver 0.31.1`
- `io.opencensus:opencensus-exporter-trace-stackdriver 0.31.1`
- `io.opencensus:opencensus-impl 0.31.1`
- `io.opencensus:opencensus-impl-core 0.31.1`
- `org.checkerframework:checker-qual 3.29.0`
- `io.perfmark:perfmark-api 0.26.0`
</details>
<details><summary>upper-bound-check/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:google-cloud-shared-dependencies 3.1.3-SNAPSHOT`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Repository problems
These problems occurred while renovating this repository.
- WARN: RepoCacheS3.getCacheFolder() - appending missing trailing slash to pathname
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/jackson.version -->[deps: update dependency com.fasterxml.jackson:jackson-bom to v2.14.2](../pull/989)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-3.x -->[chore(deps): update dependency com.google.cloud:google-cloud-shared-dependencies to v3.2.0](../pull/998)
- [ ] <!-- rebase-branch=renovate/google.api-client.version -->[deps: update dependency com.google.api-client:google-api-client-bom to v2.2.0](../pull/980)
- [ ] <!-- rebase-branch=renovate/gapic-generator-java-bom.version -->[deps: update dependency com.google.api:gapic-generator-java-bom to v2.15.0](../pull/993)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-first-party-dependencies-3.x -->[deps: update dependency com.google.cloud:first-party-dependencies to v3.2.0](../pull/995)
- [ ] <!-- rebase-branch=renovate/grpc-gcp.version -->[deps: update dependency com.google.cloud:grpc-gcp to v1.4.1](../pull/987)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-third-party-dependencies-3.x -->[deps: update dependency com.google.cloud:third-party-dependencies to v3.2.0](../pull/996)
- [ ] <!-- rebase-branch=renovate/checker-qual.version -->[deps: update dependency org.checkerframework:checker-qual to v3.30.0](../pull/994)
- [ ] <!-- rebase-branch=renovate/google.core.version -->[deps: update google.core.version to v2.10.0](../pull/997) (`com.google.cloud:google-cloud-core`, `com.google.cloud:google-cloud-core-bom`)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-checkout-3.x -->[deps: update actions/checkout action to v3](../pull/617)
- [ ] <!-- recreate-branch=renovate/actions-setup-java-3.x -->[deps: update actions/setup-java action to v3](../pull/615)
## Detected dependencies
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/approve-readme.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/ci.yaml</summary>
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
</details>
<details><summary>.github/workflows/downstream-native-image.yaml</summary>
- `actions/checkout v2`
- `stCarolas/setup-maven v4`
- `ayltai/setup-graalvm v1`
</details>
<details><summary>.github/workflows/downstream.yaml</summary>
- `actions/checkout v2`
- `actions/setup-java v3`
</details>
<details><summary>.github/workflows/version-check.yaml</summary>
- `actions/checkout v2`
- `stCarolas/setup-maven v4`
- `actions/setup-java v1`
- `actions/checkout v2`
- `stCarolas/setup-maven v4`
- `actions/setup-java v1`
</details>
</blockquote>
</details>
<details><summary>maven</summary>
<blockquote>
<details><summary>dependency-convergence-check/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-dependencies 3.1.3-SNAPSHOT`
- `com.google.guava:guava 31.0.1-jre`
- `com.google.cloud.tools:dependencies 1.5.13`
- `junit:junit 4.13.2`
</details>
<details><summary>first-party-dependencies/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.api:gapic-generator-java-bom 2.14.0`
- `com.google.cloud:grpc-gcp 1.3.2`
- `com.google.code.gson:gson 2.10.1`
- `com.google.cloud:google-cloud-core-bom 2.9.4`
- `com.google.http-client:google-http-client-bom 1.42.3`
- `com.google.oauth-client:google-oauth-client-bom 1.34.1`
- `com.google.api-client:google-api-client-bom 2.1.2`
- `com.google.cloud:google-cloud-core 2.9.4`
- `com.google.cloud:google-cloud-core 2.9.4`
</details>
<details><summary>pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:first-party-dependencies 3.1.3-SNAPSHOT`
- `com.google.cloud:third-party-dependencies 3.1.3-SNAPSHOT`
</details>
<details><summary>third-party-dependencies/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `org.apache.httpcomponents:httpcore 4.4.16`
- `org.apache.httpcomponents:httpclient 4.5.14`
- `org.threeten:threetenbp 1.6.5`
- `javax.annotation:javax.annotation-api 1.3.2`
- `org.codehaus.mojo:animal-sniffer-annotations 1.22`
- `com.google.code.findbugs:jsr305 3.0.2`
- `com.google.errorprone:error_prone_annotations 2.18.0`
- `com.fasterxml.jackson:jackson-bom 2.14.1`
- `commons-codec:commons-codec 1.15`
- `io.opencensus:opencensus-api 0.31.1`
- `io.opencensus:opencensus-contrib-grpc-util 0.31.1`
- `io.opencensus:opencensus-contrib-http-util 0.31.1`
- `io.opencensus:opencensus-contrib-zpages 0.31.1`
- `io.opencensus:opencensus-exporter-stats-stackdriver 0.31.1`
- `io.opencensus:opencensus-exporter-trace-stackdriver 0.31.1`
- `io.opencensus:opencensus-impl 0.31.1`
- `io.opencensus:opencensus-impl-core 0.31.1`
- `org.checkerframework:checker-qual 3.29.0`
- `io.perfmark:perfmark-api 0.26.0`
</details>
<details><summary>upper-bound-check/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:google-cloud-shared-dependencies 3.1.3-SNAPSHOT`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more repository problems these problems occurred while renovating this repository warn getcachefolder appending missing trailing slash to pathname open these updates have all been created already click a checkbox below to force a retry rebase of any pull pull pull pull pull pull pull pull pull com google cloud google cloud core com google cloud google cloud core bom click on this checkbox to rebase all open prs at once ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull detected dependencies github actions github workflows approve readme yaml actions github script github workflows ci yaml actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java github workflows downstream native image yaml actions checkout stcarolas setup maven ayltai setup graalvm github workflows downstream yaml actions checkout actions setup java github workflows version check yaml actions checkout stcarolas setup maven actions setup java actions checkout stcarolas setup maven actions setup java maven dependency convergence check pom xml com google cloud google cloud shared dependencies snapshot com google guava guava jre com google cloud tools dependencies junit junit first party dependencies pom xml com google cloud google cloud shared config com google api gapic generator java bom com google cloud grpc gcp com google code gson gson com google cloud google cloud core bom com google http client google http client bom com google oauth client google oauth client bom com google api client google api client bom com google cloud google cloud core com google cloud google cloud core pom xml com google cloud google cloud shared config com google cloud first party dependencies snapshot com google cloud 
third party dependencies snapshot third party dependencies pom xml com google cloud google cloud shared config org apache httpcomponents httpcore org apache httpcomponents httpclient org threeten threetenbp javax annotation javax annotation api org codehaus mojo animal sniffer annotations com google code findbugs com google errorprone error prone annotations com fasterxml jackson jackson bom commons codec commons codec io opencensus opencensus api io opencensus opencensus contrib grpc util io opencensus opencensus contrib http util io opencensus opencensus contrib zpages io opencensus opencensus exporter stats stackdriver io opencensus opencensus exporter trace stackdriver io opencensus opencensus impl io opencensus opencensus impl core org checkerframework checker qual io perfmark perfmark api upper bound check pom xml com google cloud google cloud shared config com google cloud google cloud shared dependencies snapshot check this box to trigger a request for renovate to run again on this repository
| 1
|
2,536
| 5,294,621,758
|
IssuesEvent
|
2017-02-09 11:20:52
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
opened
|
fix ramsey_model syntax
|
preprocessor
|
@MichelJuillard Currently `ramsey_model` accepts a list of symbols in the syntax, and `RamseyModelStatement` accepts a `symbol_list`, but nothing is done with it. Furthermore, there's no reference to the `symbol_list` form in the manual. Is there a reason you added support for `symbol_list` to `ramsey_model` in d945395a15, e90859ba524abfcb570d6cf09394ce86b4405090, and 17477ab095bb4f9b24d0da5ec5cf747f41988bb4?
|
1.0
|
fix ramsey_model syntax - @MichelJuillard Currently `ramsey_model` accepts a list of symbols in the syntax, and `RamseyModelStatement` accepts a `symbol_list`, but nothing is done with it. Furthermore, there's no reference to the `symbol_list` form in the manual. Is there a reason you added support for `symbol_list` to `ramsey_model` in d945395a15, e90859ba524abfcb570d6cf09394ce86b4405090, and 17477ab095bb4f9b24d0da5ec5cf747f41988bb4?
|
process
|
fix ramsey model syntax micheljuillard currently ramsey model accepts a list of symbols in the syntax and the ramseymodelstatement accepts a symbol list but nothing is done with it furthermore there s no reference to the symbol list form in the manual is there a reason you added support for symbol list to ramsey model in and
| 1
|
32,904
| 4,440,716,005
|
IssuesEvent
|
2016-08-19 05:43:56
|
oppia/oppia
|
https://api.github.com/repos/oppia/oppia
|
closed
|
Add undo/redo functionality to the exploration editor
|
loc: frontend team: exploration creation (@seanlip) TODO: design (UX) type: feature (important)
|
```
In as much detail as possible, please describe what you would like to see.
We have the infrastructure for undo/redo functionality in the editor UI, but no
way for exploration creators to access it. We should figure out what this is
and implement it, or actively decide not to.
An idea is to bind Ctrl-Z to 'undo', and Ctrl-Shift-Z or Ctrl-Y to 'redo'. But
this overloads the corresponding commands in the RTEs.
We could, alternatively, have explicit undo and redo buttons, but perhaps that
might be too much UI clutter?
```
Original issue reported on code.google.com by `s...@seanlip.org` on 27 Apr 2014 at 7:08
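The undo/redo infrastructure described above can be sketched as a pair of command stacks (a minimal illustration; `ChangeStack` is a hypothetical name, not Oppia's actual implementation):

```javascript
// Minimal command-stack sketch: apply() records a change, undo()/redo()
// move changes between two stacks, and a fresh change clears the redo stack.
class ChangeStack {
  constructor() {
    this.undoStack = [];
    this.redoStack = [];
  }
  apply(change) {
    this.undoStack.push(change);
    this.redoStack = [];
  }
  undo() {
    if (this.undoStack.length === 0) return null;
    const change = this.undoStack.pop();
    this.redoStack.push(change);
    return change;
  }
  redo() {
    if (this.redoStack.length === 0) return null;
    const change = this.redoStack.pop();
    this.undoStack.push(change);
    return change;
  }
}
```

Ctrl-Z/Ctrl-Y handlers could then call `undo()`/`redo()`, guarded so they don't fire while an RTE has keyboard focus.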
|
1.0
|
Add undo/redo functionality to the exploration editor - ```
In as much detail as possible, please describe what you would like to see.
We have the infrastructure for undo/redo functionality in the editor UI, but no
way for exploration creators to access it. We should figure out what this is
and implement it, or actively decide not to.
An idea is to bind Ctrl-Z to 'undo', and Ctrl-Shift-Z or Ctrl-Y to 'redo'. But
this overloads the corresponding commands in the RTEs.
We could, alternatively, have explicit undo and redo buttons, but perhaps that
might be too much UI clutter?
```
Original issue reported on code.google.com by `s...@seanlip.org` on 27 Apr 2014 at 7:08
|
non_process
|
add undo redo functionality to the exploration editor in as much detail as possible please describe what you would like to see we have the infrastructure for undo redo functionality in the editor ui but no way for exploration creators to access it we should figure out what this is and implement it or actively decide not to an idea is to bind ctrl z to undo and ctrl shift z or ctrl y to redo but this overloads the corresponding commands in the rtes we could alternatively have explicit undo and redo buttons but perhaps that might be too much ui clutter original issue reported on code google com by s seanlip org on apr at
| 0
|
20,461
| 27,128,522,645
|
IssuesEvent
|
2023-02-16 07:59:35
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Cannot use dl_open to load external shared lib that linked with another shared lib
|
P4 type: support / not a bug (process) team-Rules-CPP stale
|
### Description of the problem / feature request:
Cannot use dl_open to load external shared lib that linked with another shared lib
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```
mkdir -p ext
# Make an external repo.
touch ext/WORKSPACE
cat << EOF > ext/shared.cc
int foo() { return 1; }
EOF
cat << EOF > ext/lib.cc
int foo();
int bar() { return foo() + 1; }
EOF
cat << EOF > ext/directlink.cc
int bar();
int main() { bar(); return 0; }
EOF
cat << EOF > ext/usedlopen.cc
#include <string>
#include <stdio.h>
#include <dlfcn.h>
using std::string;
int main(int argc, char** argv) {
string path = argv[0];
path += ".runfiles/ext/libmylib.so";
void* p = dlopen(path.c_str(), RTLD_LAZY);
if (!p) {
printf("Failed to load: %s\n", dlerror());
return 1;
} else {
printf("Loaded\n");
dlclose(p);
}
return 0;
}
EOF
cat << EOF > ext/BUILD
cc_binary(
name = "libshared.so",
srcs = ["shared.cc"],
linkshared = 1,
)
cc_import(
name = "libshared",
shared_library = ":libshared.so",
)
cc_binary(
name = "libmylib.so",
srcs = ["lib.cc"],
deps = [":libshared"],
linkshared = 1,
)
cc_import(
name = "libmylib",
shared_library = ":libmylib.so",
)
cc_binary(
name = "directlink",
srcs = ["directlink.cc"],
deps = [":libmylib"],
)
cc_binary(
name = "usedlopen",
srcs = ["usedlopen.cc"],
data = [":libmylib.so"],
linkopts = ["-ldl"],
)
EOF
cat << EOF > WORKSPACE
local_repository(
name = "ext",
path = "ext"
)
EOF
bazel build @ext//:*
echo "Testing direct link:"
bazel-bin/external/ext/directlink
echo $?
echo "Testing dlopen:"
bazel-bin/external/ext/usedlopen
echo $?
```
Static linking is an option but not always possible, e.g. libshared.so is LGPL licensed, and libmylib.so is actually a python plugin so it must be a shared lib.
### What operating system are you running Bazel on?
ubuntu 16.04
### What's the output of `bazel info release`?
Both 24.0 & 19.2 can repro this issue.
### Have you found anything relevant by searching the web?
### Any other information, logs, or outputs that you want to share?
```
ldd bazel-bin/external/ext/usedlopen.runfiles/ext/libmylib.so
linux-vdso.so.1 => (0x00007ffc1f965000)
libshared.so => not found
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f55eb766000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f55eb45d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f55eb093000)
/lib64/ld-linux-x86-64.so.2 (0x00007f55ebaf2000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f55eae7b000)
```
```
ldd bazel-bin/external/ext/usedlopen.runfiles/__main__/external/ext/libmylib.so
linux-vdso.so.1 => (0x00007ffe08991000)
libshared.so => /home/test/bazeltest/bazel-bin/external/ext/usedlopen.runfiles/__main__/external/ext/../../_solib_k8/_U@ext_S_S_Clibshared___Uexternal_Sext/libshared.so (0x00007f623bde3000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f623b839000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f623b530000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f623b166000)
/lib64/ld-linux-x86-64.so.2 (0x00007f623bbc5000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f623af4e000)
```
If I switch mylib.so to use `__main__/external/` instead of direct runfiles, it can work. However, I'm not able to change that when it's used as a Python plugin.
Invoking the direct link is fine, because Linux follows the symlink before applying `$ORIGIN` in RUNPATH. This is not true for shared libs.
|
1.0
|
Cannot use dl_open to load external shared lib that linked with another shared lib - ### Description of the problem / feature request:
Cannot use dl_open to load external shared lib that linked with another shared lib
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```
mkdir -p ext
# Make an external repo.
touch ext/WORKSPACE
cat << EOF > ext/shared.cc
int foo() { return 1; }
EOF
cat << EOF > ext/lib.cc
int foo();
int bar() { return foo() + 1; }
EOF
cat << EOF > ext/directlink.cc
int bar();
int main() { bar(); return 0; }
EOF
cat << EOF > ext/usedlopen.cc
#include <string>
#include <stdio.h>
#include <dlfcn.h>
using std::string;
int main(int argc, char** argv) {
string path = argv[0];
path += ".runfiles/ext/libmylib.so";
void* p = dlopen(path.c_str(), RTLD_LAZY);
if (!p) {
printf("Failed to load: %s\n", dlerror());
return 1;
} else {
printf("Loaded\n");
dlclose(p);
}
return 0;
}
EOF
cat << EOF > ext/BUILD
cc_binary(
name = "libshared.so",
srcs = ["shared.cc"],
linkshared = 1,
)
cc_import(
name = "libshared",
shared_library = ":libshared.so",
)
cc_binary(
name = "libmylib.so",
srcs = ["lib.cc"],
deps = [":libshared"],
linkshared = 1,
)
cc_import(
name = "libmylib",
shared_library = ":libmylib.so",
)
cc_binary(
name = "directlink",
srcs = ["directlink.cc"],
deps = [":libmylib"],
)
cc_binary(
name = "usedlopen",
srcs = ["usedlopen.cc"],
data = [":libmylib.so"],
linkopts = ["-ldl"],
)
EOF
cat << EOF > WORKSPACE
local_repository(
name = "ext",
path = "ext"
)
EOF
bazel build @ext//:*
echo "Testing direct link:"
bazel-bin/external/ext/directlink
echo $?
echo "Testing dlopen:"
bazel-bin/external/ext/usedlopen
echo $?
```
Static linking is an option but not always possible, e.g. libshared.so is LGPL licensed, and libmylib.so is actually a python plugin so it must be a shared lib.
### What operating system are you running Bazel on?
ubuntu 16.04
### What's the output of `bazel info release`?
Both 24.0 & 19.2 can repro this issue.
### Have you found anything relevant by searching the web?
### Any other information, logs, or outputs that you want to share?
```
ldd bazel-bin/external/ext/usedlopen.runfiles/ext/libmylib.so
linux-vdso.so.1 => (0x00007ffc1f965000)
libshared.so => not found
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f55eb766000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f55eb45d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f55eb093000)
/lib64/ld-linux-x86-64.so.2 (0x00007f55ebaf2000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f55eae7b000)
```
```
ldd bazel-bin/external/ext/usedlopen.runfiles/__main__/external/ext/libmylib.so
linux-vdso.so.1 => (0x00007ffe08991000)
libshared.so => /home/test/bazeltest/bazel-bin/external/ext/usedlopen.runfiles/__main__/external/ext/../../_solib_k8/_U@ext_S_S_Clibshared___Uexternal_Sext/libshared.so (0x00007f623bde3000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f623b839000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f623b530000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f623b166000)
/lib64/ld-linux-x86-64.so.2 (0x00007f623bbc5000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f623af4e000)
```
If I switch mylib.so to use `__main__/external/` instead of direct runfiles, it can work. However, I'm not able to change that when it's used as a Python plugin.
Invoking the direct link is fine, because Linux follows the symlink before applying `$ORIGIN` in RUNPATH. This is not true for shared libs.
|
process
|
cannot use dl open to load external shared lib that linked with another shared lib description of the problem feature request cannot use dl open to load external shared lib that linked with another shared lib bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible mkdir p ext make an external repo touch ext workspace cat ext shared cc int foo return eof cat ext lib cc int foo int bar return foo eof cat ext directlink cc int bar int main bar return eof cat ext usedlopen cc include include include using std string int main int argc char argv string path argv path runfiles ext libmylib so void p dlopen path c str rtld lazy if p printf failed to load s n dlerror return else printf loaded n dlclose p return eof cat ext build cc binary name libshared so srcs linkshared cc import name libshared shared library libshared so cc binary name libmylib so srcs deps linkshared cc import name libmylib shared library libmylib so cc binary name directlink srcs deps cc binary name usedlopen srcs data linkopts eof cat workspace local repository name ext path ext eof bazel build ext echo testing direct link bazel bin external ext directlink echo echo testing dlopen bazel bin external ext usedlopen echo static linking is an option but not always possible e g libshared so is lgpl licensed and libmylib so is actually a python plugin so it must be a shared lib what operating system are you running bazel on ubuntu what s the output of bazel info release both can repro this issue have you found anything relevant by searching the web any other information logs or outputs that you want to share ldd bazel bin external ext usedlopen runfiles ext libmylib so linux vdso so libshared so not found libstdc so usr lib linux gnu libstdc so libm so lib linux gnu libm so libc so lib linux gnu libc so ld linux so libgcc s so lib linux gnu libgcc s so ldd bazel bin external ext usedlopen runfiles main external ext libmylib so linux vdso so libshared so home 
test bazeltest bazel bin external ext usedlopen runfiles main external ext solib u ext s s clibshared uexternal sext libshared so libstdc so usr lib linux gnu libstdc so libm so lib linux gnu libm so libc so lib linux gnu libc so ld linux so libgcc s so lib linux gnu libgcc s so if i switch mylib so to use main external instead of direct runfiles it can work however i m not able to change that if it s used as a python plugin invoking the direct link is fine because linux will follow symlink before applying origin in runpath this is not true for shared libs
| 1
|
50,429
| 3,006,391,996
|
IssuesEvent
|
2015-07-27 10:04:33
|
Itseez/opencv
|
https://api.github.com/repos/Itseez/opencv
|
opened
|
MouseCallback gives unsigned values when mouse is outside the window
|
affected: 2.4 auto-transferred bug category: highgui-gui priority: normal
|
Transferred from http://code.opencv.org/issues/3446
```
|| Adi Shavit on 2013-12-19 08:06
|| Priority: Normal
|| Affected: 2.4.8 (latest release)
|| Category: highgui-gui
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
MouseCallback gives unsigned values when mouse is outside the window
-----------
```
When the mouse goes out of the window, e.g. while the button is still pressed, the MouseCallback event is received with the wrong coordinates.
When the mouse is above or to the left of the window, the values x and/or y (which are ints) are the unsigned short casting of the expected negative values.
Casting x/y to short returns the correct negative position.
On Windows, x and y are received as the low and high words of a single int.
When separating them into 2 separate ints they should be cast to shorts before the assignment to allow for negative values.
See this row "here":http://code.opencv.org/projects/opencv/repository/revisions/master/entry/modules/highgui/src/window_w32.cpp#L1478 .
```
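The cast described above is 16-bit sign extension; a quick sketch of the arithmetic (illustrative only, not the actual OpenCV patch):

```javascript
// Interpret a 16-bit unsigned value as a signed short: mask to 16 bits,
// shift left, then arithmetic-shift right so values >= 0x8000 go negative.
const toSignedShort = (v) => ((v & 0xFFFF) << 16) >> 16;

console.log(toSignedShort(0xFFFF)); // -1: mouse one pixel left of the window
console.log(toSignedShort(10));     // 10: positions inside the window unchanged
```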
History
-------
##### Anna Kogan on 2014-01-13 09:19
```
Hello Adi,
Thank you for reporting the problem. If you could find on a solution for the issue, a contribution (see [[How_to_contribute]]) would be very appreciated!
- Affected version changed from 2.4.0 - 2.4.6 to 2.4.8 (latest
release)
- Assignee set to Adi Shavit
```
##### Andrew Senin on 2014-01-26 16:06
```
- Status changed from New to Open
```
|
1.0
|
MouseCallback gives unsigned values when mouse is outside the window - Transferred from http://code.opencv.org/issues/3446
```
|| Adi Shavit on 2013-12-19 08:06
|| Priority: Normal
|| Affected: 2.4.8 (latest release)
|| Category: highgui-gui
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
MouseCallback gives unsigned values when mouse is outside the window
-----------
```
When the mouse goes out of the window, e.g. while the button is still pressed, the MouseCallback event is received with the wrong coordinates.
When the mouse is above or to the left of the window, the values x and/or y (which are ints) are the unsigned short casting of the expected negative values.
Casting x/y to short returns the correct negative position.
On Windows, x and y are received as the low and high words of a single int.
When separating them into 2 separate ints they should be cast to shorts before the assignment to allow for negative values.
See this row "here":http://code.opencv.org/projects/opencv/repository/revisions/master/entry/modules/highgui/src/window_w32.cpp#L1478 .
```
History
-------
##### Anna Kogan on 2014-01-13 09:19
```
Hello Adi,
Thank you for reporting the problem. If you could find on a solution for the issue, a contribution (see [[How_to_contribute]]) would be very appreciated!
- Affected version changed from 2.4.0 - 2.4.6 to 2.4.8 (latest
release)
- Assignee set to Adi Shavit
```
##### Andrew Senin on 2014-01-26 16:06
```
- Status changed from New to Open
```
|
non_process
|
mousecallback gives unsigned values when mouse is outside the window transferred from adi shavit on priority normal affected latest release category highgui gui tracker bug difficulty pr platform windows mousecallback gives unsigned values when mouse is outside the window when the mouse goes out of the window e g while the button is still pressed the mousecallback event is received with the wrong coordinates when the mouse is above or to the left of the window the values x and or y which are int get are the unsigned short casting of the expected negative values casting x y to short returns the correct negative position on windows x and y are recieved as the low and high words of a single int when separating them into separate ints they should be cast to shorts before the assignment to allow for negative values see this row here history anna kogan on hello adi thank you for reporting the problem if you could find on a solution for the issue a contribution see would be very appreciated affected version changed from to latest release assignee set to adi shavit andrew senin on status changed from new to open
| 0
|
6,589
| 9,664,050,800
|
IssuesEvent
|
2019-05-21 03:34:48
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Check-in GitHub button is grayed out
|
automation/svc cxp process-automation/subsvc product-question triaged
|
I've already synced my source control (Git repo). However this repo is empty which is expected. I'm trying to source control one of my runbooks that as of now only exists in my Azure Automation account, however I have not found in the documentation how to do this.
I noticed that the "Check-in" GitHub button was grayed out, did i miss something when setting up my source control?

How can I push newly created runbooks from **Azure Automation** to **GitHub**?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**
|
1.0
|
Check-in GitHub button is grayed out - I've already synced my source control (Git repo). However this repo is empty which is expected. I'm trying to source control one of my runbooks that as of now only exists in my Azure Automation account, however I have not found in the documentation how to do this.
I noticed that the "Check-in" GitHub button was grayed out, did i miss something when setting up my source control?

How can I push newly created runbooks from **Azure Automation** to **GitHub**?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**
|
process
|
check in github button is grayed out i ve already synced my source control git repo however this repo is empty which is expected i m trying to source control one of my runbooks that as of now only exists in my azure automation account however i have not found in the documentation how to do this i noticed that the check in github button was grayed out did i miss something when setting up my source control how can i push newly created runbooks from azure automation to github document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login georgewallace microsoft alias gwallace
| 1
|
14,122
| 17,018,523,927
|
IssuesEvent
|
2021-07-02 15:14:37
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Status of Bazel 5.0.0-pre.20210623.2
|
P1 release team-XProduct type: process
|
- Expected release date: July 2nd
Task list:
- [x] Pick release baseline: 8b453331163378071f1cfe0ae7c74d551c21b834 with cherrypick 223113c9202e8f338b183d1736d97327d28241ea
- [x] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210623.2rc1/index.html
- [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds/16688
- [x] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210623.2/index.html
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Status of Bazel 5.0.0-pre.20210623.2 - - Expected release date: July 2nd
Task list:
- [x] Pick release baseline: 8b453331163378071f1cfe0ae7c74d551c21b834 with cherrypick 223113c9202e8f338b183d1736d97327d28241ea
- [x] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210623.2rc1/index.html
- [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds/16688
- [x] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210623.2/index.html
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
status of bazel pre expected release date july task list pick release baseline with cherrypick create release candidate post submit push the release update the
| 1
|
13,146
| 15,570,326,829
|
IssuesEvent
|
2021-03-17 02:18:03
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report] Event names containing a hyphen cannot be triggered
|
processing
|
**Problem description**
https://github.com/youzan/vant-weapp/issues/3332
mpx has the same problem
**Environment information**
https://github.com/youzan/vant-weapp/issues/3332
mpx has the same problem
When using mpx, events whose names contain a hyphen cannot be triggered
**Minimal reproduction demo**
https://github.com/youzan/vant-weapp/issues/3332
mpx has the same problem
|
1.0
|
[Bug report] Event names containing a hyphen cannot be triggered - **Problem description**
https://github.com/youzan/vant-weapp/issues/3332
mpx has the same problem
**Environment information**
https://github.com/youzan/vant-weapp/issues/3332
mpx has the same problem
When using mpx, events whose names contain a hyphen cannot be triggered
**Minimal reproduction demo**
https://github.com/youzan/vant-weapp/issues/3332
mpx has the same problem
|
process
|
event names containing a hyphen cannot be triggered problem description mpx has the same problem environment information mpx has the same problem when using mpx event names containing a hyphen cannot be triggered minimal reproduction demo mpx has the same problem
| 1
|
3,062
| 6,048,298,404
|
IssuesEvent
|
2017-06-12 16:08:54
|
AnalyticalGraphicsInc/cesium
|
https://api.github.com/repos/AnalyticalGraphicsInc/cesium
|
opened
|
Proposed eslint rule additions
|
dev process
|
I did a pass on all available rules on http://eslint.org/docs/rules/ and picked out ones that we probably want in Cesium. We'll want to test these out one by one to make sure the rule isn't already handled by some of the other rules we are already using. We can turn a bunch of these on at once, assuming they don't require a bunch of code changes. Rules that do require a bunch of code changes should be done separately. I didn't include any style rules since they will be part of #5369. I also didn't include rules that were likely never to come up (for example explicitly disallowing eval).
__Things we definitely want (almost all of these were part of jsHint and long when moved to eslint)__
[block-scoped-var](http://eslint.org/docs/rules/block-scoped-var)
[default-case](http://eslint.org/docs/rules/default-case)
[guard-for-in](http://eslint.org/docs/rules/guard-for-in)
[no-alert](http://eslint.org/docs/rules/no-alert)
[no-caller](http://eslint.org/docs/rules/no-caller)
[no-floating-decimal](http://eslint.org/docs/rules/no-floating-decimal)
[no-implicit-globals](http://eslint.org/docs/rules/no-implicit-globals)
[no-invalid-this](http://eslint.org/docs/rules/no-invalid-this)
[no-loop-func](http://eslint.org/docs/rules/no-loop-func)
[no-new](http://eslint.org/docs/rules/no-new)
[no-use-before-define](http://eslint.org/docs/rules/no-use-before-define)
__Things we probably want (Similar guidelines already exist in our documentation)__
[no-else-return](http://eslint.org/docs/rules/no-else-return)
[no-extra-parens](http://eslint.org/docs/rules/no-extra-parens)
[class-methods-use-this](http://eslint.org/docs/rules/class-methods-use-this)
[consistent-return](http://eslint.org/docs/rules/consistent-return)
[no-sequences](http://eslint.org/docs/rules/no-sequences)
[no-unused-expressions](http://eslint.org/docs/rules/no-unused-expressions)
[prefer-promise-reject-errors](http://eslint.org/docs/rules/prefer-promise-reject-errors)
[no-shadow](http://eslint.org/docs/rules/no-shadow)
[no-undef-init](http://eslint.org/docs/rules/no-undef-init)
[valid-jsdoc](http://eslint.org/docs/rules/valid-jsdoc) (may require lots of cleanup but well worth it)
__Things we may want (cool ideas that might be overkill)__
[complexity](http://eslint.org/docs/rules/complexity)
__Node only (only apply to other Cesium ecosystem projects)__
[global-require](http://eslint.org/docs/rules/global-require)
[handle-callback-err](http://eslint.org/docs/rules/handle-callback-err)
[no-buffer-constructor](http://eslint.org/docs/rules/no-buffer-constructor)
[no-new-require](http://eslint.org/docs/rules/no-new-require)
|
1.0
|
Proposed eslint rule additions - I did a pass on all available rules on http://eslint.org/docs/rules/ and picked out ones that we probably want in Cesium. We'll want to test these out one by one to make sure the rule isn't already handled by some of the other rules we are already using. We can turn a bunch of these on at once, assuming they don't require a bunch of code changes. Rules that do require a bunch of code changes should be done separately. I didn't include any style rules since they will be part of #5369. I also didn't include rules that were likely never to come up (for example explicitly disallowing eval).
__Things we definitely want (almost all of these were part of jsHint and long when moved to eslint)__
[block-scoped-var](http://eslint.org/docs/rules/block-scoped-var)
[default-case](http://eslint.org/docs/rules/default-case)
[guard-for-in](http://eslint.org/docs/rules/guard-for-in)
[no-alert](http://eslint.org/docs/rules/no-alert)
[no-caller](http://eslint.org/docs/rules/no-caller)
[no-floating-decimal](http://eslint.org/docs/rules/no-floating-decimal)
[no-implicit-globals](http://eslint.org/docs/rules/no-implicit-globals)
[no-invalid-this](http://eslint.org/docs/rules/no-invalid-this)
[no-loop-func](http://eslint.org/docs/rules/no-loop-func)
[no-new](http://eslint.org/docs/rules/no-new)
[no-use-before-define](http://eslint.org/docs/rules/no-use-before-define)
__Things we probably want (Similar guidelines already exist in our documentation)__
[no-else-return](http://eslint.org/docs/rules/no-else-return)
[no-extra-parens](http://eslint.org/docs/rules/no-extra-parens)
[class-methods-use-this](http://eslint.org/docs/rules/class-methods-use-this)
[consistent-return](http://eslint.org/docs/rules/consistent-return)
[no-sequences](http://eslint.org/docs/rules/no-sequences)
[no-unused-expressions](http://eslint.org/docs/rules/no-unused-expressions)
[prefer-promise-reject-errors](http://eslint.org/docs/rules/prefer-promise-reject-errors)
[no-shadow](http://eslint.org/docs/rules/no-shadow)
[no-undef-init](http://eslint.org/docs/rules/no-undef-init)
[valid-jsdoc](http://eslint.org/docs/rules/valid-jsdoc) (may require lots of cleanup but well worth it)
__Things we may want (cool ideas that might be overkill)__
[complexity](http://eslint.org/docs/rules/complexity)
__Node only (only apply to other Cesium ecosystem projects)__
[global-require](http://eslint.org/docs/rules/global-require)
[handle-callback-err](http://eslint.org/docs/rules/handle-callback-err)
[no-buffer-constructor](http://eslint.org/docs/rules/no-buffer-constructor)
[no-new-require](http://eslint.org/docs/rules/no-new-require)
|
process
|
proposed eslint rule additions i did a pass on all available rules on and picked out ones that we probably want in cesium we ll want to test these out one by one to make sure the rule isn t already handled by some of the other rules we are already using we can turn a bunch of these on at once assuming they don t require a bunch of code changes rules that do require a bunch of code changes should be done separately i didn t include any style rules since they will be part of i also didn t include rules that were likely never to come up for example explicitly disallowing eval things we definitely want almost all of these were part of jshint and long when moved to eslint things we probably want similar guidelines already exist in our documentation may require lots of cleanup but well worth it things we may want cool ideas that might be overkill node only only apply to other cesium ecosystem projects
| 1
|
14,217
| 17,137,854,180
|
IssuesEvent
|
2021-07-13 05:56:10
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
When Process.WaitForExitCore(int milliseconds) waits successfully, the redirected output streams miss some data
|
area-System.Diagnostics.Process untriaged
|
### Description
When the following code executes, the output of the child process sometimes doesn't contain the full text, even though all the WaitForExitCore(int milliseconds) calls finish successfully, i.e. no timeout happens. The expected behavior in this case is the same as when calling WaitForExitCore() without the timeout arg.
```
class Program
{
static void Main()
{
for (var i = 0; i < 1000; i++)
{
var res = RunTest();
if (res.Trim() != "TEST")
Debugger.Break(); // should never break here
}
Debugger.Break();
}
static string RunTest()
{
var info = new ProcessStartInfo(CmdPath, " /C echo TEST")
{
CreateNoWindow = true,
UseShellExecute = false,
RedirectStandardInput = true,
RedirectStandardOutput = true,
RedirectStandardError = true,
};
var process = Process.Start(info);
var sync = new object();
var buf = new StringBuilder();
process.OutputDataReceived +=
(_, args) =>
{
Thread.Sleep(1);
lock (sync)
{
buf.AppendLine(args.Data);
Trace.WriteLine(args.Data);
}
};
process.BeginOutputReadLine();
process.ErrorDataReceived +=
(_, args) =>
{
Thread.Sleep(1);
lock (sync)
{
buf.AppendLine(args.Data);
Trace.WriteLine(args.Data);
}
};
process.BeginErrorReadLine();
if (!process.WaitForExit(10_000))
throw new TimeoutException();
lock (sync)
{
var res = buf.ToString();
return res;
}
}
static string CmdPath => Path.Combine(Environment.SystemDirectory, "cmd.exe");
}
```
### Configuration
.NET 5, 6
Windows 10 x64
### Regression?
No
|
1.0
|
When Process.WaitForExitCore(int milliseconds) waits successfully, the redirected output streams miss some data - ### Description
When the following code executes, the output of the child processes sometimes doesn't return the full text, even though all the WaitForExitCore(int milliseconds) calls finish successfully, i.e. no timeout happens. The expected behavior in this case is the same as when calling WaitForExitCore() without the timeout arg.
```
class Program
{
static void Main()
{
for (var i = 0; i < 1000; i++)
{
var res = RunTest();
if (res.Trim() != "TEST")
Debugger.Break(); // should never break here
}
Debugger.Break();
}
static string RunTest()
{
var info = new ProcessStartInfo(CmdPath, " /C echo TEST")
{
CreateNoWindow = true,
UseShellExecute = false,
RedirectStandardInput = true,
RedirectStandardOutput = true,
RedirectStandardError = true,
};
var process = Process.Start(info);
var sync = new object();
var buf = new StringBuilder();
process.OutputDataReceived +=
(_, args) =>
{
Thread.Sleep(1);
lock (sync)
{
buf.AppendLine(args.Data);
Trace.WriteLine(args.Data);
}
};
process.BeginOutputReadLine();
process.ErrorDataReceived +=
(_, args) =>
{
Thread.Sleep(1);
lock (sync)
{
buf.AppendLine(args.Data);
Trace.WriteLine(args.Data);
}
};
process.BeginErrorReadLine();
if (!process.WaitForExit(10_000))
throw new TimeoutException();
lock (sync)
{
var res = buf.ToString();
return res;
}
}
static string CmdPath => Path.Combine(Environment.SystemDirectory, "cmd.exe");
}
```
### Configuration
.NET 5, 6
Windows 10 x64
### Regression?
No
|
process
|
when process waitforexitcore int milliseconds waits successfully the redirected output streams miss some data description when the following code executes the output of the child processes doesn t return full text sometimes even though all the waitforexitcore int milliseconds calls finish successfully i e no timeout happens the expected behavior in this case is same as when calling waitforexitcore without the timeout arg class program static void main for var i i i var res runtest if res trim test debugger break should never break here debugger break static string runtest var info new processstartinfo cmdpath c echo test createnowindow true useshellexecute false redirectstandardinput true redirectstandardoutput true redirectstandarderror true var process process start info var sync new object var buf new stringbuilder process outputdatareceived args thread sleep lock sync buf appendline args data trace writeline args data process beginoutputreadline process errordatareceived args thread sleep lock sync buf appendline args data trace writeline args data process beginerrorreadline if process waitforexit throw new timeoutexception lock sync var res buf tostring return res static string cmdpath path combine environment systemdirectory cmd exe configuration net windows regression no
| 1
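The record above describes a known race: the timed `WaitForExit(Int32)` overload can return before the asynchronous `OutputDataReceived`/`ErrorDataReceived` handlers have drained the redirected streams (the .NET documentation suggests following a successful timed wait with a parameterless `WaitForExit()` call for exactly this reason). The same drain-before-read pattern can be sketched in Python; the helper name and structure below are illustrative, not part of the original report:

```python
import subprocess
import threading

def run_and_capture(cmd, timeout=10):
    # Start the child with both output streams redirected, mirroring the
    # C# repro above (Popen + reader threads instead of OutputDataReceived).
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    chunks = []
    lock = threading.Lock()

    def pump(stream):
        for line in stream:
            with lock:
                chunks.append(line)

    readers = [threading.Thread(target=pump, args=(s,))
               for s in (proc.stdout, proc.stderr)]
    for t in readers:
        t.start()

    proc.wait(timeout=timeout)  # the timed wait alone is not enough...
    for t in readers:
        t.join()                # ...the readers must finish before we assemble
    return "".join(chunks)

output = run_and_capture(["echo", "TEST"])
```

Without the `join()` calls this sketch exhibits the same symptom as the issue: the wait succeeds, but the captured buffer may be missing trailing output.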
|
15,267
| 19,248,121,411
|
IssuesEvent
|
2021-12-09 00:17:12
|
MasterPlayer/adxl345-sv
|
https://api.github.com/repos/MasterPlayer/adxl345-sv
|
opened
|
Modify hardware component to extend functionality
|
enhancement hardware process
|
1. AVG calculation with a selectable window over some interval (add a `window size` register)
2. Direct access to reading, not from internal FPGA memory
3. Direct Access to reading over interrupt mechanism
- Send Request to adxl345 component
- Reading data
- Interrupt
- Maybe, need additional bank for realize this mechanism
4. New data generate interrupt
5. Self-Calibration
- Read data on window
- Calculation average
- Calculation offset
- Need to know which mode the device is in (BW_RATE, LOW_POWER, SLEEP, etc.)
6. Optimal request time calculation based on the following parameters
- Clock period
- BW_RATE
- Low power
|
1.0
|
Modify hardware component to extend functionality - 1. AVG calculation with a selectable window over some interval (add a `window size` register)
2. Direct access to reading, not from internal FPGA memory
3. Direct Access to reading over interrupt mechanism
- Send Request to adxl345 component
- Reading data
- Interrupt
- Maybe, need additional bank for realize this mechanism
4. New data generate interrupt
5. Self-Calibration
- Read data on window
- Calculation average
- Calculation offset
- Need to know which mode the device is in (BW_RATE, LOW_POWER, SLEEP, etc.)
6. Optimal request time calculation based on the following parameters
- Clock period
- BW_RATE
- Low power
|
process
|
modify hardware component for extend functional avg calculation with selecting window for some interval add register window size direct access to reading not from internal fpga memory direct access to reading over interrupt mechanism send request to component reading data interrupt maybe need additional bank for realize this mechanism new data generate interrupt self calibration read data on window calculation average calculation offset need know which mode in device bw rate low power sleep etc optimal request time calculation according next parameters clock period bw rate low power
| 1
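The windowed-average and self-calibration steps listed in this record (items 1 and 5) can be modeled in software before committing them to hardware. A minimal Python sketch follows; the class name, the `expected` rest value, and the offset rule are illustrative assumptions, not the FPGA component's actual interface:

```python
from collections import deque

class WindowedCalibrator:
    """Windowed average plus offset self-calibration, as proposed above."""

    def __init__(self, window_size, expected=0.0):
        # `window_size` models the proposed "window size" register.
        self.samples = deque(maxlen=window_size)
        self.expected = expected  # expected reading at rest, e.g. 0 g
        self.offset = 0.0

    def add(self, raw):
        self.samples.append(raw)

    def average(self):
        # Average over the current window of raw samples.
        return sum(self.samples) / len(self.samples)

    def calibrate(self):
        # Offset = average over the window minus the expected rest value.
        self.offset = self.average() - self.expected

    def read(self, raw):
        # Calibrated reading with the learned offset removed.
        return raw - self.offset

cal = WindowedCalibrator(window_size=4)
for sample in (10, 12, 11, 11):  # rest samples with a constant bias
    cal.add(sample)
cal.calibrate()
```

Note the calibration assumes the device's mode is known (BW_RATE, LOW_POWER, SLEEP), exactly as item 5 points out, since the expected rest value depends on it.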
|
111,061
| 11,723,188,864
|
IssuesEvent
|
2020-03-10 08:37:28
|
FACN8/Donator
|
https://api.github.com/repos/FACN8/Donator
|
closed
|
prioritize issues
|
documentation
|
can we add priority labels on each user story so we can understand what this sprint is about?
that way we could also sort out the milestone, cause the milestone only has 1 issue now which is the user journey. we need to have the user stories in there too.
1 last thing, assign yourself to the issues you are working on.
|
1.0
|
prioritize issues - can we add priority labels on each user story so we can understand what this sprint is about?
that way we could also sort out the milestone, cause the milestone only has 1 issue now which is the user journey. we need to have the user stories in there too.
1 last thing, assign yourself to the issues you are working on.
|
non_process
|
prioritize issues can we add priority labels on each user story so we can understand what this is sprint is about that way we could also sort out the milestone cause the milestone only has issue now which is the user journey we need to have the user stories in there too last thing assign yourself to the issues you are working on
| 0
|
13,109
| 15,498,137,475
|
IssuesEvent
|
2021-03-11 05:57:49
|
cypress-io/cypress-documentation
|
https://api.github.com/repos/cypress-io/cypress-documentation
|
closed
|
Set up main branch to be 'master' - no longer require merging 'develop' to 'master'
|
process: deployment stage: ready for work
|
## I'm submitting a...
```
[ ] Bug report
[ ] Content update
[x] Process update (build, deployment, ... )
```
## Type of bug / changes
- Remove step of merging `develop` into `master`
- if tests pass, deploy to staging, if staging passes, directly deploy to production.
- Update main branch within GitHub to be `master`
- Update circle.yml?
- ???
- profit
|
1.0
|
Set up main branch to be 'master' - no longer require merging 'develop' to 'master' - ## I'm submitting a...
```
[ ] Bug report
[ ] Content update
[x] Process update (build, deployment, ... )
```
## Type of bug / changes
- Remove step of merging `develop` into `master`
- if tests pass, deploy to staging, if staging passes, directly deploy to production.
- Update main branch within GitHub to be `master`
- Update circle.yml?
- ???
- profit
|
process
|
set up main branch to be master no longer require merging develop to master i m submitting a bug report content update process update build deployment type of bug changes remove step of merging develop into master if tests pass deploy to staging if staging passes directly deploy to production update main branch within github to be master update circle yml profit
| 1
|
14,532
| 17,630,671,682
|
IssuesEvent
|
2021-08-19 07:32:36
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Kidnapped by Danger: The Avery Jessup Story
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Kidnapped by Danger: The Avery Jessup Story
Type (film/tv show): tv movie
Film or show in which it appears: 30 Rock
Is the parent film/show streaming anywhere? Netflix
About when in the parent film/show does it appear? S06E14
Actual footage of the film/show can be seen (yes/no)? yes
|
1.0
|
Add Kidnapped by Danger: The Avery Jessup Story - Please add as much of the following info as you can:
Title: Kidnapped by Danger: The Avery Jessup Story
Type (film/tv show): tv movie
Film or show in which it appears: 30 Rock
Is the parent film/show streaming anywhere? Netflix
About when in the parent film/show does it appear? S06E14
Actual footage of the film/show can be seen (yes/no)? yes
|
process
|
add kidnapped by danger the avery jessup story please add as much of the following info as you can title kidnapped by danger the avery jessup story type film tv show tv movie film or show in which it appears rock is the parent film show streaming anywhere netflix about when in the parent film show does it appear actual footage of the film show can be seen yes no yes
| 1
|
18,669
| 24,584,831,229
|
IssuesEvent
|
2022-10-13 18:43:26
|
sysflow-telemetry/sysflow
|
https://api.github.com/repos/sysflow-telemetry/sysflow
|
closed
|
Parametric object storage export path configuration
|
enhancement sf-exporter sf-processor
|
**Indicate project**
processor, exporter
**Describe the feature you'd like**
Make the export path to object storage parametric (e.g., allow accountID, clusterID, etc. to be included)
|
1.0
|
Parametric object storage export path configuration - **Indicate project**
processor, exporter
**Describe the feature you'd like**
Make the export path to object storage parametric (e.g., allow accountID, clusterID, etc. to be included)
|
process
|
parametric object storage export path configuration indicate project processor exporter describe the feature you d like make the export path to object storage parametric e g allow accountid clusterid etc to be included
| 1
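The parametric export path requested in this record amounts to template expansion over fields such as accountID and clusterID. A sketch of the idea, with a hypothetical helper and placeholder syntax (not the sf-exporter's real configuration format):

```python
def expand_export_path(template, **fields):
    # Expand {placeholder} fields in an object-storage export path template.
    return template.format(**fields)

path = expand_export_path(
    "sysflow/{accountID}/{clusterID}/traces",
    accountID="acct1", clusterID="c9",
)
```

Any field the deployment knows (account, cluster, node) can be threaded through this way without changing the exporter's upload logic.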
|
92,391
| 15,857,067,647
|
IssuesEvent
|
2021-04-08 03:53:01
|
Hans-Zamorano-Matamala/mean_entrenamiento
|
https://api.github.com/repos/Hans-Zamorano-Matamala/mean_entrenamiento
|
opened
|
CVE-2020-28500 (Medium) detected in lodash-4.17.10.tgz
|
security vulnerability
|
## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.10.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz</a></p>
<p>Path to dependency file: mean_entrenamiento/client/package.json</p>
<p>Path to vulnerable library: mean_entrenamiento/client/node_modules/lodash/package.json,mean_entrenamiento/client/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.6.7.tgz (Root Library)
- :x: **lodash-4.17.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28500 (Medium) detected in lodash-4.17.10.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.10.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz</a></p>
<p>Path to dependency file: mean_entrenamiento/client/package.json</p>
<p>Path to vulnerable library: mean_entrenamiento/client/node_modules/lodash/package.json,mean_entrenamiento/client/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.6.7.tgz (Root Library)
- :x: **lodash-4.17.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file mean entrenamiento client package json path to vulnerable library mean entrenamiento client node modules lodash package json mean entrenamiento client node modules lodash package json dependency hierarchy build angular tgz root library x lodash tgz vulnerable library vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions whitesource note after conducting further research whitesource has determined that cve only affects environments with versions to of lodash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
19,705
| 6,758,115,430
|
IssuesEvent
|
2017-10-24 13:16:18
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
reopened
|
Build is failing with Gradle 4.2
|
build
|
Not sure what causes it but when I run:
```sh
gradle :plugins:repository-azure:check
```
I'm getting (tested on 5.6, 6.0 and 6.x branches):
```
* What went wrong:
Execution failed for task ':plugins:repository-azure:integTestCluster#installRepositoryAzurePlugin'.
> A problem occurred starting process 'command '/Users/dpilato/Documents/Elasticsearch/dev/elasticsearch/es-5.x/elasticsearch/plugins/repository-azure/build/cluster/integTestCluster node0/elasticsearch-5.6.3-SNAPSHOT/bin/elasticsearch-plugin''
```
Note that on 6.0 and 6.x the message is a bit different as it fails when running `bin/elasticsearch-keystore`. I checked the directory manually and I'm able to run `bin/elasticsearch-plugin` or `bin/elasticsearch-keystore` by hand.
```sh
$ gradle -version
------------------------------------------------------------
Gradle 4.2
------------------------------------------------------------
Build time: 2017-09-20 14:48:23 UTC
Revision: 5ba503cc17748671c83ce35d7da1cffd6e24dfbd
Groovy: 2.4.11
Ant: Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM: 1.8.0_144 (Oracle Corporation 25.144-b01)
OS: Mac OS X 10.12.6 x86_64
```
Running with `gradlew` works:
```sh
./gradlew :plugins:repository-azure:check
```
gradlew reports a 3.3 version of Gradle.
|
1.0
|
Build is failing with Gradle 4.2 - Not sure what causes it but when I run:
```sh
gradle :plugins:repository-azure:check
```
I'm getting (tested on 5.6, 6.0 and 6.x branches):
```
* What went wrong:
Execution failed for task ':plugins:repository-azure:integTestCluster#installRepositoryAzurePlugin'.
> A problem occurred starting process 'command '/Users/dpilato/Documents/Elasticsearch/dev/elasticsearch/es-5.x/elasticsearch/plugins/repository-azure/build/cluster/integTestCluster node0/elasticsearch-5.6.3-SNAPSHOT/bin/elasticsearch-plugin''
```
Note that on 6.0 and 6.x the message is a bit different as it fails when running `bin/elasticsearch-keystore`. I checked the directory manually and I'm able to run `bin/elasticsearch-plugin` or `bin/elasticsearch-keystore` by hand.
```sh
$ gradle -version
------------------------------------------------------------
Gradle 4.2
------------------------------------------------------------
Build time: 2017-09-20 14:48:23 UTC
Revision: 5ba503cc17748671c83ce35d7da1cffd6e24dfbd
Groovy: 2.4.11
Ant: Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM: 1.8.0_144 (Oracle Corporation 25.144-b01)
OS: Mac OS X 10.12.6 x86_64
```
Running with `gradlew` works:
```sh
./gradlew :plugins:repository-azure:check
```
gradlew reports a 3.3 version of Gradle.
|
non_process
|
build is failing with gradle not sure what causes it but when i run sh gradle plugins repository azure check i m getting tested on and x branches what went wrong execution failed for task plugins repository azure integtestcluster installrepositoryazureplugin a problem occurred starting process command users dpilato documents elasticsearch dev elasticsearch es x elasticsearch plugins repository azure build cluster integtestcluster elasticsearch snapshot bin elasticsearch plugin note that on and x the message is a bit different as it fails when running bin elasticsearch keystore i checked manually the directory and i m able to run bin elasticsearch plugin or bin elasticsearch keystore manually sh gradle version gradle build time utc revision groovy ant apache ant tm version compiled on june jvm oracle corporation os mac os x running with gradlew works sh gradlew plugins repository azure check gradlew reports a version of gradle
| 0
|
21,417
| 29,359,590,715
|
IssuesEvent
|
2023-05-28 00:36:41
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remoto] Software Architect na Coodesh
|
SALVADOR TESTE BANCO DE DADOS FULL-STACK SQL VISÃO COMPUTACIONAL STARTUP DOCKER KUBERNETES NOSQL DEVOPS SOLID AWS REQUISITOS REMOTO PROCESSOS INOVAÇÃO GITHUB CI CD AZURE UMA C QUALIDADE DOCUMENTAÇÃO PADRÕ MICROSERVICES SAAS MACHINE LEARNING ENGENHARIA DE SOFTWARE INTELIGÊNCIA ARTIFICIAL Stale
|
## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/arquiteto-de-software-200352198?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Pix Force</strong> está em busca de <strong><ins>Software Architect</ins></strong> para compor seu time!</p>
<p>A Pix Force foi fundada em 2015 e, de lá pra cá, desenvolve soluções utilizando tecnologias de visão computacional, inteligência artificial e machine learning. Nossos projetos e soluções transformam dados e imagens em informações valiosas para os clientes através da interpretação automática de imagens e vídeos.</p>
<p>O nosso time de Pix Whizz é formado de profissionais especializados em diversas áreas de conhecimento. A palavra-chave para trabalhar conosco é INOVAÇÃO. Se você se considera um profissional inovador, então você tem tudo para se tornar parte da nossa equipe. </p>
<p></p>
<p>O jeito Pix de ser:</p>
<ul>
<li>Quando um Pix Whizz pede ajuda, fazemos o possível para colaborar;</li>
<li>Sempre falamos quando algo não está bom e sugerimos melhorias;</li>
<li>Somos inovadores nos projetos, nas ideias e nos processos. Não deixamos de fazer por medo de errar;</li>
<li>Estamos em constante desenvolvimento e evolução, respeitamos as diferenças e o tempo de cada um;</li>
<li>Somos protagonistas da história da Pix, juntos seremos os melhores do mundo!</li>
</ul>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Responsável por estabelecer a arquitetura, o desenho e a diretrizes relacionadas a construção dos produtos;</li>
<li>Elaborar a documentação técnica de especificação de requisitos de softwares;</li>
<li>Definir padrões de teste (unitário, etc.) e de qualidade (código, entregáveis, etc.);</li>
<li>Desempenhar o papel de tradução dos requisitos do produto em especificações de engenharia de software.</li>
</ul>
## Pix Force:
<p>A Pix Force desenvolve soluções utilizando tecnologias de visão computacional, inteligência artificial e machine learning. Fornecemos informações valiosas para os nossos clientes através de aquisição e interpretação automática de imagens e vídeos.</p>
<p>Startup #1 em visão computacional no Brasil e multipremiada.</p><a href='https://coodesh.com/empresas/pix-force'>Veja mais no site</a>
## Habilidades:
- DevOps
- Kubernetes
- CI/CD
- API
- AWS
- Docker
- Banco de dados relacionais (SQL)
- Microservices
- SOLID
## Local:
100% Remoto
## Requisitos:
- Sólida experiência nas tecnologias citadas;
- Formação superior;
- SOLID;
- Microsserviços;
- Arquitetura cloud: AWS /Azure;
- Bancos de dados: SQL e NoSQL;
- Servidores;
- Docker;
- Kubernetes;
- DevOps;
- CI/CD;
- API;
- Software stack;
- SaaS.
## Diferenciais:
- Gestão de equipes;
- Mestrado ou doutorado.
## Benefícios:
- Horários Flexíveis;
- Stock Options.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Software Architect na Pix Force](https://coodesh.com/vagas/arquiteto-de-software-200352198?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Categoria
Full-Stack
|
1.0
|
[Remoto] Software Architect na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/arquiteto-de-software-200352198?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Pix Force</strong> está em busca de <strong><ins>Software Architect</ins></strong> para compor seu time!</p>
<p>A Pix Force foi fundada em 2015 e, de lá pra cá, desenvolve soluções utilizando tecnologias de visão computacional, inteligência artificial e machine learning. Nossos projetos e soluções transformam dados e imagens em informações valiosas para os clientes através da interpretação automática de imagens e vídeos.</p>
<p>O nosso time de Pix Whizz é formado de profissionais especializados em diversas áreas de conhecimento. A palavra-chave para trabalhar conosco é INOVAÇÃO. Se você se considera um profissional inovador, então você tem tudo para se tornar parte da nossa equipe. </p>
<p></p>
<p>O jeito Pix de ser:</p>
<ul>
<li>Quando um Pix Whizz pede ajuda, fazemos o possível para colaborar;</li>
<li>Sempre falamos quando algo não está bom e sugerimos melhorias;</li>
<li>Somos inovadores nos projetos, nas ideias e nos processos. Não deixamos de fazer por medo de errar;</li>
<li>Estamos em constante desenvolvimento e evolução, respeitamos as diferenças e o tempo de cada um;</li>
<li>Somos protagonistas da história da Pix, juntos seremos os melhores do mundo!</li>
</ul>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Responsável por estabelecer a arquitetura, o desenho e a diretrizes relacionadas a construção dos produtos;</li>
<li>Elaborar a documentação técnica de especificação de requisitos de softwares;</li>
<li>Definir padrões de teste (unitário, etc.) e de qualidade (código, entregáveis, etc.);</li>
<li>Desempenhar o papel de tradução dos requisitos do produto em especificações de engenharia de software.</li>
</ul>
## Pix Force:
<p>A Pix Force desenvolve soluções utilizando tecnologias de visão computacional, inteligência artificial e machine learning. Fornecemos informações valiosas para os nossos clientes através de aquisição e interpretação automática de imagens e vídeos.</p>
<p>Startup #1 em visão computacional no Brasil e multipremiada.</p><a href='https://coodesh.com/empresas/pix-force'>Veja mais no site</a>
## Habilidades:
- DevOps
- Kubernetes
- CI/CD
- API
- AWS
- Docker
- Banco de dados relacionais (SQL)
- Microservices
- SOLID
## Local:
100% Remoto
## Requisitos:
- Sólida experiência nas tecnologias citadas;
- Formação superior;
- SOLID;
- Microsserviços;
- Arquitetura cloud: AWS /Azure;
- Bancos de dados: SQL e NoSQL;
- Servidores;
- Docker;
- Kubernetes;
- DevOps;
- CI/CD;
- API;
- Software stack;
- SaaS.
## Diferenciais:
- Gestão de equipes;
- Mestrado ou doutorado.
## Benefícios:
- Horários Flexíveis;
- Stock Options.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Software Architect na Pix Force](https://coodesh.com/vagas/arquiteto-de-software-200352198?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Categoria
Full-Stack
|
process
|
software architect na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a pix force está em busca de software architect para compor seu time a pix force foi fundada em e de lá pra cá desenvolve soluções utilizando tecnologias de visão computacional inteligência artificial e machine learning nossos projetos e soluções transformam dados e imagens em informações valiosas para os clientes através da interpretação automática de imagens e vídeos o nosso time de pix whizz é formado de profissionais especializados em diversas áreas de conhecimento a palavra chave para trabalhar conosco é inovação se você se considera um profissional inovador então você tem tudo para se tornar parte da nossa equipe nbsp o jeito pix de ser quando um pix whizz pede ajuda fazemos o possível para colaborar sempre falamos quando algo não está bom e sugerimos melhorias somos inovadores nos projetos nas ideias e nos processos não deixamos de fazer por medo de errar estamos em constante desenvolvimento e evolução respeitamos as diferenças e o tempo de cada um somos protagonistas da história da pix juntos seremos os melhores do mundo responsabilidades responsável por estabelecer a arquitetura o desenho e a diretrizes relacionadas a construção dos produtos elaborar a documentação técnica de especificação de requisitos de softwares definir padrões de teste unitário etc e de qualidade código entregáveis etc desempenhar o papel de tradução dos requisitos do produto em especificações de engenharia de software pix force a pix force desenvolve soluções utilizando tecnologias de visão computacional inteligência artificial e machine learning fornecemos informações valiosas para os nossos clientes através de aquisição e interpretação automática de imagens e vídeos startup em visão 
computacional no brasil e multipremiada habilidades devops kubernetes ci cd api aws docker banco de dados relacionais sql microservices solid local remoto requisitos sólida experiência nas tecnologias citadas formação superior solid microsserviços arquitetura cloud aws azure bancos de dados sql e nosql servidores docker kubernetes devops ci cd api software stack saas diferenciais gestão de equipes mestrado ou doutorado benefícios horários flexíveis stock options como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto categoria full stack
| 1
|
162,882
| 6,177,789,677
|
IssuesEvent
|
2017-07-02 05:15:52
|
kabrabom/web
|
https://api.github.com/repos/kabrabom/web
|
closed
|
Ver minha página
|
enhancement Low Priority
|
Seria interessante dentro do perfil do usuário ter um botão onde o profissional pudesse visualizar a sua página para dar melhor visibilidade a ele do que os usuário estão vendo dele.
|
1.0
|
View my page - It would be interesting to have a button inside the user's profile where the professional could preview their own page, giving them better visibility into what users are seeing about them.
|
non_process
|
view my page it would be interesting to have a button inside the user s profile where the professional could preview their own page giving them better visibility into what users are seeing about them
| 0
|
546,881
| 16,020,848,435
|
IssuesEvent
|
2021-04-20 22:57:23
|
sopra-fs21-group-16/mth-server
|
https://api.github.com/repos/sopra-fs21-group-16/mth-server
|
closed
|
Implement PUT endpoint for the profile update
|
high priority task
|
#12, #10 and #8
UserService Implementation
Tests and Adapt Mappers (depends on #66)
|
1.0
|
Implement PUT endpoint for the profile update - #12, #10 and #8
UserService Implementation
Tests and Adapt Mappers (depends on #66)
|
non_process
|
implement put endpoint for the profile update and userservice implementation tests and adapt mappers depends on
| 0
|
99,581
| 30,502,768,557
|
IssuesEvent
|
2023-07-18 14:53:54
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[6.0] CI failure: System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(cultureName: "prs", shape: NativeNational) expected Context
|
area-System.Globalization blocking-clean-ci Known Build Error
|
### Error Blob
```json
{
"ErrorMessage": "/_/src/libraries/System.Globalization/tests/NumberFormatInfo/NumberFormatInfoTests.cs(110,0): at System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(String cultureName, DigitShapes shape)",
"BuildRetry": false,
"ErrorPattern": "",
"ExcludeConsoleLog": true
}
```
### Reproduction Steps
- Affected 6.0 PR: https://github.com/dotnet/runtime/pull/86295
- Build: https://dev.azure.com/dnceng-public/public/_build/results?buildId=274862&view=results
- Queue: Libraries Test Run release coreclr windows arm64 Release
- Run: https://dev.azure.com/dnceng-public/public/_build/results?buildId=274862&view=logs&j=a9add232-41fe-5d00-d3bc-624abd6e7259&t=ad45aa27-8e05-5f14-6180-a4bed9aa8357
- Log file: https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-release-60-staging-74b27a8ab13348a28f/System.Globalization.Nls.Tests/1/console.bd393e04.log?helixlogtype=result
- Callstack:
```
Discovering: System.Globalization.Nls.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Globalization.Nls.Tests (found 444 of 449 test cases)
Starting: System.Globalization.Nls.Tests (parallel test collections = on, max threads = 2)
System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(cultureName: "prs", shape: NativeNational) [FAIL]
Assert.Equal() Failure
Expected: Context
Actual: NativeNational
Stack Trace:
/_/src/libraries/System.Globalization/tests/NumberFormatInfo/NumberFormatInfoTests.cs(110,0): at System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(String cultureName, DigitShapes shape)
System.Globalization.Tests.NumberFormatInfoPercentNegativePattern.PercentNegativePattern_Get_ReturnsExpected_ICU [SKIP]
Condition(s) not met: "IsIcuGlobalization"
System.Globalization.Tests.NumberFormatInfoPercentPositivePattern.PercentPositivePattern_Get_ReturnsExpected_ICU [SKIP]
Condition(s) not met: "IsIcuGlobalization"
System.Globalization.Tests.NumberFormatInfoCurrencyGroupSizes.CurrencyGroupSizes_Get_ReturnsExpected(format: NumberFormatInfo { CurrencyDecimalDigits = 2, CurrencyDecimalSeparator = ".", CurrencyGroupSeparator = ",", CurrencyGroupSizes = [3], CurrencyNegativePattern = 1, ... }, expected: [3, 2]) [FAIL]
Assert.Equal() Failure
Expected: Int32[] [3, 2]
Actual: Int32[] [3]
Stack Trace:
/_/src/libraries/System.Globalization/tests/NumberFormatInfo/NumberFormatInfoCurrencyGroupSizes.cs(26,0): at System.Globalization.Tests.NumberFormatInfoCurrencyGroupSizes.CurrencyGroupSizes_Get_ReturnsExpected(NumberFormatInfo format, Int32[] expected)
System.Globalization.Tests.DateTimeFormatInfoLongTimePattern.LongTimePattern_CheckReadingTimeFormatWithSingleQuotes_ICU [SKIP]
Condition(s) not met: "IsIcuGlobalization"
System.Globalization.Tests.CultureInfoConstructor.Ctor_String(name: "prs", expectedNames: ["prs"], expectToThrowOnBrowser: True) [FAIL]
Assert.Contains() Failure
Not found: fa
In value: String[] ["prs"]
Stack Trace:
/_/src/libraries/System.Globalization/tests/CultureInfo/CultureInfoCtor.cs(394,0): at System.Globalization.Tests.CultureInfoConstructor.Ctor_String(String name, String[] expectedNames, Boolean expectToThrowOnBrowser)
System.Globalization.Tests.CultureInfoConstructor.Ctor_String(name: "prs-AF", expectedNames: ["prs-AF"], expectToThrowOnBrowser: True) [FAIL]
Assert.Contains() Failure
Not found: fa-AF
In value: String[] ["prs-AF"]
Stack Trace:
/_/src/libraries/System.Globalization/tests/CultureInfo/CultureInfoCtor.cs(394,0): at System.Globalization.Tests.CultureInfoConstructor.Ctor_String(String name, String[] expectedNames, Boolean expectToThrowOnBrowser)
Finished: System.Globalization.Nls.Tests
=== TEST EXECUTION SUMMARY ===
System.Globalization.Nls.Tests Total: 2561, Errors: 0, Failed: 4, Skipped: 3, Time: 5.414s
```
<!--Known issue error report start -->
### Report
#### Summary
|24-Hour Hit Count|7-Day Hit Count|1-Month Count|
|---|---|---|
|0|0|0|
<!--Known issue error report end -->
|
1.0
|
[6.0] CI failure: System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(cultureName: "prs", shape: NativeNational) expected Context - ### Error Blob
```json
{
"ErrorMessage": "/_/src/libraries/System.Globalization/tests/NumberFormatInfo/NumberFormatInfoTests.cs(110,0): at System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(String cultureName, DigitShapes shape)",
"BuildRetry": false,
"ErrorPattern": "",
"ExcludeConsoleLog": true
}
```
### Reproduction Steps
- Affected 6.0 PR: https://github.com/dotnet/runtime/pull/86295
- Build: https://dev.azure.com/dnceng-public/public/_build/results?buildId=274862&view=results
- Queue: Libraries Test Run release coreclr windows arm64 Release
- Run: https://dev.azure.com/dnceng-public/public/_build/results?buildId=274862&view=logs&j=a9add232-41fe-5d00-d3bc-624abd6e7259&t=ad45aa27-8e05-5f14-6180-a4bed9aa8357
- Log file: https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-release-60-staging-74b27a8ab13348a28f/System.Globalization.Nls.Tests/1/console.bd393e04.log?helixlogtype=result
- Callstack:
```
Discovering: System.Globalization.Nls.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Globalization.Nls.Tests (found 444 of 449 test cases)
Starting: System.Globalization.Nls.Tests (parallel test collections = on, max threads = 2)
System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(cultureName: "prs", shape: NativeNational) [FAIL]
Assert.Equal() Failure
Expected: Context
Actual: NativeNational
Stack Trace:
/_/src/libraries/System.Globalization/tests/NumberFormatInfo/NumberFormatInfoTests.cs(110,0): at System.Globalization.Tests.NumberFormatInfoMiscTests.DigitSubstitutionListTest(String cultureName, DigitShapes shape)
System.Globalization.Tests.NumberFormatInfoPercentNegativePattern.PercentNegativePattern_Get_ReturnsExpected_ICU [SKIP]
Condition(s) not met: "IsIcuGlobalization"
System.Globalization.Tests.NumberFormatInfoPercentPositivePattern.PercentPositivePattern_Get_ReturnsExpected_ICU [SKIP]
Condition(s) not met: "IsIcuGlobalization"
System.Globalization.Tests.NumberFormatInfoCurrencyGroupSizes.CurrencyGroupSizes_Get_ReturnsExpected(format: NumberFormatInfo { CurrencyDecimalDigits = 2, CurrencyDecimalSeparator = ".", CurrencyGroupSeparator = ",", CurrencyGroupSizes = [3], CurrencyNegativePattern = 1, ... }, expected: [3, 2]) [FAIL]
Assert.Equal() Failure
Expected: Int32[] [3, 2]
Actual: Int32[] [3]
Stack Trace:
/_/src/libraries/System.Globalization/tests/NumberFormatInfo/NumberFormatInfoCurrencyGroupSizes.cs(26,0): at System.Globalization.Tests.NumberFormatInfoCurrencyGroupSizes.CurrencyGroupSizes_Get_ReturnsExpected(NumberFormatInfo format, Int32[] expected)
System.Globalization.Tests.DateTimeFormatInfoLongTimePattern.LongTimePattern_CheckReadingTimeFormatWithSingleQuotes_ICU [SKIP]
Condition(s) not met: "IsIcuGlobalization"
System.Globalization.Tests.CultureInfoConstructor.Ctor_String(name: "prs", expectedNames: ["prs"], expectToThrowOnBrowser: True) [FAIL]
Assert.Contains() Failure
Not found: fa
In value: String[] ["prs"]
Stack Trace:
/_/src/libraries/System.Globalization/tests/CultureInfo/CultureInfoCtor.cs(394,0): at System.Globalization.Tests.CultureInfoConstructor.Ctor_String(String name, String[] expectedNames, Boolean expectToThrowOnBrowser)
System.Globalization.Tests.CultureInfoConstructor.Ctor_String(name: "prs-AF", expectedNames: ["prs-AF"], expectToThrowOnBrowser: True) [FAIL]
Assert.Contains() Failure
Not found: fa-AF
In value: String[] ["prs-AF"]
Stack Trace:
/_/src/libraries/System.Globalization/tests/CultureInfo/CultureInfoCtor.cs(394,0): at System.Globalization.Tests.CultureInfoConstructor.Ctor_String(String name, String[] expectedNames, Boolean expectToThrowOnBrowser)
Finished: System.Globalization.Nls.Tests
=== TEST EXECUTION SUMMARY ===
System.Globalization.Nls.Tests Total: 2561, Errors: 0, Failed: 4, Skipped: 3, Time: 5.414s
```
<!--Known issue error report start -->
### Report
#### Summary
|24-Hour Hit Count|7-Day Hit Count|1-Month Count|
|---|---|---|
|0|0|0|
<!--Known issue error report end -->
|
non_process
|
ci failure system globalization tests numberformatinfomisctests digitsubstitutionlisttest culturename prs shape nativenational expected context error blob json errormessage src libraries system globalization tests numberformatinfo numberformatinfotests cs at system globalization tests numberformatinfomisctests digitsubstitutionlisttest string culturename digitshapes shape buildretry false errorpattern excludeconsolelog true reproduction steps affected pr build queue libraries test run release coreclr windows release run log file callstack discovering system globalization nls tests method display classandmethod method display options none discovered system globalization nls tests found of test cases starting system globalization nls tests parallel test collections on max threads system globalization tests numberformatinfomisctests digitsubstitutionlisttest culturename prs shape nativenational assert equal failure expected context actual nativenational stack trace src libraries system globalization tests numberformatinfo numberformatinfotests cs at system globalization tests numberformatinfomisctests digitsubstitutionlisttest string culturename digitshapes shape system globalization tests numberformatinfopercentnegativepattern percentnegativepattern get returnsexpected icu condition s not met isicuglobalization system globalization tests numberformatinfopercentpositivepattern percentpositivepattern get returnsexpected icu condition s not met isicuglobalization system globalization tests numberformatinfocurrencygroupsizes currencygroupsizes get returnsexpected format numberformatinfo currencydecimaldigits currencydecimalseparator currencygroupseparator currencygroupsizes currencynegativepattern expected assert equal failure expected actual stack trace src libraries system globalization tests numberformatinfo numberformatinfocurrencygroupsizes cs at system globalization tests numberformatinfocurrencygroupsizes currencygroupsizes get returnsexpected numberformatinfo 
format expected system globalization tests datetimeformatinfolongtimepattern longtimepattern checkreadingtimeformatwithsinglequotes icu condition s not met isicuglobalization system globalization tests cultureinfoconstructor ctor string name prs expectednames expecttothrowonbrowser true assert contains failure not found fa in value string stack trace src libraries system globalization tests cultureinfo cultureinfoctor cs at system globalization tests cultureinfoconstructor ctor string string name string expectednames boolean expecttothrowonbrowser system globalization tests cultureinfoconstructor ctor string name prs af expectednames expecttothrowonbrowser true assert contains failure not found fa af in value string stack trace src libraries system globalization tests cultureinfo cultureinfoctor cs at system globalization tests cultureinfoconstructor ctor string string name string expectednames boolean expecttothrowonbrowser finished system globalization nls tests test execution summary system globalization nls tests total errors failed skipped time report summary hour hit count day hit count month count
| 0
|
1,086
| 3,548,591,325
|
IssuesEvent
|
2016-01-20 15:02:56
|
LOVDnl/LOVD3
|
https://api.github.com/repos/LOVDnl/LOVD3
|
opened
|
Check numeric id's
|
import process
|
The {{id}} field in the Individuals section needs to be numeric, but the {{individualid}} in the Individuals_To_Diseases and Screenings sections is not checked this way. Also, it doesn't seem to complain if the individualid given is found nowhere (not in the file, not in the database).
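A minimal sketch of the two missing checks described above (the `individualid` field name comes from the issue; the record shape and the id sets are illustrative assumptions, not LOVD3 code):

```javascript
// Hypothetical sketch: individualid must be numeric, and must resolve either
// to an id present in the import file or to one already in the database.
function checkIndividualIds(records, fileIds, dbIds) {
  const errors = [];
  for (const rec of records) {
    const id = String(rec.individualid);
    if (!/^\d+$/.test(id)) {
      errors.push(`individualid "${id}" is not numeric`);
    } else if (!fileIds.has(Number(id)) && !dbIds.has(Number(id))) {
      errors.push(`individualid "${id}" found nowhere (not in the file, not in the database)`);
    }
  }
  return errors;
}
```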
|
1.0
|
Check numeric id's - The {{id}} field in the Individuals section needs to be numeric, but the {{individualid}} in the Individuals_To_Diseases and Screenings sections is not checked this way. Also, it doesn't seem to complain if the individualid given is found nowhere (not in the file, not in the database).
|
process
|
check numeric id s the id field in the individuals section needs to be numeric but the individualid in the individuals to diseases and screenings sections is not checked this way also it doesn t seem to complain if the individualid given is found nowhere not in the file not in the database
| 1
|
3,078
| 6,088,701,354
|
IssuesEvent
|
2017-06-19 00:19:34
|
JustBru00/RenamePlugin
|
https://api.github.com/repos/JustBru00/RenamePlugin
|
closed
|
Error report for 1.12
|
Processing Question
|
I am trying to add this plugin to my 1.12 server but when I try to use it I get an internal error
|
1.0
|
Error report for 1.12 - I am trying to add this plugin to my 1.12 server but when I try to use it I get an internal error
|
process
|
error report for i am trying to add this plugin to my server but when i try to use it i get an internal error
| 1
|
18,570
| 24,556,073,227
|
IssuesEvent
|
2022-10-12 15:58:10
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] App is crashing in the following scenario
|
Bug Blocker P0 Android Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Install the mobile app
2. Sign in / Sign up
3. Enroll to the study
4. Kill the app after navigating to study activities screen
5. Now, Go back to SB
6. Select 'Enforce e-consent flow again for enrolled participants' in the SB and Publish the study
7. Open the mobile app and submit the updated consent flow and Verify
**AR:** App is crashing in the following scenario
**ER:** App should not crash
|
3.0
|
[Android] App is crashing in the following scenario - **Steps:**
1. Install the mobile app
2. Sign in / Sign up
3. Enroll to the study
4. Kill the app after navigating to study activities screen
5. Now, Go back to SB
6. Select 'Enforce e-consent flow again for enrolled participants' in the SB and Publish the study
7. Open the mobile app and submit the updated consent flow and Verify
**AR:** App is crashing in the following scenario
**ER:** App should not crash
|
process
|
app is crashing in the following scenario steps install the mobile app sign in sign up enroll to the study kill the app after navigating to study activities screen now go back to sb select enforce e consent flow again for enrolled participants in the sb and publish the study open the mobile app and submit the updated consent flow and verify ar app is crashing in the following scenario er app should not crash
| 1
|
221,144
| 24,590,722,849
|
IssuesEvent
|
2022-10-14 01:45:43
|
andygonzalez2010/store
|
https://api.github.com/repos/andygonzalez2010/store
|
opened
|
CVE-2022-37601 (High) detected in loader-utils-0.2.17.tgz, loader-utils-1.2.3.tgz
|
security vulnerability
|
## CVE-2022-37601 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>loader-utils-0.2.17.tgz</b>, <b>loader-utils-1.2.3.tgz</b></p></summary>
<p>
<details><summary><b>loader-utils-0.2.17.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-webpack-plugin/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- sourcemap-istanbul-instrumenter-loader-0.2.0.tgz (Root Library)
- :x: **loader-utils-0.2.17.tgz** (Vulnerable Library)
</details>
<details><summary><b>loader-utils-1.2.3.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- cache-loader-2.0.1.tgz (Root Library)
- :x: **loader-utils-1.2.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/3f6d614029f4d6cfdddfcef8468949cb7822503c">3f6d614029f4d6cfdddfcef8468949cb7822503c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37601>CVE-2022-37601</a></p>
</p>
</details>
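As a hedged illustration of the vulnerability class (this is not the actual loader-utils code; the parser below is a deliberately naive sketch), a query parser that performs nested assignment from attacker-chosen dotted key names can be walked into `Object.prototype` via a `__proto__` segment:

```javascript
// Naive query parser (illustrative only): nested assignment driven by the
// attacker-chosen key name, with no guard against "__proto__",
// "constructor" or "prototype".
function naiveParseQuery(query) {
  const result = {};
  for (const pair of query.replace(/^\?/, "").split("&")) {
    const [rawName, rawValue] = pair.split("=");
    const parts = decodeURIComponent(rawName).split(".");
    let obj = result;
    for (const part of parts.slice(0, -1)) {
      obj[part] = obj[part] || {}; // "__proto__" walks into Object.prototype
      obj = obj[part];
    }
    obj[parts[parts.length - 1]] = rawValue ? decodeURIComponent(rawValue) : true;
  }
  return result;
}

// naiveParseQuery("?__proto__.polluted=yes") leaves ({}).polluted === "yes".
// A hardened parser rejects or skips the dangerous key names instead.
```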
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution: loader-utils - v2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-37601 (High) detected in loader-utils-0.2.17.tgz, loader-utils-1.2.3.tgz - ## CVE-2022-37601 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>loader-utils-0.2.17.tgz</b>, <b>loader-utils-1.2.3.tgz</b></p></summary>
<p>
<details><summary><b>loader-utils-0.2.17.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-webpack-plugin/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- sourcemap-istanbul-instrumenter-loader-0.2.0.tgz (Root Library)
- :x: **loader-utils-0.2.17.tgz** (Vulnerable Library)
</details>
<details><summary><b>loader-utils-1.2.3.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- cache-loader-2.0.1.tgz (Root Library)
- :x: **loader-utils-1.2.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/3f6d614029f4d6cfdddfcef8468949cb7822503c">3f6d614029f4d6cfdddfcef8468949cb7822503c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37601>CVE-2022-37601</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution: loader-utils - v2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in loader utils tgz loader utils tgz cve high severity vulnerability vulnerable libraries loader utils tgz loader utils tgz loader utils tgz utils for webpack loaders library home page a href path to dependency file package json path to vulnerable library node modules html webpack plugin node modules loader utils package json dependency hierarchy sourcemap istanbul instrumenter loader tgz root library x loader utils tgz vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file package json path to vulnerable library node modules loader utils package json dependency hierarchy cache loader tgz root library x loader utils tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability in function parsequery in parsequery js in webpack loader utils via the name variable in parsequery js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils step up your open source security game with mend
| 0
|
17,849
| 23,787,667,550
|
IssuesEvent
|
2022-09-02 11:44:31
|
galasa-dev/projectmanagement
|
https://api.github.com/repos/galasa-dev/projectmanagement
|
closed
|
Roll out GitHubIssue annotation over failing RSE integration tests
|
testing Conversion Process
|
- Add GitHubIssue (Issue 1060) annotation to failing RSE tests
- Update galasactl to show Failed With Defects result in a) console log b) test outputs (YAML, JSON and JUnit)
- Ensure these don't break: a) Jenkins pipeline b) Slack output
|
1.0
|
Roll out GitHubIssue annotation over failing RSE integration tests - - Add GitHubIssue (Issue 1060) annotation to failing RSE tests
- Update galasactl to show Failed With Defects result in a) console log b) test outputs (YAML, JSON and JUnit)
- Ensure these don't break: a) Jenkins pipeline b) Slack output
|
process
|
roll out githubissue annotation over failing rse integration tests add githubissue issue annotation to failing rse tests update galasactl to show failed with defects result in a console log b test outputs yaml json and junit ensure these don t break a jenkins pipeline b slack output
| 1
|
2,479
| 5,253,801,308
|
IssuesEvent
|
2017-02-02 10:45:50
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Improve edge HA node selection
|
priority_normal process_wontfix type_enhancement
|
In case of an HA scenario the Edge doesn't take into account nodes which are down (marked offline) or in maintenance (https://github.com/openvstorage/volumedriver/issues/236).
The framework should set the distance to the max cost 2**32-1 if a node is set to offline.
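The rule in the last sentence can be sketched as follows; only the max-cost constant (2**32-1) comes from the issue, while the node shape and status values are assumptions for illustration:

```javascript
// Offline nodes are pushed to the maximum 32-bit cost so that any
// shortest-distance selection only falls back to them as a last resort.
const MAX_COST = 2 ** 32 - 1; // 4294967295

function effectiveDistance(node) {
  return node.status === "offline" ? MAX_COST : node.distance;
}

// Pick the node with the lowest effective distance.
function selectEdgeNode(nodes) {
  return nodes.reduce((best, n) =>
    effectiveDistance(n) < effectiveDistance(best) ? n : best);
}
```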
|
1.0
|
Improve edge HA node selection - In case of an HA scenario the Edge doesn't take into account nodes which are down (marked offline) or in maintenance (https://github.com/openvstorage/volumedriver/issues/236).
The framework should set the distance to the max cost 2**32-1 if a node is set to offline.
|
process
|
improve edge ha node selection in case of a ha scenario the edge doesn t take into account node which are down marked offline or in maintenance the framework should set the distance to max cost if a node is set to offline
| 1
|
141,974
| 11,449,922,038
|
IssuesEvent
|
2020-02-06 08:30:26
|
WorldHealthOrganization/herams-backend
|
https://api.github.com/repos/WorldHealthOrganization/herams-backend
|
closed
|
Dashboard maps on staging are not working
|
test on staging
|
There seems to be an issue with the dashboard maps on staging. The dashboards themselves work, but the maps are either blank or only show a few points:
https://herams-staging.org/project/30

https://herams-staging.org/project/24

|
1.0
|
Dashboard maps on staging are not working - There seems to be an issue with the dashboard maps on staging. The dashboards themselves work, but the maps are either blank or only show a few points:
https://herams-staging.org/project/30

https://herams-staging.org/project/24

|
non_process
|
dashboard maps on staging are not working there seems to be an issue with the dashboard maps on staging the dashboards themselves work but maps are either blank or only show a few points
| 0
|
740,087
| 25,736,178,094
|
IssuesEvent
|
2022-12-08 00:58:55
|
ImranR98/Obtainium
|
https://api.github.com/repos/ImranR98/Obtainium
|
closed
|
(since last release) Error checking updates: late initialization error
|
bug high priority
|
Ever since the last Obtainium release, I'm periodically getting this notification:
### Notification title:
```
errorCheckingUpdates
```
### Notification body:
```
LateInitializationError: Field '_locale@(redacted by me: 9 digit int)' has not been initialized
```
### Notification channel:
It's coming from the "Error Checking for Updates" channel
-----
I'm using Graphene OS, Android 13. Let me know if you need any more info!
|
1.0
|
(since last release) Error checking updates: late initialization error - Ever since the last Obtainium release, I'm periodically getting this notification:
### Notification title:
```
errorCheckingUpdates
```
### Notification body:
```
LateInitializationError: Field '_locale@(redacted by me: 9 digit int)' has not been initialized
```
### Notification channel:
It's coming from the "Error Checking for Updates" channel
-----
I'm using Graphene OS, Android 13. Let me know if you need any more info!
|
non_process
|
since last release error checking updates late initialization error ever since the last obtainium release i m periodically getting this notification notification title errorcheckingupdates notification body lateinitializationerror field locale redacted by me digit int has not been initialized notification channel it s coming from the error checking for updates channel i m using graphene os android let me know if you need any more info
| 0
|
20,000
| 26,474,625,883
|
IssuesEvent
|
2023-01-17 10:10:56
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
closed
|
Bump Node to 18.12.1 LTS
|
enhancement P2 process
|
### Problem
Node js released the new 18 LTS version and we're behind on 16
### Solution
Bump Node to 18.12.1 LTS
Ensure the following are updated
- local dev env
- docker image
- Helm chart
- github actions
### Alternatives
_No response_
|
1.0
|
Bump Node to 18.12.1 LTS - ### Problem
Node js released the new 18 LTS version and we're behind on 16
### Solution
Bump Node to 18.12.1 LTS
Ensure the following are updated
- local dev env
- docker image
- Helm chart
- github actions
### Alternatives
_No response_
|
process
|
bump node to lts problem node js released the new lts version and we re behind on solution bump node to lts ensure the following are updated local dev env docker image helm chart github actions alternatives no response
| 1
|
21,655
| 30,105,567,751
|
IssuesEvent
|
2023-06-30 00:40:25
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Híbrido / Belo Horizonte, Minas Gerais, Brazil] QA Pleno (Híbrido - BH) at Coodesh
|
SALVADOR DESENVOLVIMENTO DE SOFTWARE BANCO DE DADOS MYSQL SCRUM PLENO STARTUP REQUISITOS REMOTO PROCESSOS GITHUB KANBAN SEGURANÇA UMA QUALIDADE R DOCUMENTAÇÃO PADRÕ QA LÓGICA DE PROGRAMAÇÃO SAAS HIBRIDO ALOCADO Stale
|
## Job description:
This is a job from a partner of the Coodesh platform; by applying you will get full access to the information about the company and benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/jobs/qa-pleno-hibrido-bh-162436869?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>Onfly is looking for a mid-level QA (QA Pleno) to join its team!</p>
<p>Onfly is a startup focused on helping companies travel better! Democratizing the management of travel, reimbursements, and discounts on flights, hotels and car rentals in a complete, simple experience is the focus of our business. We are one of the best places to work in Brazil (we are even a Great Place to Work company)!</p>
<p>To reach our goal of growing exponentially, we look for people who are passionate about technology, want to be on a high-performance team and want to grow a lot.</p>
<p>For the mid-level QA position, we are looking for a professional who will be responsible for ensuring that our entire software development process happens to high quality standards. We are also looking for someone dynamic, who knows how to work on a team and who connects with our culture and values.</p>
<p>RESPONSIBILITIES AND DUTIES:</p>
<ul>
<li>Analyze every aspect of how the software or application is used;</li>
<li>Follow every stage of the development process and document the quality level at each one;</li>
<li>Establish different quality standards for each product;</li>
<li>Identify the products' functional and non-functional requirements;</li>
<li>Ensure that everyone on our Tech team is aware of the processes and procedures needed to guarantee software quality;</li>
<li>Write technical documentation, or review what the developers wrote, making sure that what was implemented in the software product is clear.</li>
</ul>
## Onfly:
<p>Onfly offers an "all-in-one" solution with travel management (flights, hotels and cars), expenses and a corporate card in a single digital platform. With intensive use of technology and a good dose of human warmth, we take companies' travel management to the highest level, bringing comfort and productivity to employees, and security and savings to companies. Simplify it, go with Onfly!</p>
## Skills:
- SaaS
- MySQL
- SCRUM
- Test planning
- Requirements analysis
## Location:
Belo Horizonte, Minas Gerais, Brazil
## Requirements:
- Solid previous experience as a QA;
- Experience with documentation;
- Programming logic;
- Database knowledge (MySQL);
- Experience with agile development methodologies (Scrum or Kanban).
## Nice to have:
- Business rules of a SaaS product (desirable).
## Benefits:
- Hybrid work model (2 on-site days per week, the rest remote);
- Flexible working hours;
- Meal voucher on a flexible card, R$28 (without the 6% deduction);
- Transport voucher on a flexible card (without the 6% deduction);
- Unimed health plan, with no copay and no monthly fee for the Onflyer (on us :D);
- Dental plan (discount on the monthly fee);
- Pharmacy discount;
- No dress code;
- Online psychotherapy benefit (Zenklub and Vittude);
- We pay half of your gym membership (TotalPass);
- 50% scholarship for language studies;
- Discounts on undergraduate and graduate programs (UNA and UNIBH);
- Clube Certo discount package with no monthly fee;
- Life insurance.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [QA Pleno (Híbrido - BH) at Onfly](https://coodesh.com/jobs/qa-pleno-hibrido-bh-162436869?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive every interaction of the process there. Use the **Request Feedback** option between one stage and the next in the job you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Alocação
Alocado
#### Categoria
Testes/Q.A
|
1.0
|
|
process
|
| 1
|
240,565
| 26,256,390,239
|
IssuesEvent
|
2023-01-06 01:22:49
|
talmax1124/knighttimesnews-restore
|
https://api.github.com/repos/talmax1124/knighttimesnews-restore
|
closed
|
CVE-2021-33502 (High) detected in normalize-url-2.0.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>normalize-url-2.0.1.tgz</b></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- netlify-plugin-image-optim-0.4.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-wrapper-4.1.0.tgz
- download-7.1.0.tgz
- got-8.3.2.tgz
- cacheable-request-2.1.4.tgz
- :x: **normalize-url-2.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/talmax1124/knighttimesnews-restore/commit/96281733d81e089a7ef9fce868aea9b88a9fd3e7">96281733d81e089a7ef9fce868aea9b88a9fd3e7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution: normalize-url - 4.5.1,5.3.1,6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
|
non_process
|
| 0
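The advisory above gives three vulnerable ranges for normalize-url (below 4.5.1, 5.x below 5.3.1, and 6.x below 6.0.1). As a minimal sketch of how a scanner decides whether a pinned version is affected, here is a hypothetical Python helper (illustrative only, not part of npm or Mend tooling):

```python
# Hypothetical helper: decide whether a given normalize-url version falls
# inside the ranges CVE-2021-33502 affects: <4.5.1, 5.x <5.3.1, 6.x <6.0.1.

def parse(version):
    # Split "4.5.1" into an integer tuple (4, 5, 1) for tuple comparison.
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version):
    v = parse(version)
    if v[0] <= 4:
        return v < parse("4.5.1")
    if v[0] == 5:
        return v < parse("5.3.1")
    if v[0] == 6:
        return v < parse("6.0.1")
    return False  # 7.x and later were never affected

print(is_vulnerable("2.0.1"))  # the version flagged above -> True
print(is_vulnerable("4.5.1"))  # first patched 4.x release -> False
```

This mirrors why the 2.0.1 transitive dependency is flagged: it sits far below the first patched release in its range.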
|
20,938
| 3,438,493,048
|
IssuesEvent
|
2015-12-14 00:36:07
|
eleybourn/Book-Catalogue
|
https://api.github.com/repos/eleybourn/Book-Catalogue
|
opened
|
Using day name (EEE) in system default date causes crashes...
|
Defect
|
STACK_TRACE=java.lang.IllegalArgumentException: Bad pattern character 'E' in E, d MMM yyyy
at libcore.icu.ICU.getDateFormatOrder(ICU.java:165)
at android.text.format.DateFormat.getDateFormatOrder(DateFormat.java:384)
at com.eleybourn.bookcatalogue.dialogs.PartialDatePicker.reorderPickers(PartialDatePicker.java:488)
at com.eleybourn.bookcatalogue.dialogs.PartialDatePicker.<init>(PartialDatePicker.java:119)
at com.eleybourn.bookcatalogue.dialogs.PartialDatePicker.<init>(PartialDatePicker.java:93)
at com.eleybourn.bookcatalogue.dialogs.PartialDatePickerFragment.onCreateDialog(PartialDatePickerFragment.java:95)
at android.support.v4.app.DialogFragment.getLayoutInflater(DialogFragment.java:295)
at android.support.v4.app.FragmentManagerImpl.moveToState(FragmentManager.java:911)
at android.support.v4.app.FragmentManagerImpl.moveToState(FragmentManager.java:1088)
at android.support.v4.app.BackStackRecord.run(BackStackRecord.java:682)
at android.support.v4.app.FragmentManagerImpl.execPendingActions(FragmentManager.java:1444)
at android.support.v4.app.FragmentManagerImpl$1.run(FragmentManager.java:429)
at android.os.Handler.handleCallback(Handler.java:808)
at android.os.Handler.dispatchMessage(Handler.java:103)
at android.os.Looper.loop(Looper.java:193)
at android.app.ActivityThread.main(ActivityThread.java:5312)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:515)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:825)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:641)
at dalvik.system.NativeStart.main(Native Method)
|
1.0
|
|
non_process
|
| 0
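The crash above occurs because `ICU.getDateFormatOrder` only understands the d, M, and y pattern letters when deriving field order, and rejects the day-of-week field 'E' in "E, d MMM yyyy". A sketch of the failure and one possible workaround, filtering unsupported field letters before deriving the order (the filtering approach is an assumption here, not necessarily how the app resolved it, and the sketch is Python rather than the app's Java):

```python
# Illustrative sketch: a getDateFormatOrder-style parser accepts only the
# d, M, y pattern letters; any other letter (such as 'E' in "E, d MMM yyyy")
# raises the error seen in the stack trace above.

SUPPORTED = set("dMy")

def date_format_order(pattern):
    # Return the order of day/month/year fields, e.g. "dMy" for "d MMM yyyy".
    order = []
    for ch in pattern:
        if ch in SUPPORTED and ch not in order:
            order.append(ch)
        elif ch.isalpha() and ch not in SUPPORTED:
            raise ValueError(f"Bad pattern character '{ch}' in {pattern}")
    return "".join(order)

def safe_order(pattern):
    # Possible workaround: drop unsupported field letters (like 'E') first.
    cleaned = "".join(c for c in pattern if not c.isalpha() or c in SUPPORTED)
    return date_format_order(cleaned)

print(safe_order("E, d MMM yyyy"))  # -> "dMy" instead of crashing
```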
|
21,359
| 29,190,163,669
|
IssuesEvent
|
2023-05-19 19:08:20
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] [Bug] Missing schemas for cumulative sum (`:cum-sum`) and cumulative count (`:cum-count`) aggregations
|
Type:Bug .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
```clj
Don't know how to determine the type of
[:cum-sum #:lib{:uuid "8abfffc6-0531-47e4-bb99-7837c92b1436"} [:field #:lib{:uuid "f2b32892-bdf7-4725-9d96-dd7931abe072"} 133245]]
```
|
1.0
|
|
process
|
| 1
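The "Don't know how to determine the type of" error above is what an open dispatch table raises when no handler is registered for an operator. As a hedged analogy only (Metabase MLv2 is Clojure; the names below are illustrative, not Metabase's API), the shape of the fix is to register handlers so cumulative aggregations report a type derived from the expression they wrap:

```python
# Analogy in Python: a type-of dispatch table that, like the bug above,
# raises for :cum-sum unless a handler is registered. All names illustrative.

TYPE_OF = {}

def register(op, fn):
    TYPE_OF[op] = fn

def type_of(clause):
    op, *args = clause
    if op not in TYPE_OF:
        raise ValueError(f"Don't know how to determine the type of {clause!r}")
    return TYPE_OF[op](args)

register("field", lambda args: args[0])           # a field carries its own type
# The fix: a cumulative sum has the type of the expression it wraps,
# and a cumulative count is always integral.
register("cum-sum", lambda args: type_of(args[0]))
register("cum-count", lambda args: "Integer")

print(type_of(["cum-sum", ["field", "Float"]]))   # -> "Float"
```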
|
15,243
| 19,181,843,529
|
IssuesEvent
|
2021-12-04 14:41:36
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
npm audit moderate severity postcss dependency related
|
CSS Preprocessing :fire: Security ✨ Parcel 2
|
# 🐛 bug report
NPM audit moderate severity findings in parcel-bundler 1.12.5 (postcss dependency related)
## 🎛 Configuration (.babelrc, package.json, cli command)
from .babelrc:
```
{
"presets": [
"preact",
[
"env",
{
"targets": {
"browsers": "last 2 Firefox versions, last 2 Chrome versions, last 2 Edge versions, last 2 Safari versions"
}
}
]
]
}
```
from package.json:
```
"devDependencies": {
"@babel/core": "^7.11.6",
"@babel/plugin-proposal-class-properties": "^7.10.4",
"babel-preset-env": "^1.7.0",
"babel-preset-preact": "^2.0.0",
"chai": "^4.1.2",
"easyimage": "^3.1.0",
"eslint": "^7.9.0",
"eslint-config-standard-jsx": "^8.1.0",
"eslint-config-standard-preact": "^1.1.6",
"express": "^4.17.1",
"js-yaml": "^3.12.0",
"looks-same": "^3.3.0",
"mocha": "^7.0.0",
"mochawesome": "^6.1.1",
"parcel-bundler": "^1.12.5",
"request-promise": "^4.2.5",
"sass": "^1.26.10",
"selenium-webdriver": "^4.0.0-alpha.7",
"standard": "^14.3.4"
}
```
## 🤔 Expected Behavior
npm audit will succeed
## 😯 Current Behavior
npm audit returns 66 moderate severity vulnerabilities
These all seem related to the same postcss npm advisory (https://npmjs.com/advisories/1693) via cssnano and htmlnano:
```
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
css-declaration-sorter > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> css-declaration-sorter > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
cssnano-util-raw-cache > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> cssnano-util-raw-cache > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-calc > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-calc > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-colormin > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-colormin > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-convert-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-convert-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-discard-comments > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-discard-comments > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-discard-duplicates > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-discard-duplicates > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-discard-empty > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-discard-empty > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-discard-overridden > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-discard-overridden > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-merge-longhand > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-merge-longhand > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-merge-longhand > stylehacks > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-merge-longhand > stylehacks > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-merge-rules > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-merge-rules > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-minify-font-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-minify-font-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-minify-gradients > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-minify-gradients > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-minify-params > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-minify-params > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-minify-selectors > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-minify-selectors > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-charset > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-charset > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-display-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-display-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-positions > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-positions > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-repeat-style > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-repeat-style > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-string > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-string > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-timing-functions > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-timing-functions > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-unicode > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-unicode > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-url > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-url > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-normalize-whitespace > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-normalize-whitespace > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-ordered-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-ordered-values > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-reduce-initial > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-reduce-initial > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-reduce-transforms > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-reduce-transforms > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-svgo > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-svgo > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > cssnano-preset-default >
postcss-unique-selectors > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > cssnano-preset-default
> postcss-unique-selectors > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > cssnano > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > cssnano > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > uncss > postcss
More info https://npmjs.com/advisories/1693
Moderate Regular Expression Denial of Service
Package postcss
Patched in >=8.2.10
Dependency of parcel-bundler [dev]
Path parcel-bundler > htmlnano > purgecss > postcss
More info https://npmjs.com/advisories/1693
found 66 moderate severity vulnerabilities in 1642 scanned packages
66 vulnerabilities require manual review. See the full report for details.
```
## 💁 Possible Solution
Update Parcel's internal dependencies on htmlnano and cssnano to versions that depend on postcss >= 8.2.10.
I'm not sure how difficult that would be...
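As a possible consumer-side stopgap (untested, and note that forcing a major postcss upgrade may break parcel-bundler's own cssnano v4 plugins, since postcss 8 changed the plugin API), Yarn users can pin the transitive postcss version with a `resolutions` field in package.json; npm 6 users would need a third-party tool such as `npm-force-resolutions` to get similar behavior:

```
{
  "resolutions": {
    "postcss": "^8.2.10"
  }
}
```

This only silences the audit finding by swapping the installed version; whether the build still works afterwards would need to be verified.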
## 🔦 Context
Some organizations (including ours) have security policies which prevent production deployments if npm audit vulnerabilities are found (even in dev dependencies).
I know this particular npm audit finding _may_ not be truly as severe as npm audit describes, but dealing with security audits is a fact of life.
The root of the issue appears to be in postcss v7, as mentioned here:
https://github.com/postcss/postcss/issues/1574
but that version will not be fixed, as the maintainer has moved on to a newer major version (postcss v8).
I realize that Parcel 1 is also in "maintenance mode" and there is a push to move to Parcel 2 (as mentioned here: https://github.com/parcel-bundler/parcel/issues/5250#issuecomment-750379659)
I've made a first (so far unsuccessful) attempt to migrate to Parcel 2, but these npm audit findings are _also_ present after uninstalling Parcel 1 (parcel-bundler) and installing the latest Parcel 2 with npm. So this issue likely affects Parcel users of both versions?
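Until a fix lands upstream, one hedged workaround for CI gating (assuming your security policy permits it) is npm's `audit-level` setting, which makes `npm audit` exit non-zero only for vulnerabilities at or above the given severity:

```
# .npmrc — npm audit exits non-zero only for high/critical findings
audit-level=high
```

The same thing can be done per-invocation with `npm audit --audit-level=high`. This does not fix the underlying postcss issue; it only stops moderate findings from failing the pipeline.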
## 💻 Code Sample
for version 1:
```
npm init
npm install parcel-bundler
npm audit
```
or for version 2:
```
npm init
npm install parcel
npm audit
```
## 🌍 Your Environment
| Software         | Version(s)     |
| ---------------- | -------------- |
| Parcel           | 1.12.5         |
| Node             | 12.22.0        |
| npm/Yarn         | npm v. 6.14.11 |
| Operating System | Windows 10     |
denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss minify font values postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss minify font values postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss minify gradients postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss minify gradients postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss minify params postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss minify params postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss minify selectors postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss minify selectors postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize charset postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize charset postcss more info moderate regular 
expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize display values postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize display values postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize positions postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize positions postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize repeat style postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize repeat style postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize string postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize string postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize timing functions postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss 
normalize timing functions postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize unicode postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize unicode postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize url postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize url postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss normalize whitespace postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss normalize whitespace postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss ordered values postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss ordered values postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss reduce initial postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset 
default postcss reduce initial postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss reduce transforms postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss reduce transforms postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss svgo postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss svgo postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano cssnano preset default postcss unique selectors postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano cssnano preset default postcss unique selectors postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler cssnano postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano cssnano postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano uncss postcss more info moderate regular expression denial of service package postcss patched in dependency of parcel bundler path parcel bundler htmlnano purgecss postcss more info found moderate severity vulnerabilities in scanned packages vulnerabilities require manual review see the full report for details 💁 possible solution 
update internal dependencies on htmlnano and cssnano to versions that use postcss version greater than or equal to i m not at all sure how difficult or easy that would be 🔦 context some organizations including ours have security policies which prevent production deployments if npm audit vulnerabilities are found even in dev dependencies i know this particular npm audit finding may not be truly as severe as npm audit describes but dealing with security audits is a fact of life the root of the issue appears to be in postcss as mentioned here but that version will not be fixed as the developer has a newer version to maintain postcss i realize that parcel is also in maintenance mode and there is a push to move to parcel as mentioned here i ve made a first stab so far unsuccessful attempt to migrate to parcel but these npm audit findings are also present after uninstalling parcel parcel bundler and installing latest parcel using npm so this issue likely impacts all parcel users of both versions 💻 code sample for version npm init npm install parcel bundler npm audit or for version npm init npm install parcel npm audit 🌍 your environment software version s parcel node npm yarn npm v operating system win
| 1
|
4,524
| 5,165,977,476
|
IssuesEvent
|
2017-01-17 15:09:03
|
AAROC/DevOps
|
https://api.github.com/repos/AAROC/DevOps
|
opened
|
Site-bdii vulnerabilities addressed
|
network security
|
# Issue type
# Issue type
- [x] Problem with one of the services
- [ ] Ansible problem
- [ ] Puppet problem
- [ ] Container problem
- [ ] Want to join as collaborator
- [ ] General question, enhancement or suggestion
# Repository information
* Branch: master
* git hash: 12a554e
# Issue description
<!-- provide a detailed description of the issue -->
This is the master ticket related to vulnerabilities identified on the machine site-bdii.c4.csir.co.za
|
True
|
Site-bdii vulnerabilities addressed - # Issue type
# Issue type
- [x] Problem with one of the services
- [ ] Ansible problem
- [ ] Puppet problem
- [ ] Container problem
- [ ] Want to join as collaborator
- [ ] General question, enhancement or suggestion
# Repository information
* Branch: master
* git hash: 12a554e
# Issue description
<!-- provide a detailed description of the issue -->
This is the master ticket related to vulnerabilities identified on the machine site-bdii.c4.csir.co.za
|
non_process
|
site bdii vulnerabilities addressed issue type issue type problem with one of the services ansible problem puppet problem container problem want to join as collaborator general question enhancement or suggestion repository information branch master git hash issue description this is the master ticket related to vulnerabilities identified on the machine site bdii csir co za
| 0
|
4,032
| 6,969,020,272
|
IssuesEvent
|
2017-12-11 02:13:35
|
triplea-game/triplea
|
https://api.github.com/repos/triplea-game/triplea
|
closed
|
Remove Codacy?
|
category: dev & admin process close pending confirmation discussion
|
At question: remove codacy branch builds? Codacy is reading zero issues, did we zero that out thanks to the checkstyle? If so, then codacy is seeming a bit redundant.

|
1.0
|
Remove Codacy? - At question: remove codacy branch builds? Codacy is reading zero issues, did we zero that out thanks to the checkstyle? If so, then codacy is seeming a bit redundant.

|
process
|
remove codacy at question remove codacy branch builds codacy is reading zero issues did we zero that out thanks to the checkstyle if so then codacy is seeming a bit redundant
| 1
|
43,137
| 7,025,666,009
|
IssuesEvent
|
2017-12-23 14:16:48
|
emberjs/ember-test-helpers
|
https://api.github.com/repos/emberjs/ember-test-helpers
|
opened
|
Documentation!
|
documentation meta
|
We need to generate a nice set of detailed API docs for the functionality in this library.
- [ ] Add in-line API docs for all public APIs in `addon-test-support/@ember/ember-testing`
- [ ] Add API doc generator (likely publishing to GH Pages).
|
1.0
|
Documentation! - We need to generate a nice set of detailed API docs for the functionality in this library.
- [ ] Add in-line API docs for all public APIs in `addon-test-support/@ember/ember-testing`
- [ ] Add API doc generator (likely publishing to GH Pages).
|
non_process
|
documentation we need to generate a nice set of detailed api docs for the functionality in this library add in line api docs for all public apis in addon test support ember ember testing add api doc generator likely publishing to gh pages
| 0
|
10,037
| 13,044,161,549
|
IssuesEvent
|
2020-07-29 03:47:24
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `Format` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `Format` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `Format` from TiDB -
## Description
Port the scalar function `Format` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function format from tidb description port the scalar function format from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
48,549
| 20,188,040,343
|
IssuesEvent
|
2022-02-11 01:03:58
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
closed
|
OCP 4.8 Testing in CLAB
|
Epic env/lab team/DXC ops and shared services
|
**Describe the epic**
Epic to track all work involved in upgrading CLAB to OCP 4.8, testing all the components, and documenting the process.
**Additional context**
**Definition of done**
- [x] CLAB upgraded to OCP 4.8.
- [x] All Platform Ops features and components tested.
- [x] Upgrade and CHG documentation updated with new/changed steps as needed.
|
1.0
|
OCP 4.8 Testing in CLAB - **Describe the epic**
Epic to track all work involved in upgrading CLAB to OCP 4.8, testing all the components, and documenting the process.
**Additional context**
**Definition of done**
- [x] CLAB upgraded to OCP 4.8.
- [x] All Platform Ops features and components tested.
- [x] Upgrade and CHG documentation updated with new/changed steps as needed.
|
non_process
|
ocp testing in clab describe the epic epic to track all work involved in upgrading clab to ocp testing all the components and documenting the process additional context definition of done clab upgraded to ocp all platform ops features and components tested upgrade and chg documentation updated with new changed steps as needed
| 0
|
20,370
| 27,026,989,046
|
IssuesEvent
|
2023-02-11 18:26:49
|
RobertCraigie/prisma-client-py
|
https://api.github.com/repos/RobertCraigie/prisma-client-py
|
closed
|
Add support for querying from partial models
|
kind/improvement process/candidate topic: client level/advanced priority/high
|
## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently partial models cannot be used with model-based access.
```py
from prisma.partials import UserOnlyId
user = await UserOnlyId.prisma().find_first(
where={
'name': 'Robert',
},
)
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
After #19 is implemented, all it would take is adding the `prisma()` classmethod to generated partial types.
|
1.0
|
Add support for querying from partial models - ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently partial models cannot be used with model-based access.
```py
from prisma.partials import UserOnlyId
user = await UserOnlyId.prisma().find_first(
where={
'name': 'Robert',
},
)
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
After #19 is implemented, all it would take is adding the `prisma()` classmethod to generated partial types.
|
process
|
add support for querying from partial models problem currently partial models cannot be used with model based access py from prisma partials import useronlyid user await useronlyid prisma find first where name robert suggested solution after is implemented all it would take is adding the prisma classmethod to generated partial types
| 1
|
19,029
| 25,038,736,236
|
IssuesEvent
|
2022-11-04 18:26:27
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsolete renal water absorption involved in negative regulation of urine volume
|
obsoletion regulation organism-level process
|
Please provide as much information as you can:
* **GO term ID and Label**
* GO:0035816 renal water absorption involved in negative regulation of urine volume
* **Reason for deprecation**
* No annotations and should be captured in GO-CAM and AEs
* **"Replace by" term (ID and label)**
If all annotations can safely be moved to that term
No annotations
* **"Consider" term(s) (ID and label)**
Suggestions for reannotation
No annotations - not used therefore no suggestions for reannotation
* **Are there annotations to this term?**
- How many EXP: 0
* **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)**
* No
* **Is this term in a subset? (check the AmiGO page for that term)**
* No
* **Any other information**
* No
|
1.0
|
Obsolete renal water absorption involved in negative regulation of urine volume - Please provide as much information as you can:
* **GO term ID and Label**
* GO:0035816 renal water absorption involved in negative regulation of urine volume
* **Reason for deprecation**
* No annotations and should be captured in GO-CAM and AEs
* **"Replace by" term (ID and label)**
If all annotations can safely be moved to that term
No annotations
* **"Consider" term(s) (ID and label)**
Suggestions for reannotation
No annotations - not used therefore no suggestions for reannotation
* **Are there annotations to this term?**
- How many EXP: 0
* **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)**
* No
* **Is this term in a subset? (check the AmiGO page for that term)**
* No
* **Any other information**
* No
|
process
|
obsolete renal water absorption involved in negative regulation of urine volume please provide as much information as you can go term id and label go renal water absorption involved in negative regulation of urine volume reason for deprecation no annotations and should be captured in go cam and aes replace by term id and label if all annotations can safely be moved to that term no annotations consider term s id and label suggestions for reannotation no annotations not used therefore no suggestions for reannotation are there annotations to this term how many exp are there mappings and cross references to this term interpro keywords check quickgo cross references section no is this term in a subset check the amigo page for that term no any other information no
| 1
|
830,968
| 32,032,985,025
|
IssuesEvent
|
2023-09-22 13:33:32
|
nance-eth/nance-interface
|
https://api.github.com/repos/nance-eth/nance-interface
|
closed
|
Nance for other client
|
enhancement help wanted high priority
|
- [ ] Page for people to create a new Nance space
- [ ] Function to manage space owner/admin, edit space configuration
- [x] Page to list all nance spaces
- [x] Better generic interface for Nance spaces (refactor jbdao.org as template)
- [x] Better url, for example: `jbdao.org/?overrideSpace=waterbox` => `nance.app/space/waterbox`
- [ ] Extract nance-sdk and nance-components for self-hosting solution
Discord thread: https://discord.com/channels/1090064637858414633/1090688958066856016
|
1.0
|
Nance for other client - - [ ] Page for people to create a new Nance space
- [ ] Function to manage space owner/admin, edit space configuration
- [x] Page to list all nance spaces
- [x] Better generic interface for Nance spaces (refactor jbdao.org as template)
- [x] Better url, for example: `jbdao.org/?overrideSpace=waterbox` => `nance.app/space/waterbox`
- [ ] Extract nance-sdk and nance-components for self-hosting solution
Discord thread: https://discord.com/channels/1090064637858414633/1090688958066856016
|
non_process
|
nance for other client page for people to create a new nance space function to manage space owner admin edit space configuration page to list all nance spaces better generic interface for nance spaces refactor jbdao org as template better url for example jbdao org overridespace waterbox nance app space waterbox extract nance sdk and nance components for self hosting solution discord thread
| 0
|
715
| 3,206,218,627
|
IssuesEvent
|
2015-10-04 20:29:52
|
pwittchen/ReactiveBeacons
|
https://api.github.com/repos/pwittchen/ReactiveBeacons
|
opened
|
Release 0.1.0
|
release process
|
**Initial release notes**:
added `Filter` class providing methods, which can be used with `filter(...)` method from RxJava inside specific subscription. These methods can be used for filtering stream of Beacons by Proximity, distance, device names and MAC addresses.
**Things to do**:
- [ ] write unit tests
- [ ] perform manual tests
- [ ] bump version of library
- [ ] upload archives to Maven Central
- [ ] update `CHANGELOG.md`
- [ ] update library version in `README.md` after Maven Sync
- [ ] create new GitHub release
|
1.0
|
Release 0.1.0 - **Initial release notes**:
added `Filter` class providing methods, which can be used with `filter(...)` method from RxJava inside specific subscription. These methods can be used for filtering stream of Beacons by Proximity, distance, device names and MAC addresses.
**Things to do**:
- [ ] write unit tests
- [ ] perform manual tests
- [ ] bump version of library
- [ ] upload archives to Maven Central
- [ ] update `CHANGELOG.md`
- [ ] update library version in `README.md` after Maven Sync
- [ ] create new GitHub release
|
process
|
release initial release notes added filter class providing methods which can be used with filter method from rxjava inside specific subscription these methods can be used for filtering stream of beacons by proximity distance device names and mac addresses things to do write unit tests perform manual tests bump version of library upload archives to maven central update changelog md update library version in readme md after maven sync create new github release
| 1
|
108,297
| 13,613,184,960
|
IssuesEvent
|
2020-09-23 11:29:09
|
nextcloud/deck
|
https://api.github.com/repos/nextcloud/deck
|
closed
|
inconsistent design of mouseover text
|
1. to develop bug design: papercut
|
This button bar is using the browser mouseover design...

And here it's the nextcloud stile...

I can reproduse this on all my browsers (firefox and chromium edge)
|
1.0
|
inconsistent design of mouseover text - This button bar is using the browser mouseover design...

And here it's the nextcloud stile...

I can reproduse this on all my browsers (firefox and chromium edge)
|
non_process
|
inconsistent design of mouseover text this button bar is using the browser mouseover design and here it s the nextcloud stile i can reproduse this on all my browsers firefox and chromium edge
| 0
|
105,511
| 9,085,610,349
|
IssuesEvent
|
2019-02-18 08:49:48
|
nextyio/gonex
|
https://api.github.com/repos/nextyio/gonex
|
closed
|
make params config for long-term --testnet for nexty like ropsten/rinkeby
|
feature :cake: need:disccusion 🗣 priority:medium testnet
|
we might deploy an official --testnet like ropsten/rinkeby for testing new features/hardforks before running it on mainnet
the testnet should have official fixed genesis block/network-id and maintaining client's source upgrading similar to mainnet
|
1.0
|
make params config for long-term --testnet for nexty like ropsten/rinkeby - we might deploy an official --testnet like ropsten/rinkeby for testing new features/hardforks before running it on mainnet
the testnet should have official fixed genesis block/network-id and maintaining client's source upgrading similar to mainnet
|
non_process
|
make params config for long term testnet for nexty like ropsten rinkeby we might deploy an official testnet like ropsten rinkeby for testing new features hardforks before running it on mainnet the testnet should have official fixed genesis block network id and maintaining client s source upgrading similar to mainnet
| 0
|
168,951
| 26,717,308,681
|
IssuesEvent
|
2023-01-28 17:40:13
|
Krypton-Suite/Standard-Toolkit
|
https://api.github.com/repos/Krypton-Suite/Standard-Toolkit
|
opened
|
[Bug]: Cannot add a group to the `KryptonRibbon` while using .NET
|
bug designer
|
Cannot add a group to the `KryptonRibbon` while using .NET via the designer. Affected versions are .NET 6 - 7 (plus older versions that are now sunset).
|
1.0
|
[Bug]: Cannot add a group to the `KryptonRibbon` while using .NET - Cannot add a group to the `KryptonRibbon` while using .NET via the designer. Affected versions are .NET 6 - 7 (plus older versions that are now sunset).
|
non_process
|
cannot add a group to the kryptonribbon while using net cannot add a group to the kryptonribbon while using net via the designer affected versions are net plus older versions that are now sunset
| 0
|
8,954
| 12,059,447,090
|
IssuesEvent
|
2020-04-15 19:17:32
|
googleapis/cloud-profiler-nodejs
|
https://api.github.com/repos/googleapis/cloud-profiler-nodejs
|
closed
|
add synth.py and start auto-generating .kokoro, etc
|
api: cloudprofiler type: process
|
There's is currently no `synth.py` in this repo, making it difficult to perform automated updates to `.kokoro`, `README.md`, etc.
|
1.0
|
add synth.py and start auto-generating .kokoro, etc - There's is currently no `synth.py` in this repo, making it difficult to perform automated updates to `.kokoro`, `README.md`, etc.
|
process
|
add synth py and start auto generating kokoro etc there s is currently no synth py in this repo making it difficult to perform automated updates to kokoro readme md etc
| 1
|
666,914
| 22,391,558,516
|
IssuesEvent
|
2022-06-17 08:13:49
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
opened
|
`GLIBC_2.29' not found error when runing the helloworld server example in python
|
kind/bug lang/Python priority/P2
|
<!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpcio==1.46.3
grpcio-tools==1.46.3
Python Language
### What operating system (Linux, Windows,...) and version?
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
### What runtime / compiler are you using (e.g. python version or version of gcc)
python
### What did you do?
I was trying to run the "greeter_server.py" file from the "/grpc/examples/python/helloworld" directory.
### What did you expect to see?
I was expecting to see the server start without any error.
### What did you see instead?
I got the following error message
> Traceback (most recent call last):
> File "greeter_server.py", line 19, in <module>
> import grpc
> File "/home/tqc_adm/.local/lib/python3.7/site-packages/grpc/__init__.py", line 22, in <module>
> from grpc import _compression
> File "/home/tqc_adm/.local/lib/python3.7/site-packages/grpc/_compression.py", line 15, in <module>
> from grpc._cython import cygrpc
> ImportError: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.29' not found (required by /home/tqc_adm/.local/lib/python3.7/site-packages/grpc/_cython/cygrpc.cpython-37m-arm-linux-gnueabihf.so)
### Anything else we should know about your project / environment?
|
1.0
|
`GLIBC_2.29' not found error when running the helloworld server example in python - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpcio==1.46.3
grpcio-tools==1.46.3
Python Language
### What operating system (Linux, Windows,...) and version?
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
### What runtime / compiler are you using (e.g. python version or version of gcc)
python
### What did you do?
I was trying to run the "greeter_server.py" file from the "/grpc/examples/python/helloworld" directory.
### What did you expect to see?
I was expecting to see the server start without any error.
### What did you see instead?
I got the following error message
> Traceback (most recent call last):
> File "greeter_server.py", line 19, in <module>
> import grpc
> File "/home/tqc_adm/.local/lib/python3.7/site-packages/grpc/__init__.py", line 22, in <module>
> from grpc import _compression
> File "/home/tqc_adm/.local/lib/python3.7/site-packages/grpc/_compression.py", line 15, in <module>
> from grpc._cython import cygrpc
> ImportError: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.29' not found (required by /home/tqc_adm/.local/lib/python3.7/site-packages/grpc/_cython/cygrpc.cpython-37m-arm-linux-gnueabihf.so)
### Anything else we should know about your project / environment?
|
non_process
|
glibc not found error when running the helloworld server example in python please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using grpcio grpcio tools python language what operating system linux windows and version pretty name raspbian gnu linux buster name raspbian gnu linux version id version buster version codename buster id raspbian id like debian what runtime compiler are you using e g python version or version of gcc python what did you do i was trying to run the greeter server py file from the grpc examples python helloworld directory what did you expect to see i was expecting to see the server start without any error what did you see instead i got the following error message traceback most recent call last file greeter server py line in import grpc file home tqc adm local lib site packages grpc init py line in from grpc import compression file home tqc adm local lib site packages grpc compression py line in from grpc cython import cygrpc importerror lib arm linux gnueabihf libm so version glibc not found required by home tqc adm local lib site packages grpc cython cygrpc cpython arm linux gnueabihf so anything else we should know about your project environment
| 0
|
70,671
| 15,098,247,789
|
IssuesEvent
|
2021-02-07 21:54:27
|
doc-ai/tensorio
|
https://api.github.com/repos/doc-ai/tensorio
|
opened
|
CVE-2020-26266 (Medium) detected in TensorIOTensorFlow-2.0.3
|
security vulnerability
|
## CVE-2020-26266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>TensorIOTensorFlow-2.0.3</b></p></summary>
<p>An unofficial build of TensorFlow for iOS used by TensorIO, supporting inference, evaluation, and training.</p>
<p>Library home page: <a href="https://storage.googleapis.com/tensorio-build/r2.0/TensorIO-TensorFlow-2.0_2.tar.gz">https://storage.googleapis.com/tensorio-build/r2.0/TensorIO-TensorFlow-2.0_2.tar.gz</a></p>
<p>Path to dependency file: tensorio/examples/ios/Podfile.lock</p>
<p>Path to vulnerable library: tensorio/examples/ios/Podfile.lock</p>
<p>
Dependency Hierarchy:
- TensorIO/TensorFlow-1.2.3 (Root Library)
- :x: **TensorIOTensorFlow-2.0.3** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/doc-ai/tensorio/commit/c82527a35914f266a73930b0a8f8c1e291890b36">c82527a35914f266a73930b0a8f8c1e291890b36</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In affected versions of TensorFlow under certain cases a saved model can trigger use of uninitialized values during code execution. This is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in Eigen. This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.
<p>Publish Date: 2020-12-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26266>CVE-2020-26266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"CocoaPods","packageName":"TensorIOTensorFlow","packageVersion":"2.0.3","packageFilePaths":["/examples/ios/Podfile.lock"],"isTransitiveDependency":true,"dependencyTree":"TensorIO/TensorFlow:1.2.3;TensorIOTensorFlow:2.0.3","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-26266","vulnerabilityDetails":"In affected versions of TensorFlow under certain cases a saved model can trigger use of uninitialized values during code execution. This is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in Eigen. This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26266","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-26266 (Medium) detected in TensorIOTensorFlow-2.0.3 - ## CVE-2020-26266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>TensorIOTensorFlow-2.0.3</b></p></summary>
<p>An unofficial build of TensorFlow for iOS used by TensorIO, supporting inference, evaluation, and training.</p>
<p>Library home page: <a href="https://storage.googleapis.com/tensorio-build/r2.0/TensorIO-TensorFlow-2.0_2.tar.gz">https://storage.googleapis.com/tensorio-build/r2.0/TensorIO-TensorFlow-2.0_2.tar.gz</a></p>
<p>Path to dependency file: tensorio/examples/ios/Podfile.lock</p>
<p>Path to vulnerable library: tensorio/examples/ios/Podfile.lock</p>
<p>
Dependency Hierarchy:
- TensorIO/TensorFlow-1.2.3 (Root Library)
- :x: **TensorIOTensorFlow-2.0.3** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/doc-ai/tensorio/commit/c82527a35914f266a73930b0a8f8c1e291890b36">c82527a35914f266a73930b0a8f8c1e291890b36</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In affected versions of TensorFlow under certain cases a saved model can trigger use of uninitialized values during code execution. This is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in Eigen. This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.
<p>Publish Date: 2020-12-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26266>CVE-2020-26266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"CocoaPods","packageName":"TensorIOTensorFlow","packageVersion":"2.0.3","packageFilePaths":["/examples/ios/Podfile.lock"],"isTransitiveDependency":true,"dependencyTree":"TensorIO/TensorFlow:1.2.3;TensorIOTensorFlow:2.0.3","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-26266","vulnerabilityDetails":"In affected versions of TensorFlow under certain cases a saved model can trigger use of uninitialized values during code execution. This is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in Eigen. This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26266","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in tensoriotensorflow cve medium severity vulnerability vulnerable library tensoriotensorflow an unofficial build of tensorflow for ios used by tensorio supporting inference evaluation and training library home page a href path to dependency file tensorio examples ios podfile lock path to vulnerable library tensorio examples ios podfile lock dependency hierarchy tensorio tensorflow root library x tensoriotensorflow vulnerable library found in head commit a href found in base branch master vulnerability details in affected versions of tensorflow under certain cases a saved model can trigger use of uninitialized values during code execution this is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in eigen this is fixed in versions and publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree tensorio tensorflow tensoriotensorflow isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails in affected versions of tensorflow under certain cases a saved model can trigger use of uninitialized values during code execution this is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in eigen this is fixed in versions and vulnerabilityurl
| 0
|
202,918
| 15,307,542,889
|
IssuesEvent
|
2021-02-24 21:03:31
|
rstudio/rstudio
|
https://api.github.com/repos/rstudio/rstudio
|
closed
|
"Hand pointer" mouse cursor in RStudio does not follow system theme and it is much smaller than normal size
|
bug test
|
<!--
IMPORTANT: Please fill out this template fully! Failure to do so will result in the issue being closed automatically.
This issue tracker is for bugs and feature requests in the RStudio IDE. If you're having trouble with R itself or an R package, see https://www.r-project.org/help.html, and if you want to ask a question rather than report a bug, go to https://community.rstudio.com/. Finally, if you use RStudio Server Pro, get in touch with our Pro support team at support@rstudio.com.
-->
### System details
RStudio Edition : Desktop
RStudio Version : Version 1.4.1103
OS Version : Manjaro GNOME 20.2.1
R Version : 4.0.3
### Steps to reproduce the problem
- Open RStudio
- When the mouse cursor is moved over one of the icons under the menu ("New File", "Open an existing file") or others (like "Export" for exporting images), the cursor becomes much smaller and does not follow the system theme anymore (e.g. while I use a dark cursor theme, the "Hand Pointer" cursor is white).
### Describe the problem in detail
### Describe the behavior you expected
- The mouse cursor should follow the system theme and be of the normal size, not smaller.
<!--
Please keep the below portion in your issue, and check `[x]` the applicable boxes.
-->
- [X] I have read the guide for [submitting good bug reports](https://github.com/rstudio/rstudio/wiki/Writing-Good-Bug-Reports).
- [X] I have installed the latest version of RStudio, and confirmed that the issue still persists.
- [ ] If I am reporting a RStudio crash, I have included a [diagnostics report](https://support.rstudio.com/hc/en-us/articles/200321257-Running-a-Diagnostics-Report).
- [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
See the video below to see what happens:
https://user-images.githubusercontent.com/59918868/105491076-0bff0e00-5cb6-11eb-80ee-164ffc82dfa6.mp4
|
1.0
|
"Hand pointer" mouse cursor in RStudio does not follow system theme and it is much smaller than normal size - <!--
IMPORTANT: Please fill out this template fully! Failure to do so will result in the issue being closed automatically.
This issue tracker is for bugs and feature requests in the RStudio IDE. If you're having trouble with R itself or an R package, see https://www.r-project.org/help.html, and if you want to ask a question rather than report a bug, go to https://community.rstudio.com/. Finally, if you use RStudio Server Pro, get in touch with our Pro support team at support@rstudio.com.
-->
### System details
RStudio Edition : Desktop
RStudio Version : Version 1.4.1103
OS Version : Manjaro GNOME 20.2.1
R Version : 4.0.3
### Steps to reproduce the problem
- Open RStudio
- When the mouse cursor is moved over one of the icons under the menu ("New File", "Open an existing file") or others (like "Export" for exporting images), the cursor becomes much smaller and does not follow the system theme anymore (e.g. while I use a dark cursor theme, the "Hand Pointer" cursor is white).
### Describe the problem in detail
### Describe the behavior you expected
- The mouse cursor should follow the system theme and be of the normal size, not smaller.
<!--
Please keep the below portion in your issue, and check `[x]` the applicable boxes.
-->
- [X] I have read the guide for [submitting good bug reports](https://github.com/rstudio/rstudio/wiki/Writing-Good-Bug-Reports).
- [X] I have installed the latest version of RStudio, and confirmed that the issue still persists.
- [ ] If I am reporting a RStudio crash, I have included a [diagnostics report](https://support.rstudio.com/hc/en-us/articles/200321257-Running-a-Diagnostics-Report).
- [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
See the video below to see what happens:
https://user-images.githubusercontent.com/59918868/105491076-0bff0e00-5cb6-11eb-80ee-164ffc82dfa6.mp4
|
non_process
|
hand pointer mouse cursor in rstudio does not follow system theme and it is much smaller than normal size important please fill out this template fully failure to do so will result in the issue being closed automatically this issue tracker is for bugs and feature requests in the rstudio ide if you re having trouble with r itself or an r package see and if you want to ask a question rather than report a bug go to finally if you use rstudio server pro get in touch with our pro support team at support rstudio com system details rstudio edition desktop rstudio version version os version manjaro gnome r version steps to reproduce the problem open rstudio when the mouse cursor is moved over one of the icons under the menu new file open an existing file or others like export for exporting images the cursor becomes much smaller and does not follow the system theme anymore e g while i use a dark cursor theme the hand pointer cursor is white describe the problem in detail describe the behavior you expected the mouse cursor should follow the system theme and be of the normal size not smaller please keep the below portion in your issue and check the applicable boxes i have read the guide for i have installed the latest version of rstudio and confirmed that the issue still persists if i am reporting a rstudio crash i have included a i have done my best to include a minimal self contained set of instructions for consistently reproducing the issue see the video below to see what happens
| 0
|
2,817
| 5,766,886,498
|
IssuesEvent
|
2017-04-27 08:32:54
|
reasonml/reason-tools
|
https://api.github.com/repos/reasonml/reason-tools
|
closed
|
Switch from rebel to rebuild
|
cat-process pri-low type-feature
|
Low pri, since this only affects contributors. Rebel's install time is pretty brutal.
|
1.0
|
Switch from rebel to rebuild - Low pri, since this only affects contributors. Rebel's install time is pretty brutal.
|
process
|
switch from rebel to rebuild low pri since this only affects contributors rebel s install time is pretty brutal
| 1
|
135,019
| 19,458,111,293
|
IssuesEvent
|
2021-12-23 03:29:45
|
miho73/IPU
|
https://api.github.com/repos/miho73/IPU
|
closed
|
CSS font setting is not correctly set
|
bug design cost-small
|
### Steps to reproduce
In univ.css, very top object, font-face block.
Colon is missing.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
Font is not rendered as expected.
|
1.0
|
CSS font setting is not correctly set - ### Steps to reproduce
In univ.css, very top object, font-face block.
Colon is missing.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
Font is not rendered as expected.
|
non_process
|
css font setting is not correctly set steps to reproduce in univ css very top object font face block colon is missing ✔️ expected behavior no response ❌ actual behavior font is not rendered as expected
| 0
|
12,793
| 15,172,500,359
|
IssuesEvent
|
2021-02-13 09:28:57
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
closed
|
Contest tagger: impact of EMSI service trial version restrictions
|
BE ShapeupProcess challenge- recommender-tool question
|
Are we using the trial version? Will these restrictions have any impact when we go to prod?
<img width="740" alt="Screenshot 2021-02-09 at 4 53 41 PM" src="https://user-images.githubusercontent.com/58783823/107602150-a4056d00-6c4e-11eb-8844-8f1f4d16bacc.png">
|
1.0
|
Contest tagger: impact of EMSI service trial version restrictions - Are we using the trial version? Will these restrictions have any impact when we go to prod?
<img width="740" alt="Screenshot 2021-02-09 at 4 53 41 PM" src="https://user-images.githubusercontent.com/58783823/107602150-a4056d00-6c4e-11eb-8844-8f1f4d16bacc.png">
|
process
|
contest tagger impact of emsi service trial version restrictions are we using the trial version will these restrictions have any impact when we go to prod img width alt screenshot at pm src
| 1
|
9,618
| 12,555,543,912
|
IssuesEvent
|
2020-06-07 06:32:24
|
bio-miga/miga
|
https://api.github.com/repos/bio-miga/miga
|
closed
|
Calculate query dataset AAI/ANI on request
|
Processing enhancement
|
It should be possible to add AAI/ANI estimation against a given reference dataset upon request.
|
1.0
|
Calculate query dataset AAI/ANI on request - It should be possible to add AAI/ANI estimation against a given reference dataset upon request.
|
process
|
calculate query dataset aai ani on request it should be possible to add aai ani estimation against a given reference dataset upon request
| 1
|
40,606
| 5,310,456,057
|
IssuesEvent
|
2017-02-12 20:13:57
|
EmanHylooz/Abjjad
|
https://api.github.com/repos/EmanHylooz/Abjjad
|
closed
|
Reflect the In App subscription on Abjjad Web
|
1 Day Development Done IOS Under Testing
|
In this page: http://st.abjjad.com/account/console
If the user is subscribed through Apple in app subscription, it should reflect on:
طرق الدّفع (payment methods) <-- it should say iTunes
الدّفعات (payments) <-- his payment bill should reflect here as in the date and amount
نوع الإشتراك (subscription type) <-- this is standard, no need to customize
ألغِ الإشتراك الشهري (cancel the monthly subscription) <-- Take him to the iTunes web page
|
1.0
|
Reflect the In App subscription on Abjjad Web - In this page: http://st.abjjad.com/account/console
If the user is subscribed through Apple in app subscription, it should reflect on:
طرق الدّفع (payment methods) <-- it should say iTunes
الدّفعات (payments) <-- his payment bill should reflect here as in the date and amount
نوع الإشتراك (subscription type) <-- this is standard, no need to customize
ألغِ الإشتراك الشهري (cancel the monthly subscription) <-- Take him to the iTunes web page
|
non_process
|
reflect the in app subscription on abjjad web in this page if the user is subscribed through apple in app subscription it should reflect on طرق الدّفع it should say itunes الدّفعات his payment bill should reflect here as in the date and amount نوع الإشتراك this is standard no need to customize ألغِ الإشتراك الشهري take him to itunes web page
| 0
|
20,666
| 27,334,852,397
|
IssuesEvent
|
2023-02-26 03:50:59
|
cse442-at-ub/project_s23-team-infinity
|
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
|
closed
|
Settle the basic environment for React.js
|
Processing Task Sprint 1
|
**Task Test**
*Test 1*
1) Install Node.js and NPM (Node Package Manager)
2) Create a basic app name 'my-app' by running 'npx create-react-app my-app' command in the terminal
3) Open the directory by running 'cd my-app' command
4) start the app by running 'npm start' command, default hosted at localhost:3000
|
1.0
|
Settle the basic environment for React.js - **Task Test**
*Test 1*
1) Install Node.js and NPM (Node Package Manager)
2) Create a basic app name 'my-app' by running 'npx create-react-app my-app' command in the terminal
3) Open the directory by running 'cd my-app' command
4) start the app by running 'npm start' command, default hosted at localhost:3000
|
process
|
settle the basic environment for react js task test test install node js and npm node package manager create a basic app name my app by running npx create react app my app command in the terminal open the directory by running cd my app command start the app by running npm start command default hosted at localhost
| 1
|
401,929
| 27,342,559,393
|
IssuesEvent
|
2023-02-26 23:40:17
|
DelvinSynclaire/BetterCalendar
|
https://api.github.com/repos/DelvinSynclaire/BetterCalendar
|
closed
|
First Entry - App is 60% complete - Sat, 2/25 11:58 PM
|
bug documentation enhancement
|
Fixed a bug that makes it so that when you hit any other button while doing a sub-task, the app crashes when you go to add a new sub-task anywhere else.
Need to Add :
- sub-tasks should be automatically added if you are in the middle of typing them and you click the "add subtask" button
- the return key should add a new sub-task for you to fill out
- subtasks that are not filled out need to disappear when you click any other button or when you click the screen in general.
- all titles and names should be editable with a tap
- clicking locations should open google maps to that location.
|
1.0
|
First Entry - App is 60% complete - Sat, 2/25 11:58 PM - Fixed a bug that makes it so that when you hit any other button while doing a sub-task, the app crashes when you go to add a new sub-task anywhere else.
Need to Add :
- sub-tasks should be automatically added if you are in the middle of typing them and you click the "add subtask" button
- the return key should add a new sub-task for you to fill out
- subtasks that are not filled out need to disappear when you click any other button or when you click the screen in general.
- all titles and names should be editable with a tap
- clicking locations should open google maps to that location.
|
non_process
|
first entry app is complete sat pm fixed a bug that makes it so that when you hit any other button while doing a sub task the app crashes when you go to add a new sub task anywhere else need to add sub tasks should be automatically added if you are in the middle of typing them and you click the add subtask button the return key should add a new sub task for you to fill out subtasks that are not filled out need to disappear when you click any other button or when you click the screen in general all titles and names should be editable with a tap clicking locations should open google maps to that location
| 0
|
57,360
| 6,545,050,548
|
IssuesEvent
|
2017-09-04 01:00:15
|
radare/radare2
|
https://api.github.com/repos/radare/radare2
|
closed
|
Incorrect ESIL for cwde (Intel 32 and 64 bit)
|
bug easy esil test-required
|
There's a potential error with ESIL statement for cwde instruction:
`16,ax,>>,ax,eax,=,$c,?{,0xffff0000,eax,|=}` - there seems to be a missing comma between |= and }
Same with x64:
`32,eax,>>,eax,rax,=,$c,?{,0xffffffff00000000,rax,|=}`
Reproduction steps for x86:
```
$ cat > cwde32.asm << EOF
BITS 32
cwde
EOF
$ nasm -f elf cwde32.asm
$ r2 cwde32.o
[0x08000110]> pd 1
;-- entry0:
;-- section..text:
0x08000110 98 cwde ; section 1 va=0x08000110 pa=0x00000110 sz=1 vsz=1 rwx=m-r-x .text
[0x08000110]> pdj 1
[{"offset":134218000,"esil":"16,ax,>>,ax,eax,=,$c,?{,0xffff0000,eax,|=}","refptr":false,"fcn_addr":0,"fcn_last":0,"size":1,"opcode":"cwde","bytes":"98","family":"cpu","type":"null","type_num":0,"type2_num":0,"flags":["entry0","section..text"],"comment":"c2VjdGlvbiAxIHZhPTB4MDgwMDAxMTAgcGE9MHgwMDAwMDExMCBzej0xIHZzej0xIHJ3eD1tLXIteCAudGV4dA=="}]
```
Reproduction steps for x64:
```
$ cat > cwde64.asm << EOF
BITS 64
cwde
EOF
$ nasm -f elf64 cwde64.asm
$ r2 cwde64.o
[0x08000180]> pd 1
;-- entry0:
;-- section..text:
0x08000180 98 cwde ; section 1 va=0x08000180 pa=0x00000180 sz=1 vsz=1 rwx=m-r-x .text
[0x08000180]> pdj 1
[{"offset":134218112,"esil":"32,eax,>>,eax,rax,=,$c,?{,0xffffffff00000000,rax,|=}","refptr":false,"fcn_addr":0,"fcn_last":0,"size":1,"opcode":"cwde","bytes":"98","family":"cpu","type":"null","type_num":0,"type2_num":0,"flags":["entry0","section..text"],"comment":"c2VjdGlvbiAxIHZhPTB4MDgwMDAxODAgcGE9MHgwMDAwMDE4MCBzej0xIHZzej0xIHJ3eD1tLXIteCAudGV4dA=="}]
```
|
1.0
|
Incorrect ESIL for cwde (Intel 32 and 64 bit) - There's a potential error with ESIL statement for cwde instruction:
`16,ax,>>,ax,eax,=,$c,?{,0xffff0000,eax,|=}` - there seems to be a missing comma between |= and }
Same with x64:
`32,eax,>>,eax,rax,=,$c,?{,0xffffffff00000000,rax,|=}`
Reproduction steps for x86:
```
$ cat > cwde32.asm << EOF
BITS 32
cwde
EOF
$ nasm -f elf cwde32.asm
$ r2 cwde32.o
[0x08000110]> pd 1
;-- entry0:
;-- section..text:
0x08000110 98 cwde ; section 1 va=0x08000110 pa=0x00000110 sz=1 vsz=1 rwx=m-r-x .text
[0x08000110]> pdj 1
[{"offset":134218000,"esil":"16,ax,>>,ax,eax,=,$c,?{,0xffff0000,eax,|=}","refptr":false,"fcn_addr":0,"fcn_last":0,"size":1,"opcode":"cwde","bytes":"98","family":"cpu","type":"null","type_num":0,"type2_num":0,"flags":["entry0","section..text"],"comment":"c2VjdGlvbiAxIHZhPTB4MDgwMDAxMTAgcGE9MHgwMDAwMDExMCBzej0xIHZzej0xIHJ3eD1tLXIteCAudGV4dA=="}]
```
Reproduction steps for x64:
```
$ cat > cwde64.asm << EOF
BITS 64
cwde
EOF
$ nasm -f elf64 cwde64.asm
$ r2 cwde64.o
[0x08000180]> pd 1
;-- entry0:
;-- section..text:
0x08000180 98 cwde ; section 1 va=0x08000180 pa=0x00000180 sz=1 vsz=1 rwx=m-r-x .text
[0x08000180]> pdj 1
[{"offset":134218112,"esil":"32,eax,>>,eax,rax,=,$c,?{,0xffffffff00000000,rax,|=}","refptr":false,"fcn_addr":0,"fcn_last":0,"size":1,"opcode":"cwde","bytes":"98","family":"cpu","type":"null","type_num":0,"type2_num":0,"flags":["entry0","section..text"],"comment":"c2VjdGlvbiAxIHZhPTB4MDgwMDAxODAgcGE9MHgwMDAwMDE4MCBzej0xIHZzej0xIHJ3eD1tLXIteCAudGV4dA=="}]
```
|
non_process
|
incorrect esil for cwde intel and bit there s a potential error with esil statement for cwde instruction ax ax eax c eax there seems to be a missing comma between and same with eax eax rax c rax reproduction steps for cat asm bits cwde eof nasm f elf o o pd section text cwde section va pa sz vsz rwx m r x text pdj comment reproduction steps for cat asm bits cwde eof nasm f asm o pd section text cwde section va pa sz vsz rwx m r x text pdj comment
| 0
|
361,440
| 10,708,912,567
|
IssuesEvent
|
2019-10-24 20:46:47
|
opencv/opencv
|
https://api.github.com/repos/opencv/opencv
|
closed
|
FREAK detector causes segmentation fault on detect
|
affected: 2.4 auto-transferred bug category: python bindings priority: low
|
Transferred from http://code.opencv.org/issues/2302
```
|| Joe Palmer on 2012-08-24 14:39
|| Priority: Low
|| Affected: branch '2.4'
|| Category: python bindings
|| Tracker: Bug
|| Difficulty: None
|| PR:
|| Platform: None / None
```
## FREAK detector causes segmentation fault on detect
```
I have the following code which works great for SURF:
<pre>
im = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
surfDetector = cv2.FeatureDetector_create("SURF")
surfDescriptorExtractor = cv2.DescriptorExtractor_create("SURF")
keypoints = surfDetector.detect(im)
(kp, descritors) = surfDescriptorExtractor.compute(im,keypoints)
</pre>
However, I want to use the new FREAK descriptor so I have updated the code to the following:
<pre>
im = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
freakDetector = cv2.FeatureDetector_create('FREAK');
freakDescriptorExtractor = cv2.DescriptorExtractor_create("FREAK")
keypoints2 = freakDetector.detect(im)
(kp2, descritors2) = freakDescriptorExtractor.compute(im,keypoints2)
</pre>
But then I get an empty response with the following error in the log:
<pre>
child pid 915 exit signal Segmentation fault (11)
</pre>
Have I missed something or is this a bug?
```
## History
##### Joe Palmer on 2012-09-15 13:12
```
Any luck on this?
I have tried many things but always get a segmentation fault
```
##### Rafael Saracchini on 2013-02-14 09:47
```
Try to use another feature detector. As far I know, FREAK is only a feature descriptor. The call "cv2.FeatureDetector_create('FREAK')" will return NULL, thus causing a segfault when you try to use "detect". The SURF algorithm, instead can perform both, that's why the first code works well.
```
##### Kirill Kornyakov on 2013-02-15 07:54
```
Joe, anybody, could you confirm that FREAK descriptor extractor works with other detectors? If so, we can cancel this bug. But it is possible that we need to update either documentation or to print an informative error message, rather than crash with segfault.
- Affected version set to branch '2.4'
- Priority changed from Normal to Low
```
##### Rafael Saracchini on 2013-02-26 13:44
```
Kirill Kornyakov wrote:
> Joe, anybody, could you confirm that FREAK descriptor extractor works with other detectors? If so, we can cancel this bug. But it is possible that we need to update either documentation or to print an informative error message, rather than crash with segfault.
I had compared FREAK against all feature detectors (by the "create" method) without segfaults. It barely detects something in most of my tests when trying to detect objects, but it does not crashes at all.
```
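Rafael's diagnosis above — the `'FREAK'` factory returning NULL because FREAK is a descriptor extractor only, and the subsequent `detect()` call crashing — can be mimicked in pure Python. This is an analogue of the failure mode, not actual cv2 code (Python surfaces the bad call as an exception rather than a segfault):

```python
# In OpenCV 2.4, cv2.FeatureDetector_create('FREAK') effectively handed back
# a null object; invoking detect() on it then segfaulted. The Python
# analogue of calling a method on that null result:
detector = None  # stand-in for the NULL returned by the factory
try:
    detector.detect("image")
    error = None
except AttributeError as exc:
    error = str(exc)

assert error is not None and "NoneType" in error
```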
|
1.0
|
FREAK detector causes segmentation fault on detect - Transferred from http://code.opencv.org/issues/2302
```
|| Joe Palmer on 2012-08-24 14:39
|| Priority: Low
|| Affected: branch '2.4'
|| Category: python bindings
|| Tracker: Bug
|| Difficulty: None
|| PR:
|| Platform: None / None
```
## FREAK detector causes segmentation fault on detect
```
I have the following code which works great for SURF:
<pre>
im = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
surfDetector = cv2.FeatureDetector_create("SURF")
surfDescriptorExtractor = cv2.DescriptorExtractor_create("SURF")
keypoints = surfDetector.detect(im)
(kp, descritors) = surfDescriptorExtractor.compute(im,keypoints)
</pre>
However, I want to use the new FREAK descriptor so I have updated the code to the following:
<pre>
im = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
freakDetector = cv2.FeatureDetector_create('FREAK');
freakDescriptorExtractor = cv2.DescriptorExtractor_create("FREAK")
keypoints2 = freakDetector.detect(im)
(kp2, descritors2) = freakDescriptorExtractor.compute(im,keypoints2)
</pre>
But then I get an empty response with the following error in the log:
<pre>
child pid 915 exit signal Segmentation fault (11)
</pre>
Have I missed something or is this a bug?
```
## History
##### Joe Palmer on 2012-09-15 13:12
```
Any luck on this?
I have tried many things but always get a segmentation fault
```
##### Rafael Saracchini on 2013-02-14 09:47
```
Try to use another feature detector. As far I know, FREAK is only a feature descriptor. The call "cv2.FeatureDetector_create('FREAK')" will return NULL, thus causing a segfault when you try to use "detect". The SURF algorithm, instead can perform both, that's why the first code works well.
```
##### Kirill Kornyakov on 2013-02-15 07:54
```
Joe, anybody, could you confirm that FREAK descriptor extractor works with other detectors? If so, we can cancel this bug. But it is possible that we need to update either documentation or to print an informative error message, rather than crash with segfault.
- Affected version set to branch '2.4'
- Priority changed from Normal to Low
```
##### Rafael Saracchini on 2013-02-26 13:44
```
Kirill Kornyakov wrote:
> Joe, anybody, could you confirm that FREAK descriptor extractor works with other detectors? If so, we can cancel this bug. But it is possible that we need to update either documentation or to print an informative error message, rather than crash with segfault.
I had compared FREAK against all feature detectors (by the "create" method) without segfaults. It barely detects something in most of my tests when trying to detect objects, but it does not crashes at all.
```
|
non_process
|
freak detector causes segmentation fault on detect transferred from joe palmer on priority low affected branch category python bindings tracker bug difficulty none pr platform none none freak detector causes segmentation fault on detect i have the following code which works great for surf im cvtcolor image color surfdetector featuredetector create surf surfdescriptorextractor descriptorextractor create surf keypoints surfdetector detect im kp descritors surfdescriptorextractor compute im keypoints however i want to use the new freak descriptor so i have updated the code to the following im cvtcolor image color freakdetector featuredetector create freak freakdescriptorextractor descriptorextractor create freak freakdetector detect im freakdescriptorextractor compute im but then i get an empty response with the following error in the log child pid exit signal segmentation fault have i missed something or is this a bug history joe palmer on any luck on this i have tried many things but always get a segmentation fault rafael saracchini on try to use another feature detector as far i know freak is only a feature descriptor the call featuredetector create freak will return null thus causing a segfault when you try to use detect the surf algorithm instead can perform both that s why the first code works well kirill kornyakov on joe anybody could you confirm that freak descriptor extractor works with other detectors if so we can cancel this bug but it is possible that we need to update either documentation or to print an informative error message rather than crash with segfault affected version set to branch priority changed from normal to low rafael saracchini on kirill kornyakov wrote joe anybody could you confirm that freak descriptor extractor works with other detectors if so we can cancel this bug but it is possible that we need to update either documentation or to print an informative error message rather than crash with segfault i had compared freak against all 
feature detectors by the create method without segfaults it barely detects something in most of my tests when trying to detect objects but it does not crashes at all
| 0
|
603,011
| 18,521,418,556
|
IssuesEvent
|
2021-10-20 15:23:25
|
ARMmbed/mbed-os
|
https://api.github.com/repos/ARMmbed/mbed-os
|
closed
|
Exporting blinky example results in error
|
type: bug devices: nxp priority: untriaged
|
<!--
************************************** WARNING **************************************
The ciarcom bot parses this header automatically. Any deviation from the
template may cause the bot to automatically correct this header or may result in a
warning message, requesting updates.
Please ensure that nothing follows the Issue request type section, all
issue details are within the Description section and no changes are made to the
template format (as detailed below).
*************************************************************************************
-->
### Description
<!--
Required
Add detailed description of what you are reporting.
Good example: https://os.mbed.com/docs/latest/reference/workflow.html
Things to consider sharing:
- What target does this relate to?
- What toolchain (name + version) are you using?
- What tools (name + version - is it mbed-cli, online compiler or IDE) are you using?
- What is the SHA of Mbed OS (git log -n1 --oneline)?
- Steps to reproduce. (Did you publish code or a test case that exhibits the problem?)
-->
When trying to export the blinky example the following error occurs:
Traceback (most recent call last):
File "/Users/user/Downloads/ARM-mbed/mbed-cli_test2/mbed_blinky/.temp/tools/project.py", line 423, in <module>
main()
File "/Users/user/Downloads/ARM-mbed/mbed-cli_test2/mbed_blinky/.temp/tools/project.py", line 415, in main
ignore=options.ignore
File "/Users/user/Downloads/ARM-mbed/mbed-cli_test2/mbed_blinky/.temp/tools/project.py", line 145, in export
notify.info("Using targets from %s" % targets_json)
AttributeError: 'NoneType' object has no attribute 'info'
Steps to reproduce:
1. Use mbed-cli v1.8.3 to import Mbed OS 2 blinky example:
`mbed import https://mbed.org/teams/mbed/code/mbed_blinky/`
Hint: For my board (LPC11U68), only Mbed OS 2 is supported.
2. Export using
`mbed export -i mcuxpresso -m LPC11U68`
Solution:
Add the following to function `export` in `tools/project.py` (line 133):
if not notify:
notify=TerminalNotifier()
### Issue request type
<!--
Required
Please add only one X to one of the following types. Do not fill multiple types (split the issue otherwise.)
Please note this is not a GitHub task list, indenting the boxes or changing the format to add a '.' or '*' in front
of them would change the meaning incorrectly. The only changes to be made are to add a description text under the
description heading and to add a 'x' to the correct box.
-->
[ ] Question
[ ] Enhancement
[X] Bug
|
1.0
|
Exporting blinky example results in error - <!--
************************************** WARNING **************************************
The ciarcom bot parses this header automatically. Any deviation from the
template may cause the bot to automatically correct this header or may result in a
warning message, requesting updates.
Please ensure that nothing follows the Issue request type section, all
issue details are within the Description section and no changes are made to the
template format (as detailed below).
*************************************************************************************
-->
### Description
<!--
Required
Add detailed description of what you are reporting.
Good example: https://os.mbed.com/docs/latest/reference/workflow.html
Things to consider sharing:
- What target does this relate to?
- What toolchain (name + version) are you using?
- What tools (name + version - is it mbed-cli, online compiler or IDE) are you using?
- What is the SHA of Mbed OS (git log -n1 --oneline)?
- Steps to reproduce. (Did you publish code or a test case that exhibits the problem?)
-->
When trying to export the blinky example the following error occurs:
Traceback (most recent call last):
File "/Users/user/Downloads/ARM-mbed/mbed-cli_test2/mbed_blinky/.temp/tools/project.py", line 423, in <module>
main()
File "/Users/user/Downloads/ARM-mbed/mbed-cli_test2/mbed_blinky/.temp/tools/project.py", line 415, in main
ignore=options.ignore
File "/Users/user/Downloads/ARM-mbed/mbed-cli_test2/mbed_blinky/.temp/tools/project.py", line 145, in export
notify.info("Using targets from %s" % targets_json)
AttributeError: 'NoneType' object has no attribute 'info'
Steps to reproduce:
1. Use mbed-cli v1.8.3 to import Mbed OS 2 blinky example:
`mbed import https://mbed.org/teams/mbed/code/mbed_blinky/`
Hint: For my board (LPC11U68), only Mbed OS 2 is supported.
2. Export using
`mbed export -i mcuxpresso -m LPC11U68`
Solution:
Add the following to function `export` in `tools/project.py` (line 133):
if not notify:
notify=TerminalNotifier()
### Issue request type
<!--
Required
Please add only one X to one of the following types. Do not fill multiple types (split the issue otherwise.)
Please note this is not a GitHub task list, indenting the boxes or changing the format to add a '.' or '*' in front
of them would change the meaning incorrectly. The only changes to be made are to add a description text under the
description heading and to add a 'x' to the correct box.
-->
[ ] Question
[ ] Enhancement
[X] Bug
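The one-line guard suggested in the report can be sketched as follows. `TerminalNotifier` here is a hypothetical minimal stand-in for the real class used by `tools/project.py`, and `export` is reduced to the relevant call:

```python
class TerminalNotifier:
    # Hypothetical minimal stand-in for the notifier used by tools/project.py.
    def info(self, message):
        print(message)

def export(target, ide, notify=None, targets_json="targets.json"):
    # Without this guard, notify.info(...) raises
    # AttributeError: 'NoneType' object has no attribute 'info'
    # whenever the caller omits a notifier -- exactly the reported traceback.
    if not notify:
        notify = TerminalNotifier()
    notify.info("Using targets from %s" % targets_json)
    return notify

notifier = export("LPC11U68", "mcuxpresso")
```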
|
non_process
|
exporting blinky example results in error warning the ciarcom bot parses this header automatically any deviation from the template may cause the bot to automatically correct this header or may result in a warning message requesting updates please ensure that nothing follows the issue request type section all issue details are within the description section and no changes are made to the template format as detailed below description required add detailed description of what you are reporting good example things to consider sharing what target does this relate to what toolchain name version are you using what tools name version is it mbed cli online compiler or ide are you using what is the sha of mbed os git log oneline steps to reproduce did you publish code or a test case that exhibits the problem when trying to export the blinky example the following error occurs traceback most recent call last file users user downloads arm mbed mbed cli mbed blinky temp tools project py line in main file users user downloads arm mbed mbed cli mbed blinky temp tools project py line in main ignore options ignore file users user downloads arm mbed mbed cli mbed blinky temp tools project py line in export notify info using targets from s targets json attributeerror nonetype object has no attribute info steps to reproduce use mbed cli to import mbed os blinky example mbed import hint for my board only mbed os is supported export using mbed export i mcuxpresso m solution add the following to function export in tools project py line if not notify notify terminalnotifier issue request type required please add only one x to one of the following types do not fill multiple types split the issue otherwise please note this is not a github task list indenting the boxes or changing the format to add a or in front of them would change the meaning incorrectly the only changes to be made are to add a description text under the description heading and to add a x to the correct box question 
enhancement bug
| 0
|
42,117
| 12,877,203,791
|
IssuesEvent
|
2020-07-11 09:39:57
|
Tsanfer/vuepress_theme_reco-Github_Actions
|
https://api.github.com/repos/Tsanfer/vuepress_theme_reco-Github_Actions
|
closed
|
CVE-2018-14040 (Medium) detected in bootstrap-3.3.5.min.js
|
security vulnerability
|
## CVE-2018-14040 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/vuepress_theme_reco-Github_Actions/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>Path to vulnerable library: /vuepress_theme_reco-Github_Actions/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Tsanfer/vuepress_theme_reco-Github_Actions/commit/2ccef8a3eb55435c9702d5d1a3d773dd24734d7a">2ccef8a3eb55435c9702d5d1a3d773dd24734d7a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-14040 (Medium) detected in bootstrap-3.3.5.min.js - ## CVE-2018-14040 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/vuepress_theme_reco-Github_Actions/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>Path to vulnerable library: /vuepress_theme_reco-Github_Actions/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Tsanfer/vuepress_theme_reco-Github_Actions/commit/2ccef8a3eb55435c9702d5d1a3d773dd24734d7a">2ccef8a3eb55435c9702d5d1a3d773dd24734d7a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file tmp ws scm vuepress theme reco github actions node modules autocomplete js test playground jquery html path to vulnerable library vuepress theme reco github actions node modules autocomplete js test playground jquery html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href vulnerability details in bootstrap before xss is possible in the collapse data parent attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org webjars npm bootstrap org webjars bootstrap step up your open source security game with whitesource
| 0
|
227,971
| 25,139,826,603
|
IssuesEvent
|
2022-11-09 22:01:44
|
temporalio/web
|
https://api.github.com/repos/temporalio/web
|
closed
|
eslint-7.21.0.tgz: 3 vulnerabilities (highest severity is: 7.5) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-7.21.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimatch/package.json</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (eslint version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-3807](https://www.mend.io/vulnerability-database/CVE-2021-3807) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ansi-regex-5.0.0.tgz | Transitive | 7.22.0 | ✅ |
| [CVE-2020-28469](https://www.mend.io/vulnerability-database/CVE-2020-28469) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | glob-parent-5.1.1.tgz | Transitive | 7.22.0 | ✅ |
| [CVE-2022-3517](https://www.mend.io/vulnerability-database/CVE-2022-3517) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | minimatch-3.0.4.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3807</summary>
### Vulnerable Library - <b>ansi-regex-5.0.0.tgz</b></p>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/eslint/node_modules/ansi-regex/package.json,/node_modules/cli-table3/node_modules/ansi-regex/package.json,/node_modules/table/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.21.0.tgz (Root Library)
- table-6.0.7.tgz
- string-width-4.2.2.tgz
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution (ansi-regex): 5.0.1</p>
<p>Direct dependency fix Resolution (eslint): 7.22.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-28469</summary>
### Vulnerable Library - <b>glob-parent-5.1.1.tgz</b></p>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.21.0.tgz (Root Library)
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (eslint): 7.22.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-3517</summary>
### Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.21.0.tgz (Root Library)
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
<p></p>
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
eslint-7.21.0.tgz: 3 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-7.21.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimatch/package.json</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (eslint version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-3807](https://www.mend.io/vulnerability-database/CVE-2021-3807) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ansi-regex-5.0.0.tgz | Transitive | 7.22.0 | ✅ |
| [CVE-2020-28469](https://www.mend.io/vulnerability-database/CVE-2020-28469) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | glob-parent-5.1.1.tgz | Transitive | 7.22.0 | ✅ |
| [CVE-2022-3517](https://www.mend.io/vulnerability-database/CVE-2022-3517) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | minimatch-3.0.4.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3807</summary>
### Vulnerable Library - <b>ansi-regex-5.0.0.tgz</b></p>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/eslint/node_modules/ansi-regex/package.json,/node_modules/cli-table3/node_modules/ansi-regex/package.json,/node_modules/table/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.21.0.tgz (Root Library)
- table-6.0.7.tgz
- string-width-4.2.2.tgz
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution (ansi-regex): 5.0.1</p>
<p>Direct dependency fix Resolution (eslint): 7.22.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-28469</summary>
### Vulnerable Library - <b>glob-parent-5.1.1.tgz</b></p>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.21.0.tgz (Root Library)
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (eslint): 7.22.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
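If upgrading the direct dependency (eslint 7.22.0) is not immediately possible, one way to force the patched transitive glob-parent is an `overrides` entry in `package.json` (supported by npm 8.3+; `resolutions` is the Yarn classic equivalent). A minimal sketch, assuming an npm-based project:

```json
{
  "overrides": {
    "glob-parent": "5.1.2"
  }
}
```

After adding the entry, re-run `npm install` and confirm the resolved version with `npm ls glob-parent`.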
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-3517</summary>
### Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.21.0.tgz (Root Library)
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
<p></p>
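Independently of the minimatch upgrade, a defense-in-depth guard can reject oversized patterns before they reach any glob matcher, since ReDoS inputs of this kind rely on very long repeated sequences. The length limit below is an illustrative assumption, not a value from the advisory:

```javascript
// Reject patterns that are not strings or are implausibly long before
// passing them to a glob matcher. 1024 is a conservative, arbitrary cap.
const MAX_PATTERN_LENGTH = 1024;

function isSafePattern(pattern) {
  if (typeof pattern !== 'string') return false;
  if (pattern.length > MAX_PATTERN_LENGTH) return false;
  return true;
}

console.log(isSafePattern('src/**/*.js'));    // true
console.log(isSafePattern('{'.repeat(5000))); // false
```

This does not replace the upgrade to minimatch 3.0.5; it only limits exposure to attacker-controlled patterns in the meantime.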
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_process
|
| 0
|
71,495
| 15,207,764,677
|
IssuesEvent
|
2021-02-17 00:58:04
|
billmcchesney1/foxtrot
|
https://api.github.com/repos/billmcchesney1/foxtrot
|
opened
|
CVE-2020-36189 (Medium) detected in jackson-databind-2.9.9.1.jar
|
security vulnerability
|
## CVE-2020-36189 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: foxtrot/foxtrot-sql/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar</p>
<p>
Dependency Hierarchy:
- dropwizard-jackson-1.3.13.jar (Root Library)
- :x: **jackson-databind-2.9.9.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/foxtrot/commit/ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc">ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189>CVE-2020-36189</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/commit/33d96c13fe18a2dad01b19ce195548c9acea9da4">https://github.com/FasterXML/jackson-databind/commit/33d96c13fe18a2dad01b19ce195548c9acea9da4</a></p>
<p>Release Date: 2020-12-26</p>
<p>Fix Resolution: Replace or update the following files: SubTypeValidator.java, VERSION-2.x</p>
</p>
</details>
<p></p>
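Since no minimum fix version of the direct dependency is flagged here, one way to apply the fix is to pin the transitive jackson-databind via Maven's `dependencyManagement` in the parent POM. Per the description above, 2.9.10.8 is the first 2.9.x release without this flaw; the snippet is a sketch to adapt, not a verified drop-in for this project:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10.8</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Verify the override took effect with `mvn dependency:tree -Dincludes=com.fasterxml.jackson.core:jackson-databind`.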
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9.1","packageFilePaths":["/foxtrot-sql/pom.xml","/foxtrot-core/pom.xml","/foxtrot-server/pom.xml","/foxtrot-common/pom.xml","/foxtrot-translator/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.dropwizard:dropwizard-jackson:1.3.13;com.fasterxml.jackson.core:jackson-databind:2.9.9.1","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36189","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189","cvss2Severity":"medium","cvss2Score":"6.8","extraData":{}}</REMEDIATE> -->
|
True
|
|
non_process
|
| 0
|