**Dataset schema** (one entry per column: dtype and the viewer's min/max or class-count statistics)

| Column | Dtype | Min | Max |
|---|---|---|---|
| Unnamed: 0 | int64 | 0 | 832k |
| id | float64 | 2.49B | 32.1B |
| type | stringclasses | 1 value | |
| created_at | stringlengths | 19 | 19 |
| repo | stringlengths | 7 | 112 |
| repo_url | stringlengths | 36 | 141 |
| action | stringclasses | 3 values | |
| title | stringlengths | 1 | 744 |
| labels | stringlengths | 4 | 574 |
| body | stringlengths | 9 | 211k |
| index | stringclasses | 10 values | |
| text_combine | stringlengths | 96 | 211k |
| label | stringclasses | 2 values | |
| text | stringlengths | 96 | 188k |
| binary_label | int64 | 0 | 1 |
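The `stringclasses` and `stringlengths` statistics above can be recomputed directly from the raw frame. A minimal sketch with pandas, using a tiny synthetic frame (the two example rows below) rather than the real file, whose path is not given here:

```python
import pandas as pd

# Synthetic two-row frame mirroring a few columns of the schema above.
df = pd.DataFrame({
    "id": [3_343_578_566.0, 26_011_177_534.0],
    "action": ["closed", "opened"],
    "created_at": ["2015-11-15 16:31:32", "2022-12-21 02:00:07"],
    "binary_label": [1, 1],
})

# stringclasses: number of distinct values in a categorical string column.
n_actions = df["action"].nunique()

# stringlengths: min/max character length of a free-text column.
lengths = df["created_at"].str.len()
min_len, max_len = lengths.min(), lengths.max()

print(n_actions, min_len, max_len)
```

On the full dataset this is how `action` would show "3 values" and `created_at` a constant length of 19 (the fixed-width `YYYY-MM-DD HH:MM:SS` timestamp format).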
**Example row 1**

- **Unnamed: 0:** 880
- **id:** 3,343,578,566
- **type:** IssuesEvent
- **created_at:** 2015-11-15 16:31:32
- **repo:** pwittchen/ReactiveBeacons
- **repo_url:** https://api.github.com/repos/pwittchen/ReactiveBeacons
- **action:** closed
- **title:** Release 0.4.0
- **labels:** release process
- **body:** **Initial release notes**: - added `MacAddress` class with MAC address validation - added `macAddress` field to `Beacon` class - added `exceptName(final String... names)` method to `Filter` class - added `exceptMacAddress(final String... macs)` method to `Filter` class - added `hasMacAddress(final MacAddress... macs)` method to `Filter` class - added `exceptMacAddress(final MacAddress... macs)` method to `Filter` class **Things to do**: - [x] bump library version - [x] upload archives to Maven Central - [x] close and release artifact on Maven Central - [x] update JavaDoc on gh-pages - [x] update `CHANGELOG.md` after Maven Sync - [x] bump library version in `README.md` - [x] create new GitHub release
- **index:** 1.0
- **text_combine:** Release 0.4.0 - **Initial release notes**: - added `MacAddress` class with MAC address validation - added `macAddress` field to `Beacon` class - added `exceptName(final String... names)` method to `Filter` class - added `exceptMacAddress(final String... macs)` method to `Filter` class - added `hasMacAddress(final MacAddress... macs)` method to `Filter` class - added `exceptMacAddress(final MacAddress... macs)` method to `Filter` class **Things to do**: - [x] bump library version - [x] upload archives to Maven Central - [x] close and release artifact on Maven Central - [x] update JavaDoc on gh-pages - [x] update `CHANGELOG.md` after Maven Sync - [x] bump library version in `README.md` - [x] create new GitHub release
- **label:** process
- **text:** release initial release notes added macaddress class with mac address validation added macaddress field to beacon class added exceptname final string names method to filter class added exceptmacaddress final string macs method to filter class added hasmacaddress final macaddress macs method to filter class added exceptmacaddress final macaddress macs method to filter class things to do bump library version upload archives to maven central close and release artifact on maven central update javadoc on gh pages update changelog md after maven sync bump library version in readme md create new github release
- **binary_label:** 1
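The derived fields in the row above follow a visible pattern: `text_combine` is the title and body joined with `" - "`, and `text` looks like that combined string with URLs dropped, everything non-alphabetic stripped, and the result lowercased. A sketch of that preprocessing, reconstructed by eye from the example rows (the actual pipeline code is not part of this preview, so the exact rules are an assumption):

```python
import re

def combine(title: str, body: str) -> str:
    # text_combine appears to be "title - body" (assumption from the rows).
    return f"{title} - {body}"

def normalize(s: str) -> str:
    # The `text` field seems to drop URLs entirely, then keep only letters,
    # lowercased, with whitespace collapsed. Guessed rules, not confirmed.
    s = re.sub(r"https?://\S+", " ", s)        # remove URLs wholesale
    s = re.sub(r"[^A-Za-z]+", " ", s).lower()  # letters only, single spaces
    return s.strip()

print(normalize("Release 0.4.0 - added `MacAddress` class"))
# -> release added macaddress class
```

This reproduces, for example, how `**Arxiv link:** https://arxiv.org/abs/2212.10368` in a body collapses to just `arxiv link` in the `text` field of the next row.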
**Example row 2**

- **Unnamed: 0:** 19,654
- **id:** 26,011,177,534
- **type:** IssuesEvent
- **created_at:** 2022-12-21 02:00:07
- **repo:** lizhihao6/get-daily-arxiv-noti
- **repo_url:** https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
- **action:** opened
- **title:** New submissions for Wed, 21 Dec 22
- **labels:** event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
- **body:**
## Keyword: events ### Masked Event Modeling: Self-Supervised Pretraining for Event Cameras - **Authors:** Simon Klenk, David Bonello, Lukas Koestler, Daniel Cremers - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.10368 - **Pdf link:** https://arxiv.org/pdf/2212.10368 - **Abstract** Event cameras offer the capacity to asynchronously capture brightness changes with low latency, high temporal resolution, and high dynamic range. Deploying deep learning methods for classification or other tasks to these sensors typically requires large labeled datasets. Since the amount of labeled event data is tiny compared to the bulk of labeled RGB imagery, the progress of event-based vision has remained limited. To reduce the dependency on labeled event data, we introduce Masked Event Modeling (MEM), a self-supervised pretraining framework for events. Our method pretrains a neural network on unlabeled events, which can originate from any event camera recording. Subsequently, the pretrained model is finetuned on a downstream task leading to an overall better performance while requiring fewer labels. Our method outperforms the state-of-the-art on N-ImageNet, N-Cars, and N-Caltech101, increasing the object classification accuracy on N-ImageNet by 7.96%. We demonstrate that Masked Event Modeling is superior to RGB-based pretraining on a real world dataset. ## Keyword: event camera ### Masked Event Modeling: Self-Supervised Pretraining for Event Cameras - **Authors:** Simon Klenk, David Bonello, Lukas Koestler, Daniel Cremers - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.10368 - **Pdf link:** https://arxiv.org/pdf/2212.10368 - **Abstract** Event cameras offer the capacity to asynchronously capture brightness changes with low latency, high temporal resolution, and high dynamic range. 
Deploying deep learning methods for classification or other tasks to these sensors typically requires large labeled datasets. Since the amount of labeled event data is tiny compared to the bulk of labeled RGB imagery, the progress of event-based vision has remained limited. To reduce the dependency on labeled event data, we introduce Masked Event Modeling (MEM), a self-supervised pretraining framework for events. Our method pretrains a neural network on unlabeled events, which can originate from any event camera recording. Subsequently, the pretrained model is finetuned on a downstream task leading to an overall better performance while requiring fewer labels. Our method outperforms the state-of-the-art on N-ImageNet, N-Cars, and N-Caltech101, increasing the object classification accuracy on N-ImageNet by 7.96%. We demonstrate that Masked Event Modeling is superior to RGB-based pretraining on a real world dataset. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### MetaCLUE: Towards Comprehensive Visual Metaphors Research - **Authors:** Arjun R. Akula, Brendan Driscoll, Pradyumna Narayana, Soravit Changpinyo, Zhiwei Jia, Suyash Damle, Garima Pruthi, Sugato Basu, Leonidas Guibas, William T. Freeman, Yuanzhen Li, Varun Jampani - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.09898 - **Pdf link:** https://arxiv.org/pdf/2212.09898 - **Abstract** Creativity is an indispensable part of human cognition and also an inherent part of how we make sense of the world. Metaphorical abstraction is fundamental in communicating creative ideas through nuanced relationships between abstract concepts such as feelings. 
While computer vision benchmarks and approaches predominantly focus on understanding and generating literal interpretations of images, metaphorical comprehension of images remains relatively unexplored. Towards this goal, we introduce MetaCLUE, a set of vision tasks on visual metaphor. We also collect high-quality and rich metaphor annotations (abstract objects, concepts, relationships along with their corresponding object boxes) as there do not exist any datasets that facilitate the evaluation of these tasks. We perform a comprehensive analysis of state-of-the-art models in vision and language based on our annotations, highlighting strengths and weaknesses of current approaches in visual metaphor Classification, Localization, Understanding (retrieval, question answering, captioning) and gEneration (text-to-image synthesis) tasks. We hope this work provides a concrete step towards developing AI systems with human-like creative capabilities. ### Robust and Resource-efficient Machine Learning Aided Viewport Prediction in Virtual Reality - **Authors:** Yuang Jiang, Konstantinos Poularakis, Diego Kiedanski, Sastry Kompella, Leandros Tassiulas - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2212.09945 - **Pdf link:** https://arxiv.org/pdf/2212.09945 - **Abstract** 360-degree panoramic videos have gained considerable attention in recent years due to the rapid development of head-mounted displays (HMDs) and panoramic cameras. One major problem in streaming panoramic videos is that panoramic videos are much larger in size compared to traditional ones. Moreover, the user devices are often in a wireless environment, with limited battery, computation power, and bandwidth. To reduce resource consumption, researchers have proposed ways to predict the users' viewports so that only part of the entire video needs to be transmitted from the server. 
However, the robustness of such prediction approaches has been overlooked in the literature: it is usually assumed that only a few models, pre-trained on past users' experiences, are applied for prediction to all users. We observe that those pre-trained models can perform poorly for some users because they might have drastically different behaviors from the majority, and the pre-trained models cannot capture the features in unseen videos. In this work, we propose a novel meta learning based viewport prediction paradigm to alleviate the worst prediction performance and ensure the robustness of viewport prediction. This paradigm uses two machine learning models, where the first model predicts the viewing direction, and the second model predicts the minimum video prefetch size that can include the actual viewport. We first train two meta models so that they are sensitive to new training data, and then quickly adapt them to users while they are watching the videos. Evaluation results reveal that the meta models can adapt quickly to each user, and can significantly increase the prediction accuracy, especially for the worst-performing predictions. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Content Adaptive Latents and Decoder for Neural Image Compression - **Authors:** Guanbo Pan, Guo Lu, Zhihao Hu, Dong Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.10132 - **Pdf link:** https://arxiv.org/pdf/2212.10132 - **Abstract** In recent years, neural image compression (NIC) algorithms have shown powerful coding performance. However, most of them are not adaptive to the image content. Although several content adaptive methods have been proposed by updating the encoder-side components, the adaptability of both latents and the decoder is not well exploited. 
In this work, we propose a new NIC framework that improves the content adaptability on both latents and the decoder. Specifically, to remove redundancy in the latents, our content adaptive channel dropping (CACD) method automatically selects the optimal quality levels for the latents spatially and drops the redundant channels. Additionally, we propose the content adaptive feature transformation (CAFT) method to improve decoder-side content adaptability by extracting the characteristic information of the image content, which is then used to transform the features in the decoder side. Experimental results demonstrate that our proposed methods with the encoder-side updating algorithm achieve the state-of-the-art performance. ### CSMPQ:Class Separability Based Mixed-Precision Quantization - **Authors:** Mingkai Wang, Taisong Jin, Miaohui Zhang, Zhengtao Yu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.10220 - **Pdf link:** https://arxiv.org/pdf/2212.10220 - **Abstract** Mixed-precision quantization has received increasing attention for its capability of reducing the computational burden and speeding up the inference time. Existing methods usually focus on the sensitivity of different network layers, which requires a time-consuming search or training process. To this end, a novel mixed-precision quantization method, termed CSMPQ, is proposed. Specifically, the TF-IDF metric that is widely used in natural language processing (NLP) is introduced to measure the class separability of layer-wise feature maps. Furthermore, a linear programming problem is designed to derive the optimal bit configuration for each layer. Without any iterative process, the proposed CSMPQ achieves better compression trade-offs than the state-of-the-art quantization methods. Specifically, CSMPQ achieves 73.03$\%$ Top-1 acc on ResNet-18 with only 59G BOPs for QAT, and 71.30$\%$ top-1 acc with only 1.5Mb on MobileNetV2 for PTQ. 
## Keyword: RAW ### Eff-3DPSeg: 3D organ-level plant shoot segmentation using annotation-efficient point clouds - **Authors:** Liyi Luo, Xintong Jiang, Yu Yang, Eugene Roy Antony Samy, Mark Lefsrud, Valerio Hoyos-Villegas, Shangpeng Sun - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2212.10263 - **Pdf link:** https://arxiv.org/pdf/2212.10263 - **Abstract** Reliable and automated 3D plant shoot segmentation is a core prerequisite for the extraction of plant phenotypic traits at the organ level. Combining deep learning and point clouds can provide effective ways to address the challenge. However, fully supervised deep learning methods require datasets to be point-wise annotated, which is extremely expensive and time-consuming. In our work, we proposed a novel weakly supervised framework, Eff-3DPSeg, for 3D plant shoot segmentation. First, high-resolution point clouds of soybean were reconstructed using a low-cost photogrammetry system, and the Meshlab-based Plant Annotator was developed for plant point cloud annotation. Second, a weakly-supervised deep learning method was proposed for plant organ segmentation. The method contained: (1) Pretraining a self-supervised network using Viewpoint Bottleneck loss to learn meaningful intrinsic structure representation from the raw point clouds; (2) Fine-tuning the pre-trained model with about only 0.5% points being annotated to implement plant organ segmentation. After, three phenotypic traits (stem diameter, leaf width, and leaf length) were extracted. To test the generality of the proposed method, the public dataset Pheno4D was included in this study. Experimental results showed that the weakly-supervised network obtained similar segmentation performance compared with the fully-supervised setting. 
Our method achieved 95.1%, 96.6%, 95.8% and 92.2% in the Precision, Recall, F1-score, and mIoU for stem leaf segmentation and 53%, 62.8% and 70.3% in the AP, AP@25, and AP@50 for leaf instance segmentation. This study provides an effective way for characterizing 3D plant architecture, which will become useful for plant breeders to enhance selection processes. ### Fully and Weakly Supervised Referring Expression Segmentation with End-to-End Learning - **Authors:** Hui Li, Mingjie Sun, Jimin Xiao, Eng Gee Lim, Yao Zhao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.10278 - **Pdf link:** https://arxiv.org/pdf/2212.10278 - **Abstract** Referring Expression Segmentation (RES), which is aimed at localizing and segmenting the target according to the given language expression, has drawn increasing attention. Existing methods jointly consider the localization and segmentation steps, which rely on the fused visual and linguistic features for both steps. We argue that the conflict between the purpose of identifying an object and generating a mask limits the RES performance. To solve this problem, we propose a parallel position-kernel-segmentation pipeline to better isolate and then interact the localization and segmentation steps. In our pipeline, linguistic information will not directly contaminate the visual feature for segmentation. Specifically, the localization step localizes the target object in the image based on the referring expression, and then the visual kernel obtained from the localization step guides the segmentation step. This pipeline also enables us to train RES in a weakly-supervised way, where the pixel-level segmentation labels are replaced by click annotations on center and corner points. The position head is fully-supervised and trained with the click annotations as supervision, and the segmentation head is trained with weakly-supervised segmentation losses. 
To validate our framework on a weakly-supervised setting, we annotated three RES benchmark datasets (RefCOCO, RefCOCO+ and RefCOCOg) with click annotations.Our method is simple but surprisingly effective, outperforming all previous state-of-the-art RES methods on fully- and weakly-supervised settings by a large margin. The benchmark code and datasets will be released. ### Internal Diverse Image Completion - **Authors:** Noa Alkobi, Tamar Rott Shaham, Tomer Michaeli - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.10280 - **Pdf link:** https://arxiv.org/pdf/2212.10280 - **Abstract** Image completion is widely used in photo restoration and editing applications, e.g. for object removal. Recently, there has been a surge of research on generating diverse completions for missing regions. However, existing methods require large training sets from a specific domain of interest, and often fail on general-content images. In this paper, we propose a diverse completion method that does not require a training set and can thus treat arbitrary images from any domain. Our internal diverse completion (IDC) approach draws inspiration from recent single-image generative models that are trained on multiple scales of a single image, adapting them to the extreme setting in which only a small portion of the image is available for training. We illustrate the strength of IDC on several datasets, using both user studies and quantitative comparisons. ## Keyword: raw image There is no result
- **index:** 2.0
- **text_combine:** New submissions for Wed, 21 Dec 22 - (followed by the full body above, repeated verbatim)
process
new submissions for wed dec keyword events masked event modeling self supervised pretraining for event cameras authors simon klenk david bonello lukas koestler daniel cremers subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract event cameras offer the capacity to asynchronously capture brightness changes with low latency high temporal resolution and high dynamic range deploying deep learning methods for classification or other tasks to these sensors typically requires large labeled datasets since the amount of labeled event data is tiny compared to the bulk of labeled rgb imagery the progress of event based vision has remained limited to reduce the dependency on labeled event data we introduce masked event modeling mem a self supervised pretraining framework for events our method pretrains a neural network on unlabeled events which can originate from any event camera recording subsequently the pretrained model is finetuned on a downstream task leading to an overall better performance while requiring fewer labels our method outperforms the state of the art on n imagenet n cars and n increasing the object classification accuracy on n imagenet by we demonstrate that masked event modeling is superior to rgb based pretraining on a real world dataset keyword event camera masked event modeling self supervised pretraining for event cameras authors simon klenk david bonello lukas koestler daniel cremers subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract event cameras offer the capacity to asynchronously capture brightness changes with low latency high temporal resolution and high dynamic range deploying deep learning methods for classification or other tasks to these sensors typically requires large labeled datasets since the amount of labeled event data is tiny compared to the bulk of labeled rgb imagery the progress of event based vision has remained limited to reduce the dependency on labeled event data we 
introduce masked event modeling mem a self supervised pretraining framework for events our method pretrains a neural network on unlabeled events which can originate from any event camera recording subsequently the pretrained model is finetuned on a downstream task leading to an overall better performance while requiring fewer labels our method outperforms the state of the art on n imagenet n cars and n increasing the object classification accuracy on n imagenet by we demonstrate that masked event modeling is superior to rgb based pretraining on a real world dataset keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp metaclue towards comprehensive visual metaphors research authors arjun r akula brendan driscoll pradyumna narayana soravit changpinyo zhiwei jia suyash damle garima pruthi sugato basu leonidas guibas william t freeman yuanzhen li varun jampani subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract creativity is an indispensable part of human cognition and also an inherent part of how we make sense of the world metaphorical abstraction is fundamental in communicating creative ideas through nuanced relationships between abstract concepts such as feelings while computer vision benchmarks and approaches predominantly focus on understanding and generating literal interpretations of images metaphorical comprehension of images remains relatively unexplored towards this goal we introduce metaclue a set of vision tasks on visual metaphor we also collect high quality and rich metaphor annotations abstract objects concepts relationships along with their corresponding object boxes as there do not exist any datasets that facilitate the evaluation of these tasks we perform a comprehensive analysis of state of the art models in vision and language based on our annotations highlighting strengths and weaknesses of current 
approaches in visual metaphor classification localization understanding retrieval question answering captioning and generation text to image synthesis tasks we hope this work provides a concrete step towards developing ai systems with human like creative capabilities robust and resource efficient machine learning aided viewport prediction in virtual reality authors yuang jiang konstantinos poularakis diego kiedanski sastry kompella leandros tassiulas subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract degree panoramic videos have gained considerable attention in recent years due to the rapid development of head mounted displays hmds and panoramic cameras one major problem in streaming panoramic videos is that panoramic videos are much larger in size compared to traditional ones moreover the user devices are often in a wireless environment with limited battery computation power and bandwidth to reduce resource consumption researchers have proposed ways to predict the users viewports so that only part of the entire video needs to be transmitted from the server however the robustness of such prediction approaches has been overlooked in the literature it is usually assumed that only a few models pre trained on past users experiences are applied for prediction to all users we observe that those pre trained models can perform poorly for some users because they might have drastically different behaviors from the majority and the pre trained models cannot capture the features in unseen videos in this work we propose a novel meta learning based viewport prediction paradigm to alleviate the worst prediction performance and ensure the robustness of viewport prediction this paradigm uses two machine learning models where the first model predicts the viewing direction and the second model predicts the minimum video prefetch size that can include the actual viewport we first train two meta models so that they are 
sensitive to new training data and then quickly adapt them to users while they are watching the videos evaluation results reveal that the meta models can adapt quickly to each user and can significantly increase the prediction accuracy especially for the worst performing predictions keyword image signal processing there is no result keyword image signal process there is no result keyword compression content adaptive latents and decoder for neural image compression authors guanbo pan guo lu zhihao hu dong xu subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract in recent years neural image compression nic algorithms have shown powerful coding performance however most of them are not adaptive to the image content although several content adaptive methods have been proposed by updating the encoder side components the adaptability of both latents and the decoder is not well exploited in this work we propose a new nic framework that improves the content adaptability on both latents and the decoder specifically to remove redundancy in the latents our content adaptive channel dropping cacd method automatically selects the optimal quality levels for the latents spatially and drops the redundant channels additionally we propose the content adaptive feature transformation caft method to improve decoder side content adaptability by extracting the characteristic information of the image content which is then used to transform the features in the decoder side experimental results demonstrate that our proposed methods with the encoder side updating algorithm achieve the state of the art performance csmpq class separability based mixed precision quantization authors mingkai wang taisong jin miaohui zhang zhengtao yu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract mixed precision quantization has received increasing attention for its capability of reducing the computational burden 
and speeding up the inference time existing methods usually focus on the sensitivity of different network layers which requires a time consuming search or training process to this end a novel mixed precision quantization method termed csmpq is proposed specifically the tf idf metric that is widely used in natural language processing nlp is introduced to measure the class separability of layer wise feature maps furthermore a linear programming problem is designed to derive the optimal bit configuration for each layer without any iterative process the proposed csmpq achieves better compression trade offs than the state of the art quantization methods specifically csmpq achieves top acc on resnet with only bops for qat and top acc with only on for ptq keyword raw eff organ level plant shoot segmentation using annotation efficient point clouds authors liyi luo xintong jiang yu yang eugene roy antony samy mark lefsrud valerio hoyos villegas shangpeng sun subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract reliable and automated plant shoot segmentation is a core prerequisite for the extraction of plant phenotypic traits at the organ level combining deep learning and point clouds can provide effective ways to address the challenge however fully supervised deep learning methods require datasets to be point wise annotated which is extremely expensive and time consuming in our work we proposed a novel weakly supervised framework eff for plant shoot segmentation first high resolution point clouds of soybean were reconstructed using a low cost photogrammetry system and the meshlab based plant annotator was developed for plant point cloud annotation second a weakly supervised deep learning method was proposed for plant organ segmentation the method contained pretraining a self supervised network using viewpoint bottleneck loss to learn meaningful intrinsic structure representation from the raw point clouds fine 
tuning the pre trained model with about only points being annotated to implement plant organ segmentation after three phenotypic traits stem diameter leaf width and leaf length were extracted to test the generality of the proposed method the public dataset was included in this study experimental results showed that the weakly supervised network obtained similar segmentation performance compared with the fully supervised setting our method achieved and in the precision recall score and miou for stem leaf segmentation and and in the ap ap and ap for leaf instance segmentation this study provides an effective way for characterizing plant architecture which will become useful for plant breeders to enhance selection processes fully and weakly supervised referring expression segmentation with end to end learning authors hui li mingjie sun jimin xiao eng gee lim yao zhao subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract referring expression segmentation res which is aimed at localizing and segmenting the target according to the given language expression has drawn increasing attention existing methods jointly consider the localization and segmentation steps which rely on the fused visual and linguistic features for both steps we argue that the conflict between the purpose of identifying an object and generating a mask limits the res performance to solve this problem we propose a parallel position kernel segmentation pipeline to better isolate and then interact the localization and segmentation steps in our pipeline linguistic information will not directly contaminate the visual feature for segmentation specifically the localization step localizes the target object in the image based on the referring expression and then the visual kernel obtained from the localization step guides the segmentation step this pipeline also enables us to train res in a weakly supervised way where the pixel level segmentation labels are replaced by click 
annotations on center and corner points the position head is fully supervised and trained with the click annotations as supervision and the segmentation head is trained with weakly supervised segmentation losses to validate our framework on a weakly supervised setting we annotated three res benchmark datasets refcoco refcoco and refcocog with click annotations our method is simple but surprisingly effective outperforming all previous state of the art res methods on fully and weakly supervised settings by a large margin the benchmark code and datasets will be released internal diverse image completion authors noa alkobi tamar rott shaham tomer michaeli subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract image completion is widely used in photo restoration and editing applications e g for object removal recently there has been a surge of research on generating diverse completions for missing regions however existing methods require large training sets from a specific domain of interest and often fail on general content images in this paper we propose a diverse completion method that does not require a training set and can thus treat arbitrary images from any domain our internal diverse completion idc approach draws inspiration from recent single image generative models that are trained on multiple scales of a single image adapting them to the extreme setting in which only a small portion of the image is available for training we illustrate the strength of idc on several datasets using both user studies and quantitative comparisons keyword raw image there is no result
1
17,823
23,753,569,026
IssuesEvent
2022-08-31 23:27:46
googleapis/repo-automation-bots
https://api.github.com/repos/googleapis/repo-automation-bots
opened
Pinned dependencies
type: process priority: p2
Pinning some dependencies to older major versions. gcf-utils: - [ ] google-auth-library: v7 - [ ] yargs: v16 - [ ] @types/yargs: v16 - [ ] into-stream: v6 cron-utils: - [ ] google-auth-library: v7
1.0
Pinned dependencies - Pinning some dependencies to older major versions. gcf-utils: - [ ] google-auth-library: v7 - [ ] yargs: v16 - [ ] @types/yargs: v16 - [ ] into-stream: v6 cron-utils: - [ ] google-auth-library: v7
process
pinned dependencies pinning some dependencies to older major versions gcf utils google auth library yargs types yargs into stream cron utils google auth library
1
68,315
14,919,718,988
IssuesEvent
2021-01-23 01:08:30
jgeraigery/lando
https://api.github.com/repos/jgeraigery/lando
opened
CVE-2019-10768 (High) detected in angular-1.4.2.min.js
security vulnerability
## CVE-2019-10768 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.2.min.js</b></p></summary> <p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js</a></p> <p>Path to dependency file: lando/node_modules/autocomplete.js/test/playground_angular.html</p> <p>Path to vulnerable library: lando/node_modules/autocomplete.js/test/playground_angular.html,lando/node_modules/autocomplete.js/examples/basic_angular.html</p> <p> Dependency Hierarchy: - :x: **angular-1.4.2.min.js** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In AngularJS before 1.7.9 the function `merge()` could be tricked into adding or modifying properties of `Object.prototype` using a `__proto__` payload. 
<p>Publish Date: 2019-11-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10768>CVE-2019-10768</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10768">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10768</a></p> <p>Release Date: 2019-11-19</p> <p>Fix Resolution: v1.7.9</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"angular.js","packageVersion":"1.4.2","isTransitiveDependency":false,"dependencyTree":"angular.js:1.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.7.9"}],"vulnerabilityIdentifier":"CVE-2019-10768","vulnerabilityDetails":"In AngularJS before 1.7.9 the function `merge()` could be tricked into adding or modifying properties of `Object.prototype` using a `__proto__` payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10768","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-10768 (High) detected in angular-1.4.2.min.js - ## CVE-2019-10768 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.2.min.js</b></p></summary> <p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js</a></p> <p>Path to dependency file: lando/node_modules/autocomplete.js/test/playground_angular.html</p> <p>Path to vulnerable library: lando/node_modules/autocomplete.js/test/playground_angular.html,lando/node_modules/autocomplete.js/examples/basic_angular.html</p> <p> Dependency Hierarchy: - :x: **angular-1.4.2.min.js** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In AngularJS before 1.7.9 the function `merge()` could be tricked into adding or modifying properties of `Object.prototype` using a `__proto__` payload. 
<p>Publish Date: 2019-11-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10768>CVE-2019-10768</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10768">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10768</a></p> <p>Release Date: 2019-11-19</p> <p>Fix Resolution: v1.7.9</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"angular.js","packageVersion":"1.4.2","isTransitiveDependency":false,"dependencyTree":"angular.js:1.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.7.9"}],"vulnerabilityIdentifier":"CVE-2019-10768","vulnerabilityDetails":"In AngularJS before 1.7.9 the function `merge()` could be tricked into adding or modifying properties of `Object.prototype` using a `__proto__` payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10768","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in angular min js cve high severity vulnerability vulnerable library angular min js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to dependency file lando node modules autocomplete js test playground angular html path to vulnerable library lando node modules autocomplete js test playground angular html lando node modules autocomplete js examples basic angular html dependency hierarchy x angular min js vulnerable library found in base branch master vulnerability details in angularjs before the function merge could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in angularjs before the function merge could be tricked into adding or modifying properties of object prototype using a proto payload vulnerabilityurl
0
103,425
16,602,511,130
IssuesEvent
2021-06-01 21:42:19
gms-ws-sandbox/nibrs
https://api.github.com/repos/gms-ws-sandbox/nibrs
opened
CVE-2020-13954 (Medium) detected in multiple libraries
security vulnerability
## CVE-2020-13954 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>cxf-rt-transports-http-3.0.16.jar</b>, <b>cxf-rt-transports-http-3.2.5.jar</b>, <b>cxf-rt-transports-http-3.2.1.jar</b></p></summary> <p> <details><summary><b>cxf-rt-transports-http-3.0.16.jar</b></p></summary> <p>Apache CXF Runtime HTTP Transport</p> <p>Library home page: <a href="http://cxf.apache.org">http://cxf.apache.org</a></p> <p>Path to dependency file: nibrs/tools/nibrs-validation/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar</p> <p> Dependency Hierarchy: - :x: **cxf-rt-transports-http-3.0.16.jar** (Vulnerable Library) </details> 
<details><summary><b>cxf-rt-transports-http-3.2.5.jar</b></p></summary> <p>Apache CXF Runtime HTTP Transport</p> <p>Library home page: <a href="http://cxf.apache.org">http://cxf.apache.org</a></p> <p>Path to dependency file: nibrs/tools/nibrs-route/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.2.5/cxf-rt-transports-http-3.2.5.jar,nibrs/tools/nibrs-route/target/nibrs-route-1.0.0/WEB-INF/lib/cxf-rt-transports-http-3.2.5.jar</p> <p> Dependency Hierarchy: - :x: **cxf-rt-transports-http-3.2.5.jar** (Vulnerable Library) </details> <details><summary><b>cxf-rt-transports-http-3.2.1.jar</b></p></summary> <p>Apache CXF Runtime HTTP Transport</p> <p>Library home page: <a href="http://cxf.apache.org">http://cxf.apache.org</a></p> <p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.2.1/cxf-rt-transports-http-3.2.1.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/cxf-rt-transports-http-3.2.1.jar</p> <p> Dependency Hierarchy: - :x: **cxf-rt-transports-http-3.2.1.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/dba6b0930aa319c568021490e9259f5cae89b6c5">dba6b0930aa319c568021490e9259f5cae89b6c5</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> By default, Apache CXF creates a /services page containing a listing of the available endpoint names and addresses. This webpage is vulnerable to a reflected Cross-Site Scripting (XSS) attack via the styleSheetPath, which allows a malicious actor to inject javascript into the web page. This vulnerability affects all versions of Apache CXF prior to 3.4.1 and 3.3.8. 
Please note that this is a separate issue to CVE-2019-17573. <p>Publish Date: 2020-11-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13954>CVE-2020-13954</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://cxf.apache.org/security-advisories.data/CVE-2020-13954.txt.asc?version=1&modificationDate=1605183670659&api=v2">http://cxf.apache.org/security-advisories.data/CVE-2020-13954.txt.asc?version=1&modificationDate=1605183670659&api=v2</a></p> <p>Release Date: 2020-11-12</p> <p>Fix Resolution: org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.cxf","packageName":"cxf-rt-transports-http","packageVersion":"3.0.16","packageFilePaths":["/tools/nibrs-validation/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-validate-common/pom.xml","/tools/nibrs-summary-report-common/pom.xml","/tools/nibrs-common/pom.xml","/web/nibrs-web/pom.xml","/tools/nibrs-flatfile/pom.xml","/tools/nibrs-staging-data/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.cxf:cxf-rt-transports-http:3.0.16","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1"},{"packageType":"Java","groupId":"org.apache.cxf","packageName":"cxf-rt-transports-http","packageVersion":"3.2.5","packageFilePaths":["/tools/nibrs-route/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.cxf:cxf-rt-transports-http:3.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1"},{"packageType":"Java","groupId":"org.apache.cxf","packageName":"cxf-rt-transports-http","packageVersion":"3.2.1","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.cxf:cxf-rt-transports-http:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-13954","vulnerabilityDetails":"By default, Apache CXF creates a /services page containing a listing of the available endpoint names and addresses. 
This webpage is vulnerable to a reflected Cross-Site Scripting (XSS) attack via the styleSheetPath, which allows a malicious actor to inject javascript into the web page. This vulnerability affects all versions of Apache CXF prior to 3.4.1 and 3.3.8. Please note that this is a separate issue to CVE-2019-17573.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13954","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-13954 (Medium) detected in multiple libraries - ## CVE-2020-13954 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>cxf-rt-transports-http-3.0.16.jar</b>, <b>cxf-rt-transports-http-3.2.5.jar</b>, <b>cxf-rt-transports-http-3.2.1.jar</b></p></summary> <p> <details><summary><b>cxf-rt-transports-http-3.0.16.jar</b></p></summary> <p>Apache CXF Runtime HTTP Transport</p> <p>Library home page: <a href="http://cxf.apache.org">http://cxf.apache.org</a></p> <p>Path to dependency file: nibrs/tools/nibrs-validation/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar,/home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.0.16/cxf-rt-transports-http-3.0.16.jar</p> <p> Dependency Hierarchy: - :x: **cxf-rt-transports-http-3.0.16.jar** (Vulnerable Library) 
</details> <details><summary><b>cxf-rt-transports-http-3.2.5.jar</b></p></summary> <p>Apache CXF Runtime HTTP Transport</p> <p>Library home page: <a href="http://cxf.apache.org">http://cxf.apache.org</a></p> <p>Path to dependency file: nibrs/tools/nibrs-route/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.2.5/cxf-rt-transports-http-3.2.5.jar,nibrs/tools/nibrs-route/target/nibrs-route-1.0.0/WEB-INF/lib/cxf-rt-transports-http-3.2.5.jar</p> <p> Dependency Hierarchy: - :x: **cxf-rt-transports-http-3.2.5.jar** (Vulnerable Library) </details> <details><summary><b>cxf-rt-transports-http-3.2.1.jar</b></p></summary> <p>Apache CXF Runtime HTTP Transport</p> <p>Library home page: <a href="http://cxf.apache.org">http://cxf.apache.org</a></p> <p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/cxf/cxf-rt-transports-http/3.2.1/cxf-rt-transports-http-3.2.1.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/cxf-rt-transports-http-3.2.1.jar</p> <p> Dependency Hierarchy: - :x: **cxf-rt-transports-http-3.2.1.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/dba6b0930aa319c568021490e9259f5cae89b6c5">dba6b0930aa319c568021490e9259f5cae89b6c5</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> By default, Apache CXF creates a /services page containing a listing of the available endpoint names and addresses. This webpage is vulnerable to a reflected Cross-Site Scripting (XSS) attack via the styleSheetPath, which allows a malicious actor to inject javascript into the web page. This vulnerability affects all versions of Apache CXF prior to 3.4.1 and 3.3.8. 
Please note that this is a separate issue to CVE-2019-17573. <p>Publish Date: 2020-11-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13954>CVE-2020-13954</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://cxf.apache.org/security-advisories.data/CVE-2020-13954.txt.asc?version=1&modificationDate=1605183670659&api=v2">http://cxf.apache.org/security-advisories.data/CVE-2020-13954.txt.asc?version=1&modificationDate=1605183670659&api=v2</a></p> <p>Release Date: 2020-11-12</p> <p>Fix Resolution: org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.cxf","packageName":"cxf-rt-transports-http","packageVersion":"3.0.16","packageFilePaths":["/tools/nibrs-validation/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-validate-common/pom.xml","/tools/nibrs-summary-report-common/pom.xml","/tools/nibrs-common/pom.xml","/web/nibrs-web/pom.xml","/tools/nibrs-flatfile/pom.xml","/tools/nibrs-staging-data/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.cxf:cxf-rt-transports-http:3.0.16","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1"},{"packageType":"Java","groupId":"org.apache.cxf","packageName":"cxf-rt-transports-http","packageVersion":"3.2.5","packageFilePaths":["/tools/nibrs-route/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.cxf:cxf-rt-transports-http:3.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1"},{"packageType":"Java","groupId":"org.apache.cxf","packageName":"cxf-rt-transports-http","packageVersion":"3.2.1","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.cxf:cxf-rt-transports-http:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.cxf:cxf-rt-transports-http:3.3.8, org.apache.cxf:cxf-rt-transports-http:3.4.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-13954","vulnerabilityDetails":"By default, Apache CXF creates a /services page containing a listing of the available endpoint names and addresses. 
This webpage is vulnerable to a reflected Cross-Site Scripting (XSS) attack via the styleSheetPath, which allows a malicious actor to inject javascript into the web page. This vulnerability affects all versions of Apache CXF prior to 3.4.1 and 3.3.8. Please note that this is a separate issue to CVE-2019-17573.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13954","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries cxf rt transports http jar cxf rt transports http jar cxf rt transports http jar cxf rt transports http jar apache cxf runtime http transport library home page a href path to dependency file nibrs tools nibrs validation pom xml path to vulnerable library home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar nibrs web nibrs web target nibrs web web inf lib cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar dependency hierarchy x cxf rt transports http jar vulnerable library cxf rt transports http jar apache cxf runtime http transport library home page a href path to dependency file nibrs tools nibrs route pom xml path to vulnerable library home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar nibrs tools nibrs route target nibrs route web inf lib cxf rt transports http jar dependency hierarchy x cxf rt transports http jar vulnerable library cxf rt transports http jar apache cxf runtime http transport library home page a href path to dependency file nibrs tools nibrs fbi service pom xml path to 
vulnerable library home wss scanner repository org apache cxf cxf rt transports http cxf rt transports http jar nibrs tools nibrs fbi service target nibrs fbi service web inf lib cxf rt transports http jar dependency hierarchy x cxf rt transports http jar vulnerable library found in head commit a href found in base branch master vulnerability details by default apache cxf creates a services page containing a listing of the available endpoint names and addresses this webpage is vulnerable to a reflected cross site scripting xss attack via the stylesheetpath which allows a malicious actor to inject javascript into the web page this vulnerability affects all versions of apache cxf prior to and please note that this is a separate issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache cxf cxf rt transports http org apache cxf cxf rt transports http isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache cxf cxf rt transports http isminimumfixversionavailable true minimumfixversion org apache cxf cxf rt transports http org apache cxf cxf rt transports http packagetype java groupid org apache cxf packagename cxf rt transports http packageversion packagefilepaths istransitivedependency false dependencytree org apache cxf cxf rt transports http isminimumfixversionavailable true minimumfixversion org apache cxf cxf rt transports http org apache cxf cxf rt transports http packagetype java groupid org apache cxf packagename cxf rt transports http packageversion packagefilepaths istransitivedependency false dependencytree org apache cxf 
cxf rt transports http isminimumfixversionavailable true minimumfixversion org apache cxf cxf rt transports http org apache cxf cxf rt transports http basebranches vulnerabilityidentifier cve vulnerabilitydetails by default apache cxf creates a services page containing a listing of the available endpoint names and addresses this webpage is vulnerable to a reflected cross site scripting xss attack via the stylesheetpath which allows a malicious actor to inject javascript into the web page this vulnerability affects all versions of apache cxf prior to and please note that this is a separate issue to cve vulnerabilityurl
0
3,279
6,366,981,275
IssuesEvent
2017-08-01 04:00:57
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Discuss planning a long-term renaming of `process.nextTick()`
discuss process
This continues to confuse people and we should probably do something about it since the current behavior is unlikely to change back and because the expected behavior is now actually found in `setImmediate()`. Doing this would be a very long-term thing. **`process.nextTick()` has very significant ecosystem usage.** ## I'm thinking we would create a differently named alias but not formally document it for an LTS cycle or so. That way, when we document it people can switch over in a way that won't hurt older versions or require shims. It's not exactly the "cleanest" way forward but there is little else we can do. **But**, I think we **should do something** about it for the future. ## I'm not sure what we would call it though. Maybe something like `callBeforeIO`? Idk yet. @nodejs/collaborators
1.0
Discuss planning a long-term renaming of `process.nextTick()` - This continues to confuse people and we should probably do something about it since the current behavior is unlikely to change back and because the expected behavior is now actually found in `setImmediate()`. Doing this would be a very long-term thing. **`process.nextTick()` has very significant ecosystem usage.** ## I'm thinking we would create a differently named alias but not formally document it for an LTS cycle or so. That way, when we document it people can switch over in a way that won't hurt older versions or require shims. It's not exactly the "cleanest" way forward but there is little else we can do. **But**, I think we **should do something** about it for the future. ## I'm not sure what we would call it though. Maybe something like `callBeforeIO`? Idk yet. @nodejs/collaborators
process
discuss planning a long term renaming of process nexttick this continues to confuse people and we should probably do something about it since the current behavior is unlikely to change back and because the expected behavior is now actually found in setimmediate doing this would be a very long term thing process nexttick has very significant ecosystem usage i m thinking we would created a differently named alias but not formally document it for an lts cycle or so that way when we document it people can switch over in a way that won t hurt older versions or require shims it s not exactly the cleanest way forward but there is little else we can do but i think we should do something about it for the future i m not sure what we would call it though maybe something like callbeforeio idk yet nodejs collaborators
1
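The ordering confusion discussed in the record above can be illustrated with a short sketch (a hypothetical example, assuming a Node.js runtime; not code from the issue itself). It shows that `process.nextTick()` fires before the event loop continues, while the behavior its name suggests actually lives in `setImmediate()`:

```javascript
// Hypothetical sketch (assumes Node.js): process.nextTick() runs BEFORE the
// event loop proceeds, while setImmediate() runs on a later turn of the loop.
const order = [];

setImmediate(() => order.push("setImmediate")); // check phase, a later loop turn
process.nextTick(() => order.push("nextTick")); // drained before the loop continues
order.push("sync");                             // plain synchronous code runs first

// A setImmediate registered after the one above fires after it, so the
// ordering is complete by the time this callback runs.
setImmediate(() => console.log(order.join(" -> "))); // sync -> nextTick -> setImmediate
```

The counter-intuitive part is exactly what the issue complains about: the callback named "next tick" runs before the loop ticks at all.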
73
2,524,611,204
IssuesEvent
2015-01-20 18:55:56
MozillaFoundation/plan
https://api.github.com/repos/MozillaFoundation/plan
closed
Move PTO calendar to mofo google apps?
p2 process
Sorry if this is the wrong place to file this. We previously had a PTO google calendar that I find pretty useful, are we still using that? If so, can we move it to mofo?
1.0
Move PTO calendar to mofo google apps? - Sorry if this is the wrong place to file this. We previously had a PTO google calendar that I find pretty useful, are we still using that? If so, can we move it to mofo?
process
move pto calendar to mofo google apps sorry if this is the wrong place to file this we previously had a pto google calendar that i find pretty useful are we still using that if so can we move it to mofo
1
13,724
16,486,839,660
IssuesEvent
2021-05-24 19:20:10
googleapis/nodejs-api-gateway
https://api.github.com/repos/googleapis/nodejs-api-gateway
closed
GA release of @google-cloud/api-gateway
api: apigateway type: process
Package name: **@google-cloud/api-gateway** Current release: **beta** Proposed release: **GA** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [ ] 28 days elapsed since last beta release with new API surface - [ ] Server API is GA - [ ] Package API is stable, and we can commit to backward compatibility - [ ] All dependencies are GA ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
1.0
GA release of @google-cloud/api-gateway - Package name: **@google-cloud/api-gateway** Current release: **beta** Proposed release: **GA** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [ ] 28 days elapsed since last beta release with new API surface - [ ] Server API is GA - [ ] Package API is stable, and we can commit to backward compatibility - [ ] All dependencies are GA ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
process
ga release of google cloud api gateway package name google cloud api gateway current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
1
1,889
4,716,744,470
IssuesEvent
2016-10-16 07:31:22
onlylemi/notes
https://api.github.com/repos/onlylemi/notes
closed
AndroidCapture For Processing(Processing Lib)
Processing 开源
A lib for interaction between Processing and Android. * URL: https://onlylemi.github.io/processing-android-capture/ * Releases: https://github.com/onlylemi/processing-android-capture/releases * Github Repo: https://github.com/onlylemi/processing-android-capture * Chinese introduction: https://onlylemi.github.io/projects/processing-android-capture/ * Processing: https://processing.org/reference/libraries/#data
1.0
AndroidCapture For Processing(Processing Lib) - A lib for interaction between Processing and Android. * URL: https://onlylemi.github.io/processing-android-capture/ * Releases: https://github.com/onlylemi/processing-android-capture/releases * Github Repo: https://github.com/onlylemi/processing-android-capture * Chinese introduction: https://onlylemi.github.io/projects/processing-android-capture/ * Processing: https://processing.org/reference/libraries/#data
process
androidcapture for processing(processing lib) a lib for interaction between processing and android url: releases: github repo: chinese introduction: processing:
1
13,138
15,556,886,828
IssuesEvent
2021-03-16 08:26:23
ethereumclassic/ECIPs
https://api.github.com/repos/ethereumclassic/ECIPs
closed
Please update ECIP Editor's to `Write Access`
meta:3 process
For some reason my permissions aren't configured correctly. Will you resolve this issue, other ECIP editors? Thanks in advance. Current ECIP editors: Mr. Meows D. Bits ( @meowsbits ) Talha Cross ( @soc1c ) Zachary Belford ( @BelfordZ ) Felipe Faraggi ( @faraggi ) r0n1n ( @gitr0n1n ) Stevan Lohja ( @stevanlohja ) https://ecips.ethereumclassic.org/ECIPs/ecip-1000 I think this needs updating too: https://github.com/orgs/ethereumclassic/teams/ecip-editors @ethereumclassic/ecip-editors
1.0
Please update ECIP Editor's to `Write Access` - For some reason my permissions aren't configured correctly. Will you resolve this issue, other ECIP editors? Thanks in advance. Current ECIP editors: Mr. Meows D. Bits ( @meowsbits ) Talha Cross ( @soc1c ) Zachary Belford ( @BelfordZ ) Felipe Faraggi ( @faraggi ) r0n1n ( @gitr0n1n ) Stevan Lohja ( @stevanlohja ) https://ecips.ethereumclassic.org/ECIPs/ecip-1000 I think this needs updating too: https://github.com/orgs/ethereumclassic/teams/ecip-editors @ethereumclassic/ecip-editors
process
please update ecip editor s to write access for some reason my permission aren t configured correctly will you resolve this issue other ecip editors thanks in advance current ecip editors mr meows d bits meowsbits talha cross zachary belford belfordz felipe faraggi faraggi stevan lohja stevanlohja i think this needs updating too ethereumclassic ecip editors
1
19,771
26,144,085,411
IssuesEvent
2022-12-30 00:09:33
vectordotdev/vector
https://api.github.com/repos/vectordotdev/vector
closed
Add `multiline` support for the `syslog` source
source: syslog type: feature domain: processing
This is likely covered by #3757, but I wanted an explicit issue for it. This is the same as #3307, but for the `syslog` source.
1.0
Add `multiline` support for the `syslog` source - This is likely covered by #3757, but I wanted an explicit issue for it. This is the same as #3307, but for the `syslog` source.
process
add multiline support for the syslog source this is likely covered by but i wanted an explicit issue for it this is the same as but for the syslog source
1
9,212
12,244,301,809
IssuesEvent
2020-05-05 10:51:53
googleapis/google-cloud-dotnet
https://api.github.com/repos/googleapis/google-cloud-dotnet
opened
[BigQuery]: Review Model CRUD operations.
api: bigquery release blocking type: feature request type: process
Some Model CRUD async operations are missing a cancellation token param. (This will be a breaking change, which is OK because we are in beta, but marking as release blocking.) Some Model CRUD async operations do not have equivalence tests. Spotted while working on #3762, will address after.
1.0
[BigQuery]: Review Model CRUD operations. - Some Model CRUD async operations are missing a cancellation token param. (This will be a breaking change, which is OK because we are in beta, but marking as release blocking.) Some Model CRUD async operations do not have equivalence tests. Spotted while working on #3762, will address after.
process
review model crud operations some model crud async operations are missing a cancellation token param this will be a breaking change which is ok because we are in beta but marking as release blocking some model crud async operations have not equivalence tests spotted while working on will address after
1
46,469
7,260,575,893
IssuesEvent
2018-02-18 11:42:35
spotbugs/spotbugs
https://api.github.com/repos/spotbugs/spotbugs
opened
Add explanation about -pluginList option
documentation good first issue
[SpotBugs official document](https://spotbugs.readthedocs.io/) explains nothing about `-pluginList` option. It's better to add. refs: https://stackoverflow.com/a/32992247/814928
1.0
Add explanation about -pluginList option - [SpotBugs official document](https://spotbugs.readthedocs.io/) explains nothing about `-pluginList` option. It's better to add. refs: https://stackoverflow.com/a/32992247/814928
non_process
add explanation about pluginlist option explains nothing about pluginlist option it s better to add refs
0
13,333
15,790,846,099
IssuesEvent
2021-04-02 02:41:28
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Hostname doesn't match specifier %h
command-line options log-processing question
ppp-mia-30.shadow.net - - [01/Jul/1995:00:00:41 -0400] "GET /images/NASA-logosmall.gif HTTP/1.0" 200 786 Token 'ppp-mia-30.shadow.net' doesn't match specifier '%h' Is it possible to read the hostname without it being ipv4 or ipv6?
1.0
Hostname doesn't match specifier %h - ppp-mia-30.shadow.net - - [01/Jul/1995:00:00:41 -0400] "GET /images/NASA-logosmall.gif HTTP/1.0" 200 786 Token 'ppp-mia-30.shadow.net' doesn't match specifier '%h' Is it possible to read the hostname without it being ipv4 or ipv6?
process
hostname doesn t match specifier h ppp mia shadow net get images nasa logosmall gif http token ppp mia shadow net doesn t match specifier h is it possible to read the hostname without it being or
1
19,969
26,450,714,324
IssuesEvent
2023-01-16 10:59:16
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
zsh comes up on macOS even though System.Diagnostics.ProcessStartInfo.CreateNoWindow is specified as true.
area-System.Diagnostics.Process untriaged
### Description A window is created when `zsh` is started with `System.Diagnostics.Process.Start`, even though `System.Diagnostics.ProcessStartInfo.CreateNoWindow` is set to true. ### Reproduction Steps The following code can be easily checked: ```fsharp open System.Diagnostics open System.Text [<Literal>] let private zsh' = "/System/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal" let exec () = use p = ProcessStartInfo (zsh', UseShellExecute = false, RedirectStandardInput = true, RedirectStandardOutput = true, CreateNoWindow = true) |> System.Diagnostics.Process.Start let stdout = StringBuilder() p.OutputDataReceived.Add (fun e -> if e.Data <> null then stdout.AppendLine(e.Data) |> ignore) p.BeginOutputReadLine() p.StandardInput.WriteLine "ls ./" p.StandardInput.WriteLine "exit" p.WaitForExit() stdout.ToString() ``` It is reproduced for both net6.0 and net7.0. ### Expected behavior It is expected that no zsh window will be generated. ### Actual behavior A zsh window shows. ### Regression? No idea. ### Known Workarounds Nothing. ### Configuration - Which version of .NET is the code running on? → net6.0 / net7.0 - What OS and version, and what distro if applicable? → macOS Ventura 13.1 - What is the architecture (x64, x86, ARM, ARM64)? → ARM64 ### Other information _No response_
1.0
zsh comes up on macOS even though System.Diagnostics.ProcessStartInfo.CreateNoWindow is specified as true. - ### Description A window is created when `zsh` is started with `System.Diagnostics.Process.Start`, even though `System.Diagnostics.ProcessStartInfo.CreateNoWindow` is set to true. ### Reproduction Steps The following code can be easily checked: ```fsharp open System.Diagnostics open System.Text [<Literal>] let private zsh' = "/System/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal" let exec () = use p = ProcessStartInfo (zsh', UseShellExecute = false, RedirectStandardInput = true, RedirectStandardOutput = true, CreateNoWindow = true) |> System.Diagnostics.Process.Start let stdout = StringBuilder() p.OutputDataReceived.Add (fun e -> if e.Data <> null then stdout.AppendLine(e.Data) |> ignore) p.BeginOutputReadLine() p.StandardInput.WriteLine "ls ./" p.StandardInput.WriteLine "exit" p.WaitForExit() stdout.ToString() ``` It is reproduced for both net6.0 and net7.0. ### Expected behavior It is expected that no zsh window will be generated. ### Actual behavior A zsh window shows. ### Regression? No idea. ### Known Workarounds Nothing. ### Configuration - Which version of .NET is the code running on? → net6.0 / net7.0 - What OS and version, and what distro if applicable? → macOS Ventura 13.1 - What is the architecture (x64, x86, ARM, ARM64)? → ARM64 ### Other information _No response_
process
zsh comes up on macos even though system diagnostics processstartinfo createnowindow is specified as true description a window is created when zsh is started with system diagnostics process start even though system diagnostics processstartinfo createnowindow is set to true reproduction steps the following code can be easily checked fsharp open system diagnostics open system text let private zsh system applications utilities terminal app contents macos terminal let exec use p processstartinfo zsh useshellexecute false redirectstandardinput true redirectstandardoutput true createnowindow true system diagnostics process start let stdout stringbuilder p outputdatareceived add fun e if e data null then stdout appendline e data ignore p beginoutputreadline p standardinput writeline ls p standardinput writeline exit p waitforexit stdout tostring it is reproduced for both and expected behavior it is expected that no zsh window will be generated actual behavior a zsh window shows regression no idea known workarounds nothing configuration which version of net is the code running on → what os and version and what distro if applicable → macos venture what is the architecture arm → other information no response
1
1,207
3,703,865,179
IssuesEvent
2016-02-29 21:55:49
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
NTR: 'leukocyte adhesion to vascular endothelial cell’
BHF-UCL miRNA New term request RNA processes
Dear Editors, I wish to request a new GO term for annotating paper PMID:23897866. It is demonstrated in Figure 6B and C in this paper that leukocytes adhere to vascular endothelial cells, following regulation of this process by hsa-miR-92a-3p, and stimulation with TNFalpha. I wish to annotate this as: Object: RNAC: URS00003768C5_9606 (hsa-miR-92a-1-3p) GO term: Negative regulation of leukocyte adhesion to vascular endothelial cell Evidence: IMP Annotation Extension: part_of GO:0042044 (cellular response to tumour necrosis factor) Therefore, I will also be requesting regulation terms for ‘leukocyte adhesion to vascular endothelial cell’ via TermGenie. I will look forward to hearing from you with regard to this NTR. Thank you, Barbara #12249 https://github.com/obophenotype/cell-ontology/issues/420 @rachhuntley @RLovering
1.0
NTR: 'leukocyte adhesion to vascular endothelial cell’ - Dear Editors, I wish to request a new GO term for annotating paper PMID:23897866. It is demonstrated in Figure 6B and C in this paper that leukocytes adhere to vascular endothelial cells, following regulation of this process by hsa-miR-92a-3p, and stimulation with TNFalpha. I wish to annotate this as: Object: RNAC: URS00003768C5_9606 (hsa-miR-92a-1-3p) GO term: Negative regulation of leukocyte adhesion to vascular endothelial cell Evidence: IMP Annotation Extension: part_of GO:0042044 (cellular response to tumour necrosis factor) Therefore, I will also be requesting regulation terms for ‘leukocyte adhesion to vascular endothelial cell’ via TermGenie. I will look forward to hearing from you with regard to this NTR. Thank you, Barbara #12249 https://github.com/obophenotype/cell-ontology/issues/420 @rachhuntley @RLovering
process
ntr leukocyte adhesion to vascular endothelial cell’ dear editors i wish to request a new go term for annotating paper pmid it is demonstrated in figure and c in this paper that leukocytes adhere to vascular endothelial cells following regulation of this process by hsa mir and stimulation with tnfalpha in wish to annotate this as object rnac hsa mir go term negative regulation of leukocyte adhesion to vascular endothelial cell evidence imp annotation extension part of go cellular response to tumour necrosis factor therefore i will also be requesting regulation terms for ‘leukocyte adhesion to vascular endothelial cell’ via termgenie i will look forward to hearing from you with regard to this ntr thank you barbara rachhuntley rlovering
1
1,816
4,567,214,924
IssuesEvent
2016-09-15 10:14:12
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Logging works only one out of four times in windows console
process windows
Description copied from an issue in electron [here](https://github.com/electron/electron/issues/7149). Seems to be an issue of Node.js. * Node.js Version: 3.6.0 * Electron version: 1.3.5 * Operating system: Windows 8.1 Pro I am using the [electron-quick-start][1] app. I have added logging statements in the main.js as follows: [...] // Keep a global reference of the window object, if you don't, the window will // be closed automatically when the JavaScript object is garbage collected. let mainWindow console.log("Test") function createWindow () { // Create the browser window. [...] I have now started the app four times in a row without changing anything. The result can be seen in the picture below. [![electron logging][2]][2] Only one of the four times did I get the logging message inside my console. [1]: https://github.com/electron/electron-quick-start [2]: http://i.stack.imgur.com/5oj3f.png
1.0
Logging works only one out of four times in windows console - Description copied from an issue in electron [here](https://github.com/electron/electron/issues/7149). Seems to be an issue of Node.js. * Node.js Version: 3.6.0 * Electron version: 1.3.5 * Operating system: Windows 8.1 Pro I am using the [electron-quick-start][1] app. I have added logging statements in the main.js as follows: [...] // Keep a global reference of the window object, if you don't, the window will // be closed automatically when the JavaScript object is garbage collected. let mainWindow console.log("Test") function createWindow () { // Create the browser window. [...] I have now started the app four times in a row without changing anything. The result can be seen in the picture below. [![electron logging][2]][2] Only one of the four times did I get the logging message inside my console. [1]: https://github.com/electron/electron-quick-start [2]: http://i.stack.imgur.com/5oj3f.png
label: process
logging works only one out of four times in windows console description copied from issue of in electron seems to be an issue of node js node js version electron version operating system windows pro i am using the app i have added logging statements in the main js as follows keep a global reference of the window object if you don t the window will be closed automatically when the javascript object is garbage collected let mainwindow console log test function createwindow create the browser window i have now started the app four times in a row without changing anything the result can be seen in the picture below only one of the four times did i get the logging message inside my console
binary_label: 1
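In each record, the text field is a normalized copy of text_combine (title plus body): lowercased, with URLs, digits, punctuation, and markdown stripped, and whitespace collapsed. A minimal sketch of that mapping, reconstructed by comparing the rows in this dump — the dataset's actual preprocessing script is not shown, so the exact rules here are an assumption:

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the text_combine -> text mapping seen in this dump:
    lowercase, drop URLs, strip digits and punctuation, collapse whitespace.
    (Reconstructed from the rows above, not the dataset's actual script.)"""
    t = text_combine.lower()
    t = re.sub(r"http\S+", " ", t)     # drop bare URLs
    t = re.sub(r"[^a-z\s]", " ", t)    # drop digits, punctuation, markup
    return " ".join(t.split())         # collapse runs of whitespace

print(normalize("Node.js Version: 3.6.0"))  # node js version
```

Applied to the title above, this yields "logging works only one out of four times in windows console", matching the record's text field.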
Unnamed: 0: 179,571
id: 13,889,160,392
type: IssuesEvent
created_at: 2020-10-19 07:29:24
repo: redhat-developer/rh-che
repo_url: https://api.github.com/repos/redhat-developer/rh-che
action: closed
title: [Infra Issue] Slow workspace startup taking over 120 seconds
labels: kind/periodic-e2e-test team/rhche-qe
Sometimes workspace startup hangs at different tasks of che-exec container - pulling plugins, starting workspace containers and sidecars This causes our tests to fail ![image](https://user-images.githubusercontent.com/16451875/91726585-cf186a80-eba0-11ea-85da-e93cfb444d0e.png) ![image](https://user-images.githubusercontent.com/16451875/91726605-d770a580-eba0-11ea-94b4-1e1e43b6d99a.png)
index: 1.0
[Infra Issue] Slow workspace startup taking over 120 seconds - Sometimes workspace startup hangs at different tasks of che-exec container - pulling plugins, starting workspace containers and sidecars This causes our tests to fail ![image](https://user-images.githubusercontent.com/16451875/91726585-cf186a80-eba0-11ea-85da-e93cfb444d0e.png) ![image](https://user-images.githubusercontent.com/16451875/91726605-d770a580-eba0-11ea-94b4-1e1e43b6d99a.png)
label: non_process
slow workspace startup taking over seconds sometimes workspace startup hangs at different tasks of che exec container pulling plugins starting workspace containers and sidecars this causes our tests to fail
binary_label: 0
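Across every row shown, binary_label follows directly from label (process maps to 1, non_process to 0). A one-line sketch of that assumed mapping:

```python
def to_binary_label(label: str) -> int:
    # process -> 1, non_process -> 0; consistent with every record in this dump
    return 1 if label == "process" else 0

print(to_binary_label("process"), to_binary_label("non_process"))  # 1 0
```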
Unnamed: 0: 144,253
id: 19,286,088,642
type: IssuesEvent
created_at: 2021-12-11 01:34:49
repo: tamirverthim/arthas
repo_url: https://api.github.com/repos/tamirverthim/arthas
action: opened
title: CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.29.Final.jar
labels: security vulnerability
## CVE-2021-43797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.29.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p> <p>Path to dependency file: arthas/core/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/io/netty/netty-codec-http/4.1.29.Final/netty-codec-http-4.1.29.Final.jar</p> <p> Dependency Hierarchy: - termd-core-1.1.7.1.jar (Root Library) - :x: **netty-codec-http-4.1.29.Final.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch. 
<p>Publish Date: 2021-12-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wx5j-54mm-rqqq">https://github.com/advisories/GHSA-wx5j-54mm-rqqq</a></p> <p>Release Date: 2021-11-17</p> <p>Fix Resolution: io.netty:netty-codec-http:4.1.71.Final</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.29.Final","packageFilePaths":["/core/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.alibaba.middleware:termd-core:1.1.7.1;io.netty:netty-codec-http:4.1.29.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec-http:4.1.71.Final","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-43797","vulnerabilityDetails":"Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers \u0026 clients. 
Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to \"sanitize\" header names before it forward these to another remote system when used as proxy. This remote system can\u0027t see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
index: True
CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.29.Final.jar - ## CVE-2021-43797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.29.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p> <p>Path to dependency file: arthas/core/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/io/netty/netty-codec-http/4.1.29.Final/netty-codec-http-4.1.29.Final.jar</p> <p> Dependency Hierarchy: - termd-core-1.1.7.1.jar (Root Library) - :x: **netty-codec-http-4.1.29.Final.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch. 
<p>Publish Date: 2021-12-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wx5j-54mm-rqqq">https://github.com/advisories/GHSA-wx5j-54mm-rqqq</a></p> <p>Release Date: 2021-11-17</p> <p>Fix Resolution: io.netty:netty-codec-http:4.1.71.Final</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.29.Final","packageFilePaths":["/core/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.alibaba.middleware:termd-core:1.1.7.1;io.netty:netty-codec-http:4.1.29.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec-http:4.1.71.Final","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-43797","vulnerabilityDetails":"Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers \u0026 clients. 
Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to \"sanitize\" header names before it forward these to another remote system when used as proxy. This remote system can\u0027t see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
label: non_process
cve medium detected in netty codec http final jar cve medium severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file arthas core pom xml path to vulnerable library root repository io netty netty codec http final netty codec http final jar dependency hierarchy termd core jar root library x netty codec http final jar vulnerable library vulnerability details netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers clients netty prior to version final skips control chars when they are present at the beginning end of the header name it should instead fail fast as these are not allowed by the spec and could lead to http request smuggling failing to do the validation might cause netty to sanitize header names before it forward these to another remote system when used as proxy this remote system can t see the invalid usage anymore and therefore does not do the validation itself users should upgrade to version final to receive a patch publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec http final isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com alibaba middleware termd core io netty netty codec http final isminimumfixversionavailable true minimumfixversion io netty netty codec http final isbinary false basebranches vulnerabilityidentifier 
cve vulnerabilitydetails netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers clients netty prior to version final skips control chars when they are present at the beginning end of the header name it should instead fail fast as these are not allowed by the spec and could lead to http request smuggling failing to do the validation might cause netty to sanitize header names before it forward these to another remote system when used as proxy this remote system can see the invalid usage anymore and therefore does not do the validation itself users should upgrade to version final to receive a patch vulnerabilityurl
binary_label: 0
Unnamed: 0: 99,090
id: 12,397,122,215
type: IssuesEvent
created_at: 2020-05-20 21:56:59
repo: microsoft/vscode-azurestorage
repo_url: https://api.github.com/repos/microsoft/vscode-azurestorage
action: closed
title: [Suggestion] Add a notification when successfully created a storage account
labels: AT-CTI by design
**OS:** Windows **Build Version:** [20200518.1](https://dev.azure.com/ms-azuretools/AzCode/_build/results?buildId=19591&view=artifacts&type=publishedArtifacts) **Repro Steps:** 1. Right click the applied subscription -->Select 'Create Storage Account…(Advanced)' 2. Enter a name--> Select a resource group--> Select "No"--> Select a location 3. Check the result **Actual:** There is receive a notification during creating a storage account, but don’t receive a notification when the creation is complete. **Suggestion:** Add a notification when successfully created a storage account.
index: 1.0
[Suggestion] Add a notification when successfully created a storage account - **OS:** Windows **Build Version:** [20200518.1](https://dev.azure.com/ms-azuretools/AzCode/_build/results?buildId=19591&view=artifacts&type=publishedArtifacts) **Repro Steps:** 1. Right click the applied subscription -->Select 'Create Storage Account…(Advanced)' 2. Enter a name--> Select a resource group--> Select "No"--> Select a location 3. Check the result **Actual:** There is receive a notification during creating a storage account, but don’t receive a notification when the creation is complete. **Suggestion:** Add a notification when successfully created a storage account.
label: non_process
add a notification when successfully created a storage account os windows build version repro steps right click the applied subscription select create storage account… advanced enter a name select a resource group select no select a location check the result actual there is receive a notification during creating a storage account but don’t receive a notification when the creation is complete suggestion add a notification when successfully created a storage account
binary_label: 0
Unnamed: 0: 645
id: 2,813,518,155
type: IssuesEvent
created_at: 2015-05-18 15:06:20
repo: pulibrary/orangelight
repo_url: https://api.github.com/repos/pulibrary/orangelight
action: closed
title: Solr Upgrade to 5.1
labels: enhancement infrastructure
I don't know how soon this should happen but there have been some interesting changes in the last two Solr releases. First - Solr is now a standalone service (using Jetty internally) and doesn't support Tomcat: * https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+Tomcat * https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production Other features: * A [DateRange field](https://issues.apache.org/jira/browse/SOLR-6103) (useful for publication date) * A more powerful [facet query API](http://yonik.com/json-facet-api/) with [sub-facet](http://yonik.com/solr-subfacets/) (pivot facets with more functionality) support * [Parameter substitution](http://yonik.com/solr-query-parameter-substitution/)/interpolation The solr.xml file and cores have to be in autodiscovery mode starting in 5.0.
index: 1.0
Solr Upgrade to 5.1 - I don't know how soon this should happen but there have been some interesting changes in the last two Solr releases. First - Solr is now a standalone service (using Jetty internally) and doesn't support Tomcat: * https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+Tomcat * https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production Other features: * A [DateRange field](https://issues.apache.org/jira/browse/SOLR-6103) (useful for publication date) * A more powerful [facet query API](http://yonik.com/json-facet-api/) with [sub-facet](http://yonik.com/solr-subfacets/) (pivot facets with more functionality) support * [Parameter substitution](http://yonik.com/solr-query-parameter-substitution/)/interpolation The solr.xml file and cores have to be in autodiscovery mode starting in 5.0.
label: non_process
solr upgrade to i don t know how soon this should happen but there have been some interesting changes in the last two solr releases first solr is now a standalone service using jetty internally and doesn t support tomcat other features a useful for publication date a more powerful with pivot facets with more functionality support the solr xml file and cores have to be in autodiscovery mode starting in
binary_label: 0
Unnamed: 0: 260,188
id: 27,772,239,513
type: IssuesEvent
created_at: 2023-03-16 15:07:03
repo: RG4421/tp-qemu
repo_url: https://api.github.com/repos/RG4421/tp-qemu
action: opened
title: CVE-2023-27788 (Medium) detected in tcpreplaytcpreplay-4.3.1
labels: Mend: dependency security vulnerability
## CVE-2023-27788 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tcpreplaytcpreplay-4.3.1</b></p></summary> <p> <p>edit and replay captured network traffic</p> <p>Library home page: <a href=https://sourceforge.net/projects/tcpreplay/>https://sourceforge.net/projects/tcpreplay/</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/deps/tcpreplay/tcpreplay-4.3.1.tar/tcpreplay-4.3.1/src/tcpedit/portmap.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue found in TCPrewrite v.4.4.3 allows a remote attacker to cause a denial of service via the ports2PORT function at the portmap.c:69 endpoint. <p>Publish Date: 2023-03-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-27788>CVE-2023-27788</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
index: True
CVE-2023-27788 (Medium) detected in tcpreplaytcpreplay-4.3.1 - ## CVE-2023-27788 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tcpreplaytcpreplay-4.3.1</b></p></summary> <p> <p>edit and replay captured network traffic</p> <p>Library home page: <a href=https://sourceforge.net/projects/tcpreplay/>https://sourceforge.net/projects/tcpreplay/</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/deps/tcpreplay/tcpreplay-4.3.1.tar/tcpreplay-4.3.1/src/tcpedit/portmap.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue found in TCPrewrite v.4.4.3 allows a remote attacker to cause a denial of service via the ports2PORT function at the portmap.c:69 endpoint. <p>Publish Date: 2023-03-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-27788>CVE-2023-27788</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
label: non_process
cve medium detected in tcpreplaytcpreplay cve medium severity vulnerability vulnerable library tcpreplaytcpreplay edit and replay captured network traffic library home page a href found in base branch master vulnerable source files deps tcpreplay tcpreplay tar tcpreplay src tcpedit portmap c vulnerability details an issue found in tcprewrite v allows a remote attacker to cause a denial of service via the function at the portmap c endpoint publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
binary_label: 0
Unnamed: 0: 282,573
id: 8,708,090,677
type: IssuesEvent
created_at: 2018-12-06 09:55:18
repo: ESMValGroup/ESMValTool
repo_url: https://api.github.com/repos/ESMValGroup/ESMValTool
action: closed
title: R lintr too strict
labels: HIGH PRIORITY MAGIC standards
The R linter is too strict or outright wrong in some cases. For example, it complains about the naming conventions of 3rd party libraries. The unit test needs to be adjusted so it does not fail on this.
index: 1.0
R lintr too strict - The R linter is too strict or outright wrong in some cases. For example, it complains about the naming conventions of 3rd party libraries. The unit test needs to be adjusted so it does not fail on this.
label: non_process
r lintr too strict the r linter is too strict or outright wrong in some cases for example it complains about the naming conventions of party libraries the unit test needs to be adjusted so it does not fail on this
binary_label: 0
Unnamed: 0: 4,681
id: 7,521,946,782
type: IssuesEvent
created_at: 2018-04-12 18:48:08
repo: nodejs/node
repo_url: https://api.github.com/repos/nodejs/node
action: closed
title: test: investigate child-process-pass-fd
labels: CI / flaky test aix child_process test
This failed on a recent CI run. https://ci.nodejs.org/job/node-test-commit-aix/8817/nodes=aix61-ppc64/console ``` not ok 1753 sequential/test-child-process-pass-fd --- duration_ms: 2.0 severity: fail stack: |- events.js:182 throw er; // Unhandled 'error' event ^ Error: spawn /home/iojs/build/workspace/node-test-commit-aix/nodes/aix61-ppc64/out/Release/node EAGAIN at _errnoException (util.js:1018:13) at Process.ChildProcess._handle.onexit (internal/child_process.js:202:19) at onErrorNT (internal/child_process.js:390:16) at _combinedTickCallback (internal/process/next_tick.js:138:11) at process._tickCallback (internal/process/next_tick.js:180:9) at Function.Module.runMain (module.js:643:11) at startup (bootstrap_node.js:187:16) at bootstrap_node.js:608:3 ... ```
index: 1.0
test: investigate child-process-pass-fd - This failed on a recent CI run. https://ci.nodejs.org/job/node-test-commit-aix/8817/nodes=aix61-ppc64/console ``` not ok 1753 sequential/test-child-process-pass-fd --- duration_ms: 2.0 severity: fail stack: |- events.js:182 throw er; // Unhandled 'error' event ^ Error: spawn /home/iojs/build/workspace/node-test-commit-aix/nodes/aix61-ppc64/out/Release/node EAGAIN at _errnoException (util.js:1018:13) at Process.ChildProcess._handle.onexit (internal/child_process.js:202:19) at onErrorNT (internal/child_process.js:390:16) at _combinedTickCallback (internal/process/next_tick.js:138:11) at process._tickCallback (internal/process/next_tick.js:180:9) at Function.Module.runMain (module.js:643:11) at startup (bootstrap_node.js:187:16) at bootstrap_node.js:608:3 ... ```
label: process
test investigate child process pass fd this failed on a recent ci run not ok sequential test child process pass fd duration ms severity fail stack events js throw er unhandled error event error spawn home iojs build workspace node test commit aix nodes out release node eagain at errnoexception util js at process childprocess handle onexit internal child process js at onerrornt internal child process js at combinedtickcallback internal process next tick js at process tickcallback internal process next tick js at function module runmain module js at startup bootstrap node js at bootstrap node js
binary_label: 1
Unnamed: 0: 804,017
id: 29,298,695,915
type: IssuesEvent
created_at: 2023-05-25 00:24:51
repo: kubernetes/website
repo_url: https://api.github.com/repos/kubernetes/website
action: closed
title: Legacy kubelet image pull credentials mechanism not documented
labels: sig/node kind/bug priority/backlog sig/cloud-provider language/en triage/accepted
**This is a Bug Report** <!-- Thanks for filing an issue! Before submitting, please fill in the following information. --> <!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. --> <!--Required Information--> **Problem:** Add (reinstate!) documentation for the legacy credential provider integration in the kubelet that has integrations for: - ACR (Azure Container Registry) - ECR (Elastic Container Registry) - GCR (Google Container Registry) **Why is this needed:** > Kubelet has a credential provider mechanism, which gives kubelet the ability to dynamically fetch credentials for image registries. Today there are three built-in implementations of the kubelet credential provider for ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry). &mdash; https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2133-kubelet-credential-providers That legacy mechanism exists but is not currently documented. **Additional Information:** PR https://github.com/kubernetes/website/pull/21438 removed some earlier documentation for the built-in mechanism. The removal was aimed at removing documentation for third-party integrations but (in error) also removed details of the legacy in-tree implementation. /language en /sig node /sig cloud-provider
index: 1.0
Legacy kubelet image pull credentials mechanism not documented - **This is a Bug Report** <!-- Thanks for filing an issue! Before submitting, please fill in the following information. --> <!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. --> <!--Required Information--> **Problem:** Add (reinstate!) documentation for the legacy credential provider integration in the kubelet that has integrations for: - ACR (Azure Container Registry) - ECR (Elastic Container Registry) - GCR (Google Container Registry) **Why is this needed:** > Kubelet has a credential provider mechanism, which gives kubelet the ability to dynamically fetch credentials for image registries. Today there are three built-in implementations of the kubelet credential provider for ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry). &mdash; https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2133-kubelet-credential-providers That legacy mechanism exists but is not currently documented. **Additional Information:** PR https://github.com/kubernetes/website/pull/21438 removed some earlier documentation for the built-in mechanism. The removal was aimed at removing documentation for third-party integrations but (in error) also removed details of the legacy in-tree implementation. /language en /sig node /sig cloud-provider
label: non_process
legacy kubelet image pull credentials mechanism not documented this is a bug report problem add reinstate documentation for the legacy credential provider integration in the kubelet that has integrations for acr azure container registry ecr elastic container registry gcr google container registry why is this needed kubelet has a credential provider mechanism which gives kubelet the ability to dynamically fetch credentials for image registries today there are three built in implementations of the kubelet credential provider for acr azure container registry ecr elastic container registry and gcr google container registry mdash that legacy mechanism exists but is not currently documented additional information pr removed some earlier documentation for the built in mechanism the removal was aimed at removing documentation for third party integrations but in error also removed details of the legacy in tree implementation language en sig node sig cloud provider
binary_label: 0
Unnamed: 0: 83,617
id: 15,712,467,847
type: IssuesEvent
created_at: 2021-03-27 12:15:23
repo: emilykaldwin1827/goof
repo_url: https://api.github.com/repos/emilykaldwin1827/goof
action: closed
title: WS-2019-0231 (Medium) detected in adm-zip-0.4.7.tgz
labels: security vulnerability
## WS-2019-0231 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>adm-zip-0.4.7.tgz</b></p></summary> <p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p> <p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz</a></p> <p>Path to dependency file: goof/package.json</p> <p>Path to vulnerable library: goof/node_modules/adm-zip/package.json</p> <p> Dependency Hierarchy: - :x: **adm-zip-0.4.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/emilykaldwin1827/goof/commit/27563f2447d85b487d3c44ea67f0f561f0c44b91">27563f2447d85b487d3c44ea67f0f561f0c44b91</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames <p>Publish Date: 2018-04-22 <p>URL: <a href=https://hackerone.com/reports/362118>WS-2019-0231</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/994">https://www.npmjs.com/advisories/994</a></p> <p>Release Date: 2019-09-09</p> <p>Fix Resolution: 0.4.9</p> </p> </details> <p></p> *** Step up your 
Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0231 (Medium) detected in adm-zip-0.4.7.tgz - ## WS-2019-0231 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>adm-zip-0.4.7.tgz</b></p></summary> <p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p> <p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz</a></p> <p>Path to dependency file: goof/package.json</p> <p>Path to vulnerable library: goof/node_modules/adm-zip/package.json</p> <p> Dependency Hierarchy: - :x: **adm-zip-0.4.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/emilykaldwin1827/goof/commit/27563f2447d85b487d3c44ea67f0f561f0c44b91">27563f2447d85b487d3c44ea67f0f561f0c44b91</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames <p>Publish Date: 2018-04-22 <p>URL: <a href=https://hackerone.com/reports/362118>WS-2019-0231</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/994">https://www.npmjs.com/advisories/994</a></p> <p>Release Date: 2019-09-09</p> <p>Fix 
Resolution: 0.4.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in adm zip tgz ws medium severity vulnerability vulnerable library adm zip tgz a javascript implementation of zip for nodejs allows user to create or extract zip files both in memory or to from disk library home page a href path to dependency file goof package json path to vulnerable library goof node modules adm zip package json dependency hierarchy x adm zip tgz vulnerable library found in head commit a href found in base branch master vulnerability details adm zip versions before are vulnerable to arbitrary file write due to extraction of a specifically crafted archive that contains path traversal filenames publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
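The adm-zip record above describes the "Zip Slip" pattern: archive entries whose names contain `../` escape the extraction root. As a rough, library-agnostic sketch of the check a fixed extractor performs (paths and names here are illustrative, not adm-zip's actual code):

```python
import io
import os
import zipfile

def is_safe_member(name: str, dest: str) -> bool:
    """Reject archive entries that would be written outside the extraction root."""
    root = os.path.realpath(dest)
    target = os.path.realpath(os.path.join(dest, name))
    return target == root or target.startswith(root + os.sep)

# Build an in-memory archive containing a path-traversal entry,
# the shape of input the advisory warns about.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("ok.txt", "fine")
    zf.writestr("../evil.txt", "would land outside the target directory")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    safe = [n for n in zf.namelist() if is_safe_member(n, "sandbox")]

print(safe)  # only the well-behaved entry passes the check
```

For Node projects the actual remediation is the suggested fix in the record: upgrade adm-zip to 0.4.9 or later.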
20,983
27,850,711,669
IssuesEvent
2023-03-20 18:27:57
dDevTech/tapas-top-backend
https://api.github.com/repos/dDevTech/tapas-top-backend
closed
Create Tapa Model 20/03/23
in process
Create Tapa in package com.mycompany.myapp.domain; It must have the following attributes: - id: long - Name: str - Description: str - Type: str @ManyToOne - Country: Str - Photo: Lob (byte[]) - Establishment: Establishment
1.0
Create Tapa Model 20/03/23 - Create Tapa in package com.mycompany.myapp.domain; It must have the following attributes: - id: long - Name: str - Description: str - Type: str @ManyToOne - Country: Str - Photo: Lob (byte[]) - Establishment: Establishment
process
crear modelo tapa crear tapa en package com mycompany myapp domain deberá tener los siguientes atributos id long name str description str type str manytoone country str photo lob byte establishment establishment
1
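The spec in this record is for a Java JPA entity, but the data shape it lists can be sketched as a Python dataclass for illustration (field names are taken from the issue; the `Establishment` stub is hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Establishment:
    # Placeholder for the related entity referenced by the issue spec.
    id: int
    name: str = ""

@dataclass
class Tapa:
    # Mirrors the attribute list from the issue; in the real backend this
    # would be a JPA @Entity in com.mycompany.myapp.domain, not Python.
    id: int
    name: str
    description: str
    type: str                        # marked @ManyToOne in the spec
    country: str
    photo: bytes                     # Lob (byte[]) in the Java model
    establishment: Optional[Establishment] = None

t = Tapa(1, "Patatas bravas", "Fried potatoes", "classic", "Spain", b"", Establishment(7))
print(t.name, t.establishment.id)
```

In the Java implementation the `photo` field would carry the `@Lob` annotation and `establishment` the relation mapping; the dataclass only shows which fields exist and their rough types.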
2,778
5,713,211,724
IssuesEvent
2017-04-19 07:03:18
g8os/initramfs
https://api.github.com/repos/g8os/initramfs
closed
Automate building with travis
process_wontfix type_feature
It would be nice if we could automate the build using Travis CI. Every commit would trigger a build, and if the build succeeds we push the resulting ramfs to a store somewhere. That way we would always have the latest kernel build available
1.0
Automate building with travis - It would be nice if we could automate the build using Travis CI. Every commit would trigger a build, and if the build succeeds we push the resulting ramfs to a store somewhere. That way we would always have the latest kernel build available
process
automate building with travis that would be nice if we could automate the build using travis ci every commit would trigger a built and if the build succeed we push the resulting ramfs to a store somewhere like that we would always have the last kernel built avaialble
1
2,804
5,736,106,105
IssuesEvent
2017-04-22 05:03:21
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
Support -- comment
CONNECTION POOL QUERY PROCESSOR
Comments starting with `-- ` are not supported. See [this](https://dev.mysql.com/doc/refman/5.7/en/ansi-diff-comments.html) for a reference on MySQL's syntax. Issue #989 could be affected by this
1.0
Support -- comment - Comments starting with `-- ` are not supported. See [this](https://dev.mysql.com/doc/refman/5.7/en/ansi-diff-comments.html) for a reference on MySQL's syntax. Issue #989 could be affected by this
process
support comment comments starting with are not supported see for a reference on mysql s syntax issue could be affected by this
1
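The proxysql record above hinges on MySQL's rule that `--` opens a line comment only when followed by whitespace (or when it ends the line); `a--b` is an expression, not a comment. A rough Python sketch of that distinction for a single statement, ignoring string literals (a real tokenizer must also skip quoted strings and `/* */` comments):

```python
import re

# Per the MySQL comment-syntax rule referenced in the issue: "--" starts a
# line comment only if followed by whitespace or the end of the line.
LINE_COMMENT = re.compile(r"--(\s.*)?$")

def strip_line_comment(stmt: str) -> str:
    # Illustrative only: does not handle quoted strings or block comments.
    return LINE_COMMENT.sub("", stmt).rstrip()

print(strip_line_comment("SELECT 1 -- fetch a constant"))  # SELECT 1
print(strip_line_comment("SELECT a--b FROM t"))            # unchanged
```

The second example is exactly why a naive "strip everything after `--`" rule is wrong: it would corrupt valid SQL containing a double minus.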
120,852
17,644,321,324
IssuesEvent
2021-08-20 02:12:18
Baneeishaque/Raindrop-Removal-With-Light-Field-Image-Using-Image-Inpainting
https://api.github.com/repos/Baneeishaque/Raindrop-Removal-With-Light-Field-Image-Using-Image-Inpainting
opened
CVE-2021-29607 (High) detected in tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl
security vulnerability
## CVE-2021-29607 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/92/2b/e3af15221da9ff323521565fa3324b0d7c7c5b1d7a8ca66984c8d59cb0ce/tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/92/2b/e3af15221da9ff323521565fa3324b0d7c7c5b1d7a8ca66984c8d59cb0ce/tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: Raindrop-Removal-With-Light-Field-Image-Using-Image-Inpainting/requirements.txt</p> <p>Path to vulnerable library: Raindrop-Removal-With-Light-Field-Image-Using-Image-Inpainting/requirements.txt</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. Incomplete validation in `SparseAdd` results in allowing attackers to exploit undefined behavior (dereferencing null pointers) as well as write outside of bounds of heap allocated data. The implementation(https://github.com/tensorflow/tensorflow/blob/656e7673b14acd7835dc778867f84916c6d1cac2/tensorflow/core/kernels/sparse_sparse_binary_op_shared.cc) has a large set of validation for the two sparse tensor inputs (6 tensors in total), but does not validate that the tensors are not empty or that the second dimension of `*_indices` matches the size of corresponding `*_shape`. 
This allows attackers to send tensor triples that represent invalid sparse tensors to abuse code assumptions that are not protected by validation. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29607>CVE-2021-29607</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gv26-jpj9-c8gq">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gv26-jpj9-c8gq</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-29607 (High) detected in tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl - ## CVE-2021-29607 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/92/2b/e3af15221da9ff323521565fa3324b0d7c7c5b1d7a8ca66984c8d59cb0ce/tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/92/2b/e3af15221da9ff323521565fa3324b0d7c7c5b1d7a8ca66984c8d59cb0ce/tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: Raindrop-Removal-With-Light-Field-Image-Using-Image-Inpainting/requirements.txt</p> <p>Path to vulnerable library: Raindrop-Removal-With-Light-Field-Image-Using-Image-Inpainting/requirements.txt</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.15.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. Incomplete validation in `SparseAdd` results in allowing attackers to exploit undefined behavior (dereferencing null pointers) as well as write outside of bounds of heap allocated data. 
The implementation(https://github.com/tensorflow/tensorflow/blob/656e7673b14acd7835dc778867f84916c6d1cac2/tensorflow/core/kernels/sparse_sparse_binary_op_shared.cc) has a large set of validation for the two sparse tensor inputs (6 tensors in total), but does not validate that the tensors are not empty or that the second dimension of `*_indices` matches the size of corresponding `*_shape`. This allows attackers to send tensor triples that represent invalid sparse tensors to abuse code assumptions that are not protected by validation. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29607>CVE-2021-29607</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gv26-jpj9-c8gq">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gv26-jpj9-c8gq</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file raindrop removal with light field image using image inpainting requirements txt path to vulnerable library raindrop removal with light field image using image inpainting requirements txt dependency hierarchy x tensorflow whl vulnerable library found in base branch main vulnerability details tensorflow is an end to end open source platform for machine learning incomplete validation in sparseadd results in allowing attackers to exploit undefined behavior dereferencing null pointers as well as write outside of bounds of heap allocated data the implementation has a large set of validation for the two sparse tensor inputs tensors in total but does not validate that the tensors are not empty or that the second dimension of indices matches the size of corresponding shape this allows attackers to send tensor triples that represent invalid sparse tensors to abuse code assumptions that are not protected by validation the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
0
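The SparseAdd advisory above boils down to missing shape checks on COO sparse-tensor triples: each row of `indices` must have one coordinate per dimension of `dense_shape`, and none of the component tensors may be empty. A plain-Python sketch of that validation (illustrative, not TensorFlow's actual kernel code):

```python
# A COO sparse tensor is the triple (indices, values, dense_shape).
def validate_sparse(indices, values, dense_shape):
    if not indices or not values or not dense_shape:
        raise ValueError("empty component tensor")
    rank = len(dense_shape)
    if any(len(row) != rank for row in indices):
        raise ValueError("index row length must equal rank of dense_shape")
    if len(indices) != len(values):
        raise ValueError("one value per index row required")
    return True

# Well-formed 2-D sparse tensor with two non-zeros:
assert validate_sparse([[0, 1], [2, 3]], [1.0, 2.0], [4, 4])

# The malformed triple the CVE abuses: index rows narrower than the rank.
try:
    validate_sparse([[0], [2]], [1.0, 2.0], [4, 4])
except ValueError as e:
    print("rejected:", e)
```

Skipping checks like these is what let crafted triples reach code that assumed valid shapes, producing the out-of-bounds writes described in the advisory.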
189,517
22,047,055,740
IssuesEvent
2022-05-30 03:48:06
panasalap/linux-4.1.15
https://api.github.com/repos/panasalap/linux-4.1.15
closed
CVE-2017-12193 (Medium) detected in linuxlinux-4.1.17 - autoclosed
security vulnerability
## CVE-2017-12193 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/assoc_array.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/assoc_array.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The assoc_array_insert_into_terminal_node function in lib/assoc_array.c in the Linux kernel before 4.13.11 mishandles node splitting, which allows local users to cause a denial of service (NULL pointer dereference and panic) via a crafted application, as demonstrated by the keyring key type, and key addition and link creation operations. 
<p>Publish Date: 2017-11-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-12193>CVE-2017-12193</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-12193">https://nvd.nist.gov/vuln/detail/CVE-2017-12193</a></p> <p>Release Date: 2017-11-22</p> <p>Fix Resolution: 4.13.11</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-12193 (Medium) detected in linuxlinux-4.1.17 - autoclosed - ## CVE-2017-12193 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/assoc_array.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/assoc_array.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The assoc_array_insert_into_terminal_node function in lib/assoc_array.c in the Linux kernel before 4.13.11 mishandles node splitting, which allows local users to cause a denial of service (NULL pointer dereference and panic) via a crafted application, as demonstrated by the keyring key type, and key addition and link creation operations. 
<p>Publish Date: 2017-11-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-12193>CVE-2017-12193</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-12193">https://nvd.nist.gov/vuln/detail/CVE-2017-12193</a></p> <p>Release Date: 2017-11-22</p> <p>Fix Resolution: 4.13.11</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files lib assoc array c lib assoc array c vulnerability details the assoc array insert into terminal node function in lib assoc array c in the linux kernel before mishandles node splitting which allows local users to cause a denial of service null pointer dereference and panic via a crafted application as demonstrated by the keyring key type and key addition and link creation operations publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
18,487
24,550,898,509
IssuesEvent
2022-10-12 12:32:14
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] [Offline indicator] App crashes in the following scenarios
Bug P0 iOS Process: Fixed Process: Tested QA Process: Tested dev
The app crashes whenever it navigates to other screens (e.g. refer to the scenario below) Steps: 1. Sign in to the app 2. Click on the enrolled study 3. Turn off the data 4. Click on the Activity 5. Turn on the data 6. Enter the passcode 7. Observe AR: The app crashes ER: The app should not crash and participants should stay on the activities screen Note: The issue should be fixed on all screens 1. When the participant clicks on Sign out or the sign-out-later screen with the above scenarios 2. When participants click on consent with the above scenarios etc. https://user-images.githubusercontent.com/71445210/178310060-33d7898c-a757-4d89-8a92-ab50ca2e0271.MOV
3.0
[iOS] [Offline indicator] App crashes in the following scenarios - The app crashes whenever it navigates to other screens (e.g. refer to the scenario below) Steps: 1. Sign in to the app 2. Click on the enrolled study 3. Turn off the data 4. Click on the Activity 5. Turn on the data 6. Enter the passcode 7. Observe AR: The app crashes ER: The app should not crash and participants should stay on the activities screen Note: The issue should be fixed on all screens 1. When the participant clicks on Sign out or the sign-out-later screen with the above scenarios 2. When participants click on consent with the above scenarios etc. https://user-images.githubusercontent.com/71445210/178310060-33d7898c-a757-4d89-8a92-ab50ca2e0271.MOV
process
app is getting crashed in the following scenarios the app is getting crashed where ever app is navigating to other screens for eg refer to the below scenario steps sign in to the app click on the enrolled study turn off the data click on the activity turn on the data enter the passcode observe ar app is getting crashed er app should not be crashed and participants should stay on the activities screen note the issue should be fixed on all the screen when the participant clicks on signou or signout later screen with the above scenarios when participants click on consent with the above scenarios etc
1
699,974
24,039,994,712
IssuesEvent
2022-09-16 00:01:42
unitaryfund/mitiq
https://api.github.com/repos/unitaryfund/mitiq
closed
Using a CZPowGate produces an error with CDR
cdr priority/p1 stale
Pre-Report Checklist -------------------- - [X] I am running the latest version of mitiq - [X] I checked to make sure that this bug has not already been reported Issue Description ----------------- Using a CZPowGate produces an error with CDR. How to Reproduce ---------------- ### Code Snippet ```python import warnings warnings.filterwarnings("ignore") import cirq import numpy as np from cirq import CZPowGate from mitiq import cdr, Observable, PauliString from mitiq.interface.mitiq_cirq import compute_density_matrix from cirq.circuits import InsertStrategy #Create cirucit circuit1 = cirq.Circuit() t=1 #Number of qubits in the control register n=1 #Number of qubits in the target register #Create t control qubits control = [cirq.LineQubit(i) for i in range(t) ] #Create n target qubits target = [cirq.LineQubit(i) for i in range(t,t+n) ] crk = CZPowGate(exponent = -2/2**(2)) circuit1.append(crk(control[0], target[0]),strategy = InsertStrategy.NEW) print(circuit1) obs1 = Observable(PauliString("ZZ")) def simulate(circuit: cirq.Circuit) -> np.ndarray: return compute_density_matrix(circuit, noise_level=(0.0,)) cdr.execute_with_cdr( circuit1, compute_density_matrix, observable=obs1, simulator=simulate, ).real ``` ### Error Output ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-ca147fd1cc89> in <module> 31 return compute_density_matrix(circuit, noise_level=(0.0,)) 32 ---> 33 cdr.execute_with_cdr( 34 circuit1, 35 compute_density_matrix, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/cdr.py in execute_with_cdr(circuit, executor, observable, simulator, num_training_circuits, fraction_non_clifford, fit_function, num_fit_parameters, scale_factors, scale_noise, **kwargs) 131 132 # Generate training circuits. 
--> 133 training_circuits = generate_training_circuits( 134 circuit, 135 num_training_circuits, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/interface/conversions.py in qprogram_modifier(circuit, *args, **kwargs) 204 mitiq_circuit, input_circuit_type = convert_to_mitiq(circuit) 205 --> 206 modified_circuits: Iterable[Circuit] = cirq_circuit_modifier( 207 mitiq_circuit, *args, **kwargs 208 ) ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in generate_training_circuits(circuit, num_training_circuits, fraction_non_clifford, method_select, method_replace, random_state, **kwargs) 96 near_clifford_circuits = [] 97 for _ in range(num_training_circuits): ---> 98 new_ops = _map_to_near_clifford( 99 non_clifford_ops, 100 fraction_non_clifford, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in _map_to_near_clifford(non_clifford_ops, fraction_non_clifford, method_select, method_replace, random_state, **kwargs) 175 176 # Replace selected operations. --> 177 clifford_ops: Sequence[cirq.ops.Operation] = _replace( 178 [non_clifford_ops[i] for i in indices_of_selected_ops], 179 method_replace, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in _replace(non_clifford_ops, method, sigma, random_state) 298 299 # TODO: Write function to replace the angles in a list of operations? --> 300 return [ 301 cirq.ops.rz(a).on(*q) 302 for (a, q) in zip( ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in <listcomp>(.0) 299 # TODO: Write function to replace the angles in a list of operations? 
300 return [ --> 301 cirq.ops.rz(a).on(*q) 302 for (a, q) in zip( 303 clifford_angles, [op.qubits for op in non_clifford_ops], ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/raw_types.py in on(self, *qubits) 211 from cirq.ops import gate_operation 212 --> 213 return gate_operation.GateOperation(self, list(qubits)) 214 215 def wrap_in_linear_combination( ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/gate_operation.py in __init__(self, gate, qubits) 58 qubits: The qubits to operate on. 59 """ ---> 60 gate.validate_args(qubits) 61 self._gate = gate 62 self._qubits = tuple(qubits) ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/raw_types.py in validate_args(self, qubits) 200 ValueError: The gate can't be applied to the qubits. 201 """ --> 202 _validate_qid_shape(self, qubits) 203 204 def on(self, *qubits: Qid) -> 'Operation': ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/raw_types.py in _validate_qid_shape(val, qubits) 784 qid_shape = protocols.qid_shape(val) 785 if len(qubits) != len(qid_shape): --> 786 raise ValueError( 787 'Wrong number of qubits for <{!r}>. ' 788 'Expected {} qubits but got <{!r}>.'.format(val, len(qid_shape), qubits) ValueError: Wrong number of qubits for <cirq.rz(np.pi*1.5)>. Expected 1 qubits but got <[cirq.LineQubit(0), cirq.LineQubit(1)]>. ``` Environment Context ------------------- Use the `about()` function to summarize information on operating system, python version and dependencies. 
```python Mitiq: A Python toolkit for implementing error mitigation on quantum computers ============================================================================== Authored by: Mitiq team, 2020 & later (https://github.com/unitaryfund/mitiq) Mitiq Version: 0.11.1 Core Dependencies ----------------- Cirq Version: 0.10.0 NumPy Version: 1.20.1 SciPy Version: 1.7.3 Optional Dependencies --------------------- PyQuil Version: Not installed Qiskit Version: 0.29.0 Braket Version: Not installed Python Version: 3.9.2 Platform Info: Linux (x86_64) ``` Additional Python Environment Details (`pip freeze` or `conda list`): ``` # packages in environment at /home/amir/miniconda3/envs/mitiq: # # Name Version Build Channel _libgcc_mutex 0.1 main anyio 2.2.0 py39hf3d152e_0 conda-forge argon2-cffi 20.1.0 py39hbd71b63_2 conda-forge async_generator 1.10 py_0 conda-forge attrs 20.3.0 pyhd3deb0d_0 conda-forge babel 2.9.0 pyhd3deb0d_0 conda-forge backcall 0.2.0 pyh9f0ad1d_0 conda-forge backports 1.0 py_2 conda-forge backports.functools_lru_cache 1.6.3 pyhd8ed1ab_0 conda-forge bleach 3.3.0 pyh44b312d_0 conda-forge brotlipy 0.7.0 py39h38d8fee_1001 conda-forge ca-certificates 2020.12.5 ha878542_0 conda-forge cachetools 4.2.4 pypi_0 pypi certifi 2020.12.5 py39hf3d152e_1 conda-forge cffi 1.14.5 py39h261ae71_0 chardet 4.0.0 py39hf3d152e_1 conda-forge cirq 0.10.0 pypi_0 pypi cryptography 3.4.7 py39hbca0aa6_0 conda-forge cycler 0.10.0 pypi_0 pypi decorator 5.0.6 pyhd8ed1ab_0 conda-forge defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge dill 0.3.3 pypi_0 pypi dlx 1.0.4 pypi_0 pypi docplex 2.21.207 pypi_0 pypi entrypoints 0.3 pyhd8ed1ab_1003 conda-forge fastdtw 0.3.4 pypi_0 pypi fastjsonschema 2.15.0 pypi_0 pypi google-api-core 1.31.4 pypi_0 pypi google-auth 1.35.0 pypi_0 pypi googleapis-common-protos 1.54.0 pypi_0 pypi grpcio 1.42.0 pypi_0 pypi h5py 3.1.0 pypi_0 pypi idna 2.10 pyh9f0ad1d_0 conda-forge importlib-metadata 3.10.0 py39hf3d152e_0 conda-forge inflection 0.5.1 pypi_0 pypi ipykernel 5.5.3 
py39hef51801_0 conda-forge ipython 7.22.0 py39hef51801_0 conda-forge ipython_genutils 0.2.0 py_1 conda-forge ipywidgets 7.6.3 pypi_0 pypi jedi 0.18.0 py39hf3d152e_2 conda-forge jinja2 2.11.3 pyh44b312d_0 conda-forge joblib 1.0.1 pypi_0 pypi json5 0.9.5 pyh9f0ad1d_0 conda-forge jsonschema 3.2.0 pyhd8ed1ab_3 conda-forge jupyter-packaging 0.7.12 pyhd8ed1ab_0 conda-forge jupyter_client 6.1.12 pyhd8ed1ab_0 conda-forge jupyter_core 4.7.1 py39hf3d152e_0 conda-forge jupyter_server 1.6.0 py39hf3d152e_0 conda-forge jupyterlab 3.0.13 pyhd8ed1ab_0 conda-forge jupyterlab-widgets 1.0.0 pypi_0 pypi jupyterlab_pygments 0.1.2 pyh9f0ad1d_0 conda-forge jupyterlab_server 2.4.0 pyhd8ed1ab_0 conda-forge kiwisolver 1.3.1 pypi_0 pypi ld_impl_linux-64 2.33.1 h53a641e_7 libffi 3.3 he6710b0_2 libgcc-ng 9.1.0 hdf63c60_0 libsodium 1.0.18 h36c2ea0_1 conda-forge libstdcxx-ng 9.1.0 hdf63c60_0 lxml 4.6.3 pypi_0 pypi markupsafe 1.1.1 py39h38d8fee_2 conda-forge matplotlib 3.4.1 pypi_0 pypi mistune 0.8.4 py39hbd71b63_1002 conda-forge mitiq 0.11.1 pypi_0 pypi more-itertools 8.7.0 pypi_0 pypi mpmath 1.2.1 pypi_0 pypi multitasking 0.0.9 pypi_0 pypi nbclassic 0.2.7 pyhd8ed1ab_0 conda-forge nbclient 0.5.3 pyhd8ed1ab_0 conda-forge nbconvert 6.0.7 py39hf3d152e_3 conda-forge nbformat 5.1.3 pyhd8ed1ab_0 conda-forge ncurses 6.2 he6710b0_1 nest-asyncio 1.5.1 pyhd8ed1ab_0 conda-forge networkx 2.6.3 pypi_0 pypi notebook 6.3.0 py39hf3d152e_0 conda-forge ntlm-auth 1.5.0 pypi_0 pypi numpy 1.20.1 pypi_0 pypi openssl 1.1.1k h27cfd23_0 packaging 20.9 pyh44b312d_0 conda-forge pandas 1.2.3 pypi_0 pypi pandoc 2.12 h7f98852_0 conda-forge pandocfilters 1.4.2 py_1 conda-forge parso 0.8.2 pyhd8ed1ab_0 conda-forge pexpect 4.8.0 pyh9f0ad1d_2 conda-forge pickleshare 0.7.5 py_1003 conda-forge pillow 8.2.0 pypi_0 pypi pip 21.0.1 py39h06a4308_0 ply 3.11 pypi_0 pypi prometheus_client 0.10.1 pyhd8ed1ab_0 conda-forge prompt-toolkit 3.0.18 pyha770c72_0 conda-forge protobuf 3.13.0 pypi_0 pypi psutil 5.8.0 pypi_0 pypi ptyprocess 0.7.0 
pyhd3deb0d_0 conda-forge pulsemaker 0.1.1b0 dev_0 <develop> pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pybind11 2.6.2 pypi_0 pypi pycparser 2.20 pyh9f0ad1d_2 conda-forge pydot 1.4.2 pypi_0 pypi pygments 2.8.1 pyhd8ed1ab_0 conda-forge pylatexenc 2.10 pypi_0 pypi pyopenssl 20.0.1 pyhd8ed1ab_0 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pyrsistent 0.17.3 py39hbd71b63_1 conda-forge pysocks 1.7.1 py39hf3d152e_3 conda-forge python 3.9.2 hdb3f193_0 python-constraint 1.4.0 pypi_0 pypi python-dateutil 2.8.1 py_0 conda-forge python_abi 3.9 1_cp39 conda-forge pytz 2021.1 pyhd8ed1ab_0 conda-forge pyzmq 19.0.2 py39hb69f2a1_2 conda-forge qiskit 0.29.0 pypi_0 pypi qiskit-aer 0.8.2 pypi_0 pypi qiskit-aqua 0.9.4 pypi_0 pypi qiskit-ibmq-provider 0.16.0 pypi_0 pypi qiskit-ignis 0.6.0 pypi_0 pypi qiskit-terra 0.18.1 pypi_0 pypi qonduit 0.1.2b2 dev_0 <develop> quandl 3.6.0 pypi_0 pypi readline 8.1 h27cfd23_0 requests 2.25.1 pyhd3deb0d_0 conda-forge requests-ntlm 1.1.0 pypi_0 pypi retworkx 0.9.0 pypi_0 pypi rsa 4.8 pypi_0 pypi scikit-learn 0.24.1 pypi_0 pypi scipy 1.7.3 pypi_0 pypi seaborn 0.11.1 pypi_0 pypi send2trash 1.5.0 py_0 conda-forge setuptools 52.0.0 py39h06a4308_0 six 1.15.0 pyh9f0ad1d_0 conda-forge sniffio 1.2.0 py39hf3d152e_1 conda-forge sortedcontainers 2.4.0 pypi_0 pypi sqlite 3.35.4 hdfb4753_0 symengine 0.7.2 pypi_0 pypi sympy 1.7.1 pypi_0 pypi terminado 0.9.4 py39hf3d152e_0 conda-forge testpath 0.4.4 py_0 conda-forge threadpoolctl 2.1.0 pypi_0 pypi tk 8.6.10 hbc83047_0 tornado 6.1 py39hbd71b63_0 conda-forge tqdm 4.62.3 pypi_0 pypi traitlets 5.0.5 py_0 conda-forge tweedledum 1.1.0 pypi_0 pypi typing-extensions 4.0.1 pypi_0 pypi tzdata 2020f h52ac0ba_0 urllib3 1.26.4 pyhd8ed1ab_0 conda-forge wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge webencodings 0.5.1 py_1 conda-forge websocket-client 1.2.1 pypi_0 pypi websockets 8.1 pypi_0 pypi wheel 0.36.2 pyhd3eb1b0_0 widgetsnbextension 3.5.1 pypi_0 pypi xz 5.2.5 h7b6447c_0 yfinance 0.1.55 pypi_0 pypi zeromq 4.3.4 
h2531618_0 zipp 3.4.1 pyhd8ed1ab_0 conda-forge zlib 1.2.11 h7b6447c_3 ```
1.0
Using a CZPowGate produces an error with CDR - Pre-Report Checklist -------------------- - [X] I am running the latest version of mitiq - [X] I checked to make sure that this bug has not already been reported Issue Description ----------------- Using a CZPowGate produces an error with CDR. How to Reproduce ---------------- ### Code Snippet ```python import warnings warnings.filterwarnings("ignore") import cirq import numpy as np from cirq import CZPowGate from mitiq import cdr, Observable, PauliString from mitiq.interface.mitiq_cirq import compute_density_matrix from cirq.circuits import InsertStrategy #Create circuit circuit1 = cirq.Circuit() t=1 #Number of qubits in the control register n=1 #Number of qubits in the target register #Create t control qubits control = [cirq.LineQubit(i) for i in range(t) ] #Create n target qubits target = [cirq.LineQubit(i) for i in range(t,t+n) ] crk = CZPowGate(exponent = -2/2**(2)) circuit1.append(crk(control[0], target[0]),strategy = InsertStrategy.NEW) print(circuit1) obs1 = Observable(PauliString("ZZ")) def simulate(circuit: cirq.Circuit) -> np.ndarray: return compute_density_matrix(circuit, noise_level=(0.0,)) cdr.execute_with_cdr( circuit1, compute_density_matrix, observable=obs1, simulator=simulate, ).real ``` ### Error Output ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-ca147fd1cc89> in <module> 31 return compute_density_matrix(circuit, noise_level=(0.0,)) 32 ---> 33 cdr.execute_with_cdr( 34 circuit1, 35 compute_density_matrix, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/cdr.py in execute_with_cdr(circuit, executor, observable, simulator, num_training_circuits, fraction_non_clifford, fit_function, num_fit_parameters, scale_factors, scale_noise, **kwargs) 131 132 # Generate training circuits. 
--> 133 training_circuits = generate_training_circuits( 134 circuit, 135 num_training_circuits, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/interface/conversions.py in qprogram_modifier(circuit, *args, **kwargs) 204 mitiq_circuit, input_circuit_type = convert_to_mitiq(circuit) 205 --> 206 modified_circuits: Iterable[Circuit] = cirq_circuit_modifier( 207 mitiq_circuit, *args, **kwargs 208 ) ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in generate_training_circuits(circuit, num_training_circuits, fraction_non_clifford, method_select, method_replace, random_state, **kwargs) 96 near_clifford_circuits = [] 97 for _ in range(num_training_circuits): ---> 98 new_ops = _map_to_near_clifford( 99 non_clifford_ops, 100 fraction_non_clifford, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in _map_to_near_clifford(non_clifford_ops, fraction_non_clifford, method_select, method_replace, random_state, **kwargs) 175 176 # Replace selected operations. --> 177 clifford_ops: Sequence[cirq.ops.Operation] = _replace( 178 [non_clifford_ops[i] for i in indices_of_selected_ops], 179 method_replace, ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in _replace(non_clifford_ops, method, sigma, random_state) 298 299 # TODO: Write function to replace the angles in a list of operations? --> 300 return [ 301 cirq.ops.rz(a).on(*q) 302 for (a, q) in zip( ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/mitiq/cdr/clifford_training_data.py in <listcomp>(.0) 299 # TODO: Write function to replace the angles in a list of operations? 
300 return [ --> 301 cirq.ops.rz(a).on(*q) 302 for (a, q) in zip( 303 clifford_angles, [op.qubits for op in non_clifford_ops], ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/raw_types.py in on(self, *qubits) 211 from cirq.ops import gate_operation 212 --> 213 return gate_operation.GateOperation(self, list(qubits)) 214 215 def wrap_in_linear_combination( ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/gate_operation.py in __init__(self, gate, qubits) 58 qubits: The qubits to operate on. 59 """ ---> 60 gate.validate_args(qubits) 61 self._gate = gate 62 self._qubits = tuple(qubits) ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/raw_types.py in validate_args(self, qubits) 200 ValueError: The gate can't be applied to the qubits. 201 """ --> 202 _validate_qid_shape(self, qubits) 203 204 def on(self, *qubits: Qid) -> 'Operation': ~/miniconda3/envs/mitiq/lib/python3.9/site-packages/cirq/ops/raw_types.py in _validate_qid_shape(val, qubits) 784 qid_shape = protocols.qid_shape(val) 785 if len(qubits) != len(qid_shape): --> 786 raise ValueError( 787 'Wrong number of qubits for <{!r}>. ' 788 'Expected {} qubits but got <{!r}>.'.format(val, len(qid_shape), qubits) ValueError: Wrong number of qubits for <cirq.rz(np.pi*1.5)>. Expected 1 qubits but got <[cirq.LineQubit(0), cirq.LineQubit(1)]>. ``` Environment Context ------------------- Use the `about()` function to summarize information on operating system, python version and dependencies. 
```python Mitiq: A Python toolkit for implementing error mitigation on quantum computers ============================================================================== Authored by: Mitiq team, 2020 & later (https://github.com/unitaryfund/mitiq) Mitiq Version: 0.11.1 Core Dependencies ----------------- Cirq Version: 0.10.0 NumPy Version: 1.20.1 SciPy Version: 1.7.3 Optional Dependencies --------------------- PyQuil Version: Not installed Qiskit Version: 0.29.0 Braket Version: Not installed Python Version: 3.9.2 Platform Info: Linux (x86_64) ``` Additional Python Environment Details (`pip freeze` or `conda list`): ``` # packages in environment at /home/amir/miniconda3/envs/mitiq: # # Name Version Build Channel _libgcc_mutex 0.1 main anyio 2.2.0 py39hf3d152e_0 conda-forge argon2-cffi 20.1.0 py39hbd71b63_2 conda-forge async_generator 1.10 py_0 conda-forge attrs 20.3.0 pyhd3deb0d_0 conda-forge babel 2.9.0 pyhd3deb0d_0 conda-forge backcall 0.2.0 pyh9f0ad1d_0 conda-forge backports 1.0 py_2 conda-forge backports.functools_lru_cache 1.6.3 pyhd8ed1ab_0 conda-forge bleach 3.3.0 pyh44b312d_0 conda-forge brotlipy 0.7.0 py39h38d8fee_1001 conda-forge ca-certificates 2020.12.5 ha878542_0 conda-forge cachetools 4.2.4 pypi_0 pypi certifi 2020.12.5 py39hf3d152e_1 conda-forge cffi 1.14.5 py39h261ae71_0 chardet 4.0.0 py39hf3d152e_1 conda-forge cirq 0.10.0 pypi_0 pypi cryptography 3.4.7 py39hbca0aa6_0 conda-forge cycler 0.10.0 pypi_0 pypi decorator 5.0.6 pyhd8ed1ab_0 conda-forge defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge dill 0.3.3 pypi_0 pypi dlx 1.0.4 pypi_0 pypi docplex 2.21.207 pypi_0 pypi entrypoints 0.3 pyhd8ed1ab_1003 conda-forge fastdtw 0.3.4 pypi_0 pypi fastjsonschema 2.15.0 pypi_0 pypi google-api-core 1.31.4 pypi_0 pypi google-auth 1.35.0 pypi_0 pypi googleapis-common-protos 1.54.0 pypi_0 pypi grpcio 1.42.0 pypi_0 pypi h5py 3.1.0 pypi_0 pypi idna 2.10 pyh9f0ad1d_0 conda-forge importlib-metadata 3.10.0 py39hf3d152e_0 conda-forge inflection 0.5.1 pypi_0 pypi ipykernel 5.5.3 
py39hef51801_0 conda-forge ipython 7.22.0 py39hef51801_0 conda-forge ipython_genutils 0.2.0 py_1 conda-forge ipywidgets 7.6.3 pypi_0 pypi jedi 0.18.0 py39hf3d152e_2 conda-forge jinja2 2.11.3 pyh44b312d_0 conda-forge joblib 1.0.1 pypi_0 pypi json5 0.9.5 pyh9f0ad1d_0 conda-forge jsonschema 3.2.0 pyhd8ed1ab_3 conda-forge jupyter-packaging 0.7.12 pyhd8ed1ab_0 conda-forge jupyter_client 6.1.12 pyhd8ed1ab_0 conda-forge jupyter_core 4.7.1 py39hf3d152e_0 conda-forge jupyter_server 1.6.0 py39hf3d152e_0 conda-forge jupyterlab 3.0.13 pyhd8ed1ab_0 conda-forge jupyterlab-widgets 1.0.0 pypi_0 pypi jupyterlab_pygments 0.1.2 pyh9f0ad1d_0 conda-forge jupyterlab_server 2.4.0 pyhd8ed1ab_0 conda-forge kiwisolver 1.3.1 pypi_0 pypi ld_impl_linux-64 2.33.1 h53a641e_7 libffi 3.3 he6710b0_2 libgcc-ng 9.1.0 hdf63c60_0 libsodium 1.0.18 h36c2ea0_1 conda-forge libstdcxx-ng 9.1.0 hdf63c60_0 lxml 4.6.3 pypi_0 pypi markupsafe 1.1.1 py39h38d8fee_2 conda-forge matplotlib 3.4.1 pypi_0 pypi mistune 0.8.4 py39hbd71b63_1002 conda-forge mitiq 0.11.1 pypi_0 pypi more-itertools 8.7.0 pypi_0 pypi mpmath 1.2.1 pypi_0 pypi multitasking 0.0.9 pypi_0 pypi nbclassic 0.2.7 pyhd8ed1ab_0 conda-forge nbclient 0.5.3 pyhd8ed1ab_0 conda-forge nbconvert 6.0.7 py39hf3d152e_3 conda-forge nbformat 5.1.3 pyhd8ed1ab_0 conda-forge ncurses 6.2 he6710b0_1 nest-asyncio 1.5.1 pyhd8ed1ab_0 conda-forge networkx 2.6.3 pypi_0 pypi notebook 6.3.0 py39hf3d152e_0 conda-forge ntlm-auth 1.5.0 pypi_0 pypi numpy 1.20.1 pypi_0 pypi openssl 1.1.1k h27cfd23_0 packaging 20.9 pyh44b312d_0 conda-forge pandas 1.2.3 pypi_0 pypi pandoc 2.12 h7f98852_0 conda-forge pandocfilters 1.4.2 py_1 conda-forge parso 0.8.2 pyhd8ed1ab_0 conda-forge pexpect 4.8.0 pyh9f0ad1d_2 conda-forge pickleshare 0.7.5 py_1003 conda-forge pillow 8.2.0 pypi_0 pypi pip 21.0.1 py39h06a4308_0 ply 3.11 pypi_0 pypi prometheus_client 0.10.1 pyhd8ed1ab_0 conda-forge prompt-toolkit 3.0.18 pyha770c72_0 conda-forge protobuf 3.13.0 pypi_0 pypi psutil 5.8.0 pypi_0 pypi ptyprocess 0.7.0 
pyhd3deb0d_0 conda-forge pulsemaker 0.1.1b0 dev_0 <develop> pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pybind11 2.6.2 pypi_0 pypi pycparser 2.20 pyh9f0ad1d_2 conda-forge pydot 1.4.2 pypi_0 pypi pygments 2.8.1 pyhd8ed1ab_0 conda-forge pylatexenc 2.10 pypi_0 pypi pyopenssl 20.0.1 pyhd8ed1ab_0 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pyrsistent 0.17.3 py39hbd71b63_1 conda-forge pysocks 1.7.1 py39hf3d152e_3 conda-forge python 3.9.2 hdb3f193_0 python-constraint 1.4.0 pypi_0 pypi python-dateutil 2.8.1 py_0 conda-forge python_abi 3.9 1_cp39 conda-forge pytz 2021.1 pyhd8ed1ab_0 conda-forge pyzmq 19.0.2 py39hb69f2a1_2 conda-forge qiskit 0.29.0 pypi_0 pypi qiskit-aer 0.8.2 pypi_0 pypi qiskit-aqua 0.9.4 pypi_0 pypi qiskit-ibmq-provider 0.16.0 pypi_0 pypi qiskit-ignis 0.6.0 pypi_0 pypi qiskit-terra 0.18.1 pypi_0 pypi qonduit 0.1.2b2 dev_0 <develop> quandl 3.6.0 pypi_0 pypi readline 8.1 h27cfd23_0 requests 2.25.1 pyhd3deb0d_0 conda-forge requests-ntlm 1.1.0 pypi_0 pypi retworkx 0.9.0 pypi_0 pypi rsa 4.8 pypi_0 pypi scikit-learn 0.24.1 pypi_0 pypi scipy 1.7.3 pypi_0 pypi seaborn 0.11.1 pypi_0 pypi send2trash 1.5.0 py_0 conda-forge setuptools 52.0.0 py39h06a4308_0 six 1.15.0 pyh9f0ad1d_0 conda-forge sniffio 1.2.0 py39hf3d152e_1 conda-forge sortedcontainers 2.4.0 pypi_0 pypi sqlite 3.35.4 hdfb4753_0 symengine 0.7.2 pypi_0 pypi sympy 1.7.1 pypi_0 pypi terminado 0.9.4 py39hf3d152e_0 conda-forge testpath 0.4.4 py_0 conda-forge threadpoolctl 2.1.0 pypi_0 pypi tk 8.6.10 hbc83047_0 tornado 6.1 py39hbd71b63_0 conda-forge tqdm 4.62.3 pypi_0 pypi traitlets 5.0.5 py_0 conda-forge tweedledum 1.1.0 pypi_0 pypi typing-extensions 4.0.1 pypi_0 pypi tzdata 2020f h52ac0ba_0 urllib3 1.26.4 pyhd8ed1ab_0 conda-forge wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge webencodings 0.5.1 py_1 conda-forge websocket-client 1.2.1 pypi_0 pypi websockets 8.1 pypi_0 pypi wheel 0.36.2 pyhd3eb1b0_0 widgetsnbextension 3.5.1 pypi_0 pypi xz 5.2.5 h7b6447c_0 yfinance 0.1.55 pypi_0 pypi zeromq 4.3.4 
h2531618_0 zipp 3.4.1 pyhd8ed1ab_0 conda-forge zlib 1.2.11 h7b6447c_3 ```
non_process
using a czpowgate produces an error with cdr pre report checklist i am running the latest version of mitiq i checked to make sure that this bug has not already been reported issue description using a czpowgate produces an error with cdr how to reproduce code snippet python import warnings warnings filterwarnings ignore import cirq import numpy as np from cirq import czpowgate from mitiq import cdr observable paulistring from mitiq interface mitiq cirq import compute density matrix from cirq circuits import insertstrategy create cirucit cirq circuit t number of qubits in the control register n number of qubits in the target register create t control qubits control create n target qubits target crk czpowgate exponent append crk control target strategy insertstrategy new print observable paulistring zz def simulate circuit cirq circuit np ndarray return compute density matrix circuit noise level cdr execute with cdr compute density matrix observable simulator simulate real error output valueerror traceback most recent call last in return compute density matrix circuit noise level cdr execute with cdr compute density matrix envs mitiq lib site packages mitiq cdr cdr py in execute with cdr circuit executor observable simulator num training circuits fraction non clifford fit function num fit parameters scale factors scale noise kwargs generate training circuits training circuits generate training circuits circuit num training circuits envs mitiq lib site packages mitiq interface conversions py in qprogram modifier circuit args kwargs mitiq circuit input circuit type convert to mitiq circuit modified circuits iterable cirq circuit modifier mitiq circuit args kwargs envs mitiq lib site packages mitiq cdr clifford training data py in generate training circuits circuit num training circuits fraction non clifford method select method replace random state kwargs near clifford circuits for in range num training circuits new ops map to near clifford non clifford ops fraction non 
clifford envs mitiq lib site packages mitiq cdr clifford training data py in map to near clifford non clifford ops fraction non clifford method select method replace random state kwargs replace selected operations clifford ops sequence replace for i in indices of selected ops method replace envs mitiq lib site packages mitiq cdr clifford training data py in replace non clifford ops method sigma random state todo write function to replace the angles in a list of operations return cirq ops rz a on q for a q in zip envs mitiq lib site packages mitiq cdr clifford training data py in todo write function to replace the angles in a list of operations return cirq ops rz a on q for a q in zip clifford angles envs mitiq lib site packages cirq ops raw types py in on self qubits from cirq ops import gate operation return gate operation gateoperation self list qubits def wrap in linear combination envs mitiq lib site packages cirq ops gate operation py in init self gate qubits qubits the qubits to operate on gate validate args qubits self gate gate self qubits tuple qubits envs mitiq lib site packages cirq ops raw types py in validate args self qubits valueerror the gate can t be applied to the qubits validate qid shape self qubits def on self qubits qid operation envs mitiq lib site packages cirq ops raw types py in validate qid shape val qubits qid shape protocols qid shape val if len qubits len qid shape raise valueerror wrong number of qubits for expected qubits but got format val len qid shape qubits valueerror wrong number of qubits for expected qubits but got environment context use the about function to summarize information on operating system python version and dependencies python mitiq a python toolkit for implementing error mitigation on quantum computers authored by mitiq team later mitiq version core dependencies cirq version numpy version scipy version optional dependencies pyquil version not installed qiskit version braket version not installed python version 
platform info linux additional python environment details pip freeze or conda list packages in environment at home amir envs mitiq name version build channel libgcc mutex main anyio conda forge cffi conda forge async generator py conda forge attrs conda forge babel conda forge backcall conda forge backports py conda forge backports functools lru cache conda forge bleach conda forge brotlipy conda forge ca certificates conda forge cachetools pypi pypi certifi conda forge cffi chardet conda forge cirq pypi pypi cryptography conda forge cycler pypi pypi decorator conda forge defusedxml conda forge dill pypi pypi dlx pypi pypi docplex pypi pypi entrypoints conda forge fastdtw pypi pypi fastjsonschema pypi pypi google api core pypi pypi google auth pypi pypi googleapis common protos pypi pypi grpcio pypi pypi pypi pypi idna conda forge importlib metadata conda forge inflection pypi pypi ipykernel conda forge ipython conda forge ipython genutils py conda forge ipywidgets pypi pypi jedi conda forge conda forge joblib pypi pypi conda forge jsonschema conda forge jupyter packaging conda forge jupyter client conda forge jupyter core conda forge jupyter server conda forge jupyterlab conda forge jupyterlab widgets pypi pypi jupyterlab pygments conda forge jupyterlab server conda forge kiwisolver pypi pypi ld impl linux libffi libgcc ng libsodium conda forge libstdcxx ng lxml pypi pypi markupsafe conda forge matplotlib pypi pypi mistune conda forge mitiq pypi pypi more itertools pypi pypi mpmath pypi pypi multitasking pypi pypi nbclassic conda forge nbclient conda forge nbconvert conda forge nbformat conda forge ncurses nest asyncio conda forge networkx pypi pypi notebook conda forge ntlm auth pypi pypi numpy pypi pypi openssl packaging conda forge pandas pypi pypi pandoc conda forge pandocfilters py conda forge parso conda forge pexpect conda forge pickleshare py conda forge pillow pypi pypi pip ply pypi pypi prometheus client conda forge prompt toolkit conda forge protobuf 
pypi pypi psutil pypi pypi ptyprocess conda forge pulsemaker dev pypi pypi modules pypi pypi pypi pypi pycparser conda forge pydot pypi pypi pygments conda forge pylatexenc pypi pypi pyopenssl conda forge pyparsing conda forge pyrsistent conda forge pysocks conda forge python python constraint pypi pypi python dateutil py conda forge python abi conda forge pytz conda forge pyzmq conda forge qiskit pypi pypi qiskit aer pypi pypi qiskit aqua pypi pypi qiskit ibmq provider pypi pypi qiskit ignis pypi pypi qiskit terra pypi pypi qonduit dev quandl pypi pypi readline requests conda forge requests ntlm pypi pypi retworkx pypi pypi rsa pypi pypi scikit learn pypi pypi scipy pypi pypi seaborn pypi pypi py conda forge setuptools six conda forge sniffio conda forge sortedcontainers pypi pypi sqlite symengine pypi pypi sympy pypi pypi terminado conda forge testpath py conda forge threadpoolctl pypi pypi tk tornado conda forge tqdm pypi pypi traitlets py conda forge tweedledum pypi pypi typing extensions pypi pypi tzdata conda forge wcwidth conda forge webencodings py conda forge websocket client pypi pypi websockets pypi pypi wheel widgetsnbextension pypi pypi xz yfinance pypi pypi zeromq zipp conda forge zlib
0
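
The ValueError in the Mitiq record above arises because CDR's near-Clifford substitution assumes every non-Clifford operation is a single-qubit rotation, while `CZPowGate` acts on two qubits, so `cirq.rz(a).on(*q)` receives two qubits. A minimal, self-contained sketch of that failure mode with an arity guard — the helper below is hypothetical and is not Mitiq's actual `_replace` implementation:

```python
# Hypothetical sketch of the failing pattern: substituting rz for each
# non-Clifford op while assuming every op acts on exactly one qubit.
# Ops are modeled as (gate_name, qubits) tuples instead of cirq objects.

def replace_with_rz(non_clifford_ops, clifford_angles):
    """Replace each single-qubit op with an ("rz", angle, qubits) stand-in.

    Raises ValueError for multi-qubit ops, mirroring cirq's
    _validate_qid_shape check; a real fix would decompose or skip such
    ops (e.g. CZPowGate) before this substitution step.
    """
    replaced = []
    for angle, (_name, qubits) in zip(clifford_angles, non_clifford_ops):
        if len(qubits) != 1:  # rz is a one-qubit gate
            raise ValueError(
                f"Wrong number of qubits for rz: expected 1, got {len(qubits)}"
            )
        replaced.append(("rz", angle, qubits))
    return replaced

ops = [("rz", (0,)), ("czpow", (0, 1))]  # the second op triggers the bug
angles = [1.5, -0.5]
try:
    replace_with_rz(ops, angles)
except ValueError as e:
    print(e)  # Wrong number of qubits for rz: expected 1, got 2
```

Filtering `ops` down to single-qubit entries before calling the helper avoids the error, which is essentially what the reporter's circuit would need from CDR's training-circuit generation.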
318,493
23,723,996,807
IssuesEvent
2022-08-30 17:47:42
opendatahub-io/odh-dashboard
https://api.github.com/repos/opendatahub-io/odh-dashboard
closed
[Feature Request]: Move CONTRIBUTING file to root
kind/documentation kind/enhancement infrastructure priority/low
### Feature description Right now, the CONTRIBUTING file is under the `/docs` path. We should move that file into the root just to follow the standard structure of documenting a project. This might be a subtask of #195 ### Describe alternatives you've considered _No response_ ### Anything else? _No response_
1.0
[Feature Request]: Move CONTRIBUTING file to root - ### Feature description Right now, the CONTRIBUTING file is under the `/docs` path. We should move that file into the root just to follow the standard structure of documenting a project. This might be a subtask of #195 ### Describe alternatives you've considered _No response_ ### Anything else? _No response_
non_process
move contributing file to root feature description right now the contributing file is under docs path we should move that file into the root just to follow the standard structure of documenting a project this might be a subtask of describe alternatives you ve considered no response anything else no response
0
54,150
13,900,889,246
IssuesEvent
2020-10-20 01:33:12
gate5/angular
https://api.github.com/repos/gate5/angular
opened
CVE-2019-20920 (High) detected in handlebars-4.0.12.tgz, handlebars-4.0.11.tgz
security vulnerability
## CVE-2019-20920 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>handlebars-4.0.12.tgz</b>, <b>handlebars-4.0.11.tgz</b></p></summary> <p> <details><summary><b>handlebars-4.0.12.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz</a></p> <p>Path to dependency file: angular/integration/cli-hello-world-ivy-minimal/yarn.lock</p> <p>Path to vulnerable library: angular/integration/cli-hello-world-ivy-minimal/yarn.lock,angular/integration/cli-hello-world-ivy-compat/yarn.lock</p> <p> Dependency Hierarchy: - karma-coverage-istanbul-reporter-2.0.4.tgz (Root Library) - istanbul-api-2.0.6.tgz - istanbul-reports-2.0.1.tgz - :x: **handlebars-4.0.12.tgz** (Vulnerable Library) </details> <details><summary><b>handlebars-4.0.11.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p> <p>Path to dependency file: angular/integration/cli-hello-world/yarn.lock</p> <p>Path to vulnerable library: angular/integration/cli-hello-world/yarn.lock,angular/aio/yarn.lock,angular/yarn.lock</p> <p> Dependency Hierarchy: - karma-coverage-istanbul-reporter-1.4.1.tgz (Root Library) - istanbul-api-1.2.1.tgz - istanbul-reports-1.1.3.tgz - :x: **handlebars-4.0.11.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/gate5/angular/commit/cf1f1c0344fa01406f61ff7437a72714be39b47e">cf1f1c0344fa01406f61ff7437a72714be39b47e</a></p> </p> </details> <p></p> 
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS). <p>Publish Date: 2020-09-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920</a></p> <p>Release Date: 2020-09-30</p> <p>Fix Resolution: v3.0.8, v4.5.3</p> </p> </details> <p></p>
True
CVE-2019-20920 (High) detected in handlebars-4.0.12.tgz, handlebars-4.0.11.tgz - ## CVE-2019-20920 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>handlebars-4.0.12.tgz</b>, <b>handlebars-4.0.11.tgz</b></p></summary> <p> <details><summary><b>handlebars-4.0.12.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz</a></p> <p>Path to dependency file: angular/integration/cli-hello-world-ivy-minimal/yarn.lock</p> <p>Path to vulnerable library: angular/integration/cli-hello-world-ivy-minimal/yarn.lock,angular/integration/cli-hello-world-ivy-compat/yarn.lock</p> <p> Dependency Hierarchy: - karma-coverage-istanbul-reporter-2.0.4.tgz (Root Library) - istanbul-api-2.0.6.tgz - istanbul-reports-2.0.1.tgz - :x: **handlebars-4.0.12.tgz** (Vulnerable Library) </details> <details><summary><b>handlebars-4.0.11.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p> <p>Path to dependency file: angular/integration/cli-hello-world/yarn.lock</p> <p>Path to vulnerable library: angular/integration/cli-hello-world/yarn.lock,angular/aio/yarn.lock,angular/yarn.lock</p> <p> Dependency Hierarchy: - karma-coverage-istanbul-reporter-1.4.1.tgz (Root Library) - istanbul-api-1.2.1.tgz - istanbul-reports-1.1.3.tgz - :x: **handlebars-4.0.11.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a 
href="https://github.com/gate5/angular/commit/cf1f1c0344fa01406f61ff7437a72714be39b47e">cf1f1c0344fa01406f61ff7437a72714be39b47e</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS). <p>Publish Date: 2020-09-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920</a></p> <p>Release Date: 2020-09-30</p> <p>Fix Resolution: v3.0.8, v4.5.3</p> </p> </details> <p></p>
non_process
cve high detected in handlebars tgz handlebars tgz cve high severity vulnerability vulnerable libraries handlebars tgz handlebars tgz handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file angular integration cli hello world ivy minimal yarn lock path to vulnerable library angular integration cli hello world ivy minimal yarn lock angular integration cli hello world ivy compat yarn lock dependency hierarchy karma coverage istanbul reporter tgz root library istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file angular integration cli hello world yarn lock path to vulnerable library angular integration cli hello world yarn lock angular aio yarn lock angular yarn lock dependency hierarchy karma coverage istanbul reporter tgz root library istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript this can be used to run arbitrary code on a server processing handlebars templates or in a victim s browser effectively serving as xss publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
605,633
18,738,123,368
IssuesEvent
2021-11-04 10:17:29
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
closed
Current lang.error `RetryManager ` shouldRetry() function argument type is different from the spec.
Type/Bug Priority/Blocker Team/CompilerFE Lang/LangLib
**Description:** The current implementation has the following signature ```ballerina public type RetryManager object { public function shouldRetry(error? e) returns boolean; }; ``` But Spec(https://ballerina.io/ballerina-spec/spec.html#lang.error) says it should be ```ballerina public type RetryManager object { public function shouldRetry(error e) returns boolean; }; ``` Affected version: slbeta3
1.0
Current lang.error `RetryManager ` shouldRetry() function argument type is different from the spec. - **Description:** The current implementation has the following signature ```ballerina public type RetryManager object { public function shouldRetry(error? e) returns boolean; }; ``` But Spec(https://ballerina.io/ballerina-spec/spec.html#lang.error) says it should be ```ballerina public type RetryManager object { public function shouldRetry(error e) returns boolean; }; ``` Affected version: slbeta3
non_process
current lang error retrymanager shouldretry function argument type is different from the spec description the current implementation has the following signature ballerina public type retrymanager object public function shouldretry error e returns boolean but spec says it should be ballerina public type retrymanager object public function shouldretry error e returns boolean affected version
0
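Each record above carries both the raw `title - body` text and a lowercased, de-punctuated copy (the cleaned-text column). A minimal Python sketch of that kind of normalization, assuming the pipeline simply lowercases, drops URLs, and keeps only letters — the real preprocessing evidently differs in details (for instance, it drops alphanumeric tokens like `slbeta3` entirely):

```python
import re

def normalize(text: str) -> str:
    """Approximate the cleaned-text column: lowercase, drop URLs,
    replace everything but letters with spaces, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # remove links
    text = re.sub(r"[^a-z\s]", " ", text)      # punctuation/digits -> space
    return " ".join(text.split())              # collapse whitespace runs
```

Under this sketch the Ballerina record's title reduces to "current lang error retrymanager shouldretry function argument type is different from the spec", matching the start of that record's cleaned column.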
19,975
26,457,147,051
IssuesEvent
2023-01-16 15:04:12
vnphanquang/svelte-put
https://api.github.com/repos/vnphanquang/svelte-put
closed
[toc] Performance Improvements
scope:toc scope:preprocess-auto-slug type:refactor type:perf
## Context As `toc@2.0.0` was released, almost everything was re-implemented from the ground up. This issue lists potential performance improvement ## Todos - [x] reduce to use only one `IntersectionObserver` instead of one for each matching elements <- https://github.com/vnphanquang/svelte-put/commit/60c2029db21ba6b6a478bff25fd4a8d39c07038f - [x] run observe operations async <- https://github.com/vnphanquang/svelte-put/commit/a8cd5c2dce7af3aa382a38e69d3c357381e7087b - [x] use simple callback instead of store for tracking active element <- https://github.com/vnphanquang/svelte-put/commit/d04d44e010cbe74f63c5304451524902c68036c7
1.0
[toc] Performance Improvements - ## Context As `toc@2.0.0` was released, almost everything was re-implemented from the ground up. This issue lists potential performance improvement ## Todos - [x] reduce to use only one `IntersectionObserver` instead of one for each matching elements <- https://github.com/vnphanquang/svelte-put/commit/60c2029db21ba6b6a478bff25fd4a8d39c07038f - [x] run observe operations async <- https://github.com/vnphanquang/svelte-put/commit/a8cd5c2dce7af3aa382a38e69d3c357381e7087b - [x] use simple callback instead of store for tracking active element <- https://github.com/vnphanquang/svelte-put/commit/d04d44e010cbe74f63c5304451524902c68036c7
process
performance improvements context as toc was released almost everything was re implemented from the ground up this issue lists potential performance improvement todos reduce to use only one intersectionobserver instead of one for each matching elements run observe operations async use simple callback instead of store for tracking active element
1
428,325
12,406,569,290
IssuesEvent
2020-05-21 19:22:52
inverse-inc/packetfence
https://api.github.com/repos/inverse-inc/packetfence
closed
Issue viewing interfaces
Priority: Critical Type: Bug
**Describe the bug** The interfaces are empty when viewing them in the admin **To Reproduce** 1. I got the following error: ![image](https://user-images.githubusercontent.com/3857942/82590954-95c96b00-9b6c-11ea-86ec-2b8c1d194ae4.png) 2. And the following is the API response which seems good at first hand: ``` { "items": [ { "additional_listening_daemons": [], "address": "172.16.201.1/24", "coa": "disabled", "dhcpd_enabled": "enabled", "dns": "172.16.201.1", "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192.201", "ifindex": "5", "ipaddress": "172.16.201.1", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": "ens192", "name": "ens192", "nat_enabled": "disabled", "netflow_accounting_enabled": null, "netmask": "255.255.255.0", "network": "172.16.201.0", "network_iseditable": true, "networks": [], "reg_network": null, "split_network": "disabled", "type": "vlan-registration", "vip": null, "vlan": "201" }, { "additional_listening_daemons": [ "portal", "radius", "dhcp", "dns" ], "address": "172.16.10.35/24", "coa": null, "dhcpd_enabled": null, "dns": null, "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192.10", "ifindex": "4", "ipaddress": "172.16.10.35", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": "ens192", "name": "ens192", "nat_enabled": null, "netmask": "255.255.255.0", "network": "172.16.10.0", "network_iseditable": false, "networks": [ "172.16.202.0", "172.16.220.0" ], "reg_network": null, "split_network": null, "type": "management", "vip": null, "vlan": "10" }, { "additional_listening_daemons": [], "coa": null, "dhcpd_enabled": null, "dns": null, "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192", "ifindex": "2", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": null, "name": "ens192", "nat_enabled": null, "network": null, "network_iseditable": false, "networks": [], "reg_network": null, "split_network": null, "type": "none", "vip": null, 
"vlan": null }, { "additional_listening_daemons": [], "address": "172.16.224.1/24", "coa": null, "dhcpd_enabled": null, "dns": null, "high_availability": 0, "hwaddr": "00:00:00:00:00:e8", "id": "ens224", "ifindex": "3", "ipaddress": "172.16.224.1", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": null, "name": "ens224", "nat_enabled": null, "netmask": "255.255.255.0", "network": "172.16.224.0", "network_iseditable": false, "networks": [], "reg_network": null, "split_network": null, "type": "dhcp-listener", "vip": null, "vlan": null }, { "additional_listening_daemons": [], "address": "172.16.210.1/24", "coa": null, "dhcpd_enabled": "enabled", "dns": "172.16.210.1", "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192.210", "ifindex": "6", "ipaddress": "172.16.210.1", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": "ens192", "name": "ens192", "nat_enabled": "disabled", "netflow_accounting_enabled": null, "netmask": "255.255.255.0", "network": "172.16.210.0", "network_iseditable": true, "networks": [], "reg_network": null, "split_network": "disabled", "type": "vlan-isolation", "vip": null, "vlan": "210" } ], "status": 200 } ```
1.0
Issue viewing interfaces - **Describe the bug** The interfaces are empty when viewing them in the admin **To Reproduce** 1. I got the following error: ![image](https://user-images.githubusercontent.com/3857942/82590954-95c96b00-9b6c-11ea-86ec-2b8c1d194ae4.png) 2. And the following is the API response which seems good at first hand: ``` { "items": [ { "additional_listening_daemons": [], "address": "172.16.201.1/24", "coa": "disabled", "dhcpd_enabled": "enabled", "dns": "172.16.201.1", "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192.201", "ifindex": "5", "ipaddress": "172.16.201.1", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": "ens192", "name": "ens192", "nat_enabled": "disabled", "netflow_accounting_enabled": null, "netmask": "255.255.255.0", "network": "172.16.201.0", "network_iseditable": true, "networks": [], "reg_network": null, "split_network": "disabled", "type": "vlan-registration", "vip": null, "vlan": "201" }, { "additional_listening_daemons": [ "portal", "radius", "dhcp", "dns" ], "address": "172.16.10.35/24", "coa": null, "dhcpd_enabled": null, "dns": null, "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192.10", "ifindex": "4", "ipaddress": "172.16.10.35", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": "ens192", "name": "ens192", "nat_enabled": null, "netmask": "255.255.255.0", "network": "172.16.10.0", "network_iseditable": false, "networks": [ "172.16.202.0", "172.16.220.0" ], "reg_network": null, "split_network": null, "type": "management", "vip": null, "vlan": "10" }, { "additional_listening_daemons": [], "coa": null, "dhcpd_enabled": null, "dns": null, "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192", "ifindex": "2", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": null, "name": "ens192", "nat_enabled": null, "network": null, "network_iseditable": false, "networks": [], "reg_network": null, "split_network": null, 
"type": "none", "vip": null, "vlan": null }, { "additional_listening_daemons": [], "address": "172.16.224.1/24", "coa": null, "dhcpd_enabled": null, "dns": null, "high_availability": 0, "hwaddr": "00:00:00:00:00:e8", "id": "ens224", "ifindex": "3", "ipaddress": "172.16.224.1", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": null, "name": "ens224", "nat_enabled": null, "netmask": "255.255.255.0", "network": "172.16.224.0", "network_iseditable": false, "networks": [], "reg_network": null, "split_network": null, "type": "dhcp-listener", "vip": null, "vlan": null }, { "additional_listening_daemons": [], "address": "172.16.210.1/24", "coa": null, "dhcpd_enabled": "enabled", "dns": "172.16.210.1", "high_availability": 0, "hwaddr": "00:00:00:00:00:b1", "id": "ens192.210", "ifindex": "6", "ipaddress": "172.16.210.1", "ipv6_address": null, "ipv6_prefix": null, "is_running": true, "master": "ens192", "name": "ens192", "nat_enabled": "disabled", "netflow_accounting_enabled": null, "netmask": "255.255.255.0", "network": "172.16.210.0", "network_iseditable": true, "networks": [], "reg_network": null, "split_network": "disabled", "type": "vlan-isolation", "vip": null, "vlan": "210" } ], "status": 200 } ```
non_process
issue viewing interfaces describe the bug the interfaces are empty when viewing them in the admin to reproduce i got the following error and the following is the api response which seems good at first hand items additional listening daemons address coa disabled dhcpd enabled enabled dns high availability hwaddr id ifindex ipaddress address null prefix null is running true master name nat enabled disabled netflow accounting enabled null netmask network network iseditable true networks reg network null split network disabled type vlan registration vip null vlan additional listening daemons portal radius dhcp dns address coa null dhcpd enabled null dns null high availability hwaddr id ifindex ipaddress address null prefix null is running true master name nat enabled null netmask network network iseditable false networks reg network null split network null type management vip null vlan additional listening daemons coa null dhcpd enabled null dns null high availability hwaddr id ifindex address null prefix null is running true master null name nat enabled null network null network iseditable false networks reg network null split network null type none vip null vlan null additional listening daemons address coa null dhcpd enabled null dns null high availability hwaddr id ifindex ipaddress address null prefix null is running true master null name nat enabled null netmask network network iseditable false networks reg network null split network null type dhcp listener vip null vlan null additional listening daemons address coa null dhcpd enabled enabled dns high availability hwaddr id ifindex ipaddress address null prefix null is running true master name nat enabled disabled netflow accounting enabled null netmask network network iseditable true networks reg network null split network disabled type vlan isolation vip null vlan status
0
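The API payload quoted in the record above is a plain `items` list of interface objects, so a check like "which interface is the management one" reduces to a filter over `items`. A minimal sketch (the helper name is ours, not part of PacketFence):

```python
def interfaces_of_type(payload: dict, wanted: str) -> list:
    """Return the ids of interfaces whose "type" field matches `wanted`."""
    return [item["id"]
            for item in payload.get("items", [])
            if item.get("type") == wanted]
```

Against the quoted response, `interfaces_of_type(payload, "management")` would yield `["ens192.10"]`, since that is the only item typed `management`.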
56,870
6,530,731,578
IssuesEvent
2017-08-30 16:02:27
mautic/mautic
https://api.github.com/repos/mautic/mautic
closed
Focus not working / HTTP 500 viewpixel.gif
Bug Ready To Test
What type of report is this: | Q | A | ---| --- | Bug report? | Y | Feature request? | | Enhancement? | ## Description: If you are using a focus item there is a call to viewpixel.gif which results in a HTTP 500 and therefore the focus item is not working. ## If a bug: | Q | A | --- | --- | Mautic version | v2.7.1 | PHP version | PHP Version 5.6.30 ### Steps to reproduce: 1. Use a focus item 2. GET /focus/5/viewpixel.gif 500 ()
1.0
Focus not working / HTTP 500 viewpixel.gif - What type of report is this: | Q | A | ---| --- | Bug report? | Y | Feature request? | | Enhancement? | ## Description: If you are using a focus item there is a call to viewpixel.gif which results in a HTTP 500 and therefore the focus item is not working. ## If a bug: | Q | A | --- | --- | Mautic version | v2.7.1 | PHP version | PHP Version 5.6.30 ### Steps to reproduce: 1. Use a focus item 2. GET /focus/5/viewpixel.gif 500 ()
non_process
focus not working http viewpixel gif what type of report is this q a bug report y feature request enhancement description if you are using a focus item there is a call to viewpixel gif which results in a http and therefore the focus item is not working if a bug q a mautic version php version php version steps to reproduce use a focus item get focus viewpixel gif
0
2,850
5,809,953,531
IssuesEvent
2017-05-04 14:29:57
Hurence/logisland
https://api.github.com/repos/Hurence/logisland
closed
correct debug messages in pcapParser processor
cyber-security processor
# Expected behavior and actual behavior. In the current implementation, debug messages are always displayed regardless of the setting of the processor "debug" variable # Steps to reproduce the problem. # Specifications like the version of the project, operating system, or hardware.
1.0
correct debug messages in pcapParser processor - # Expected behavior and actual behavior. In the current implementation, debug messages are always displayed regardless of the setting of the processor "debug" variable # Steps to reproduce the problem. # Specifications like the version of the project, operating system, or hardware.
process
correct debug messages in pcapparser processor expected behavior and actual behavior in the current implementation debug messages are always displayed regardless of the setting of the processor debug variable steps to reproduce the problem specifications like the version of the project operating system or hardware
1
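Across these records the string label and the trailing numeric column track each other: `process` rows carry 1 and `non_process` rows carry 0. A one-line mapping sketch, assuming those are the only two label values in the dataset:

```python
def to_binary(label: str) -> int:
    """Map the dataset's string label to its numeric binary column."""
    if label == "process":
        return 1
    if label == "non_process":
        return 0
    raise ValueError(f"unexpected label: {label!r}")
```

For example, the pcapParser record above is labeled `process` and ends with 1, while the Mautic record is `non_process` and ends with 0.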
22,083
30,605,963,353
IssuesEvent
2023-07-23 02:00:08
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 21 Jul 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Ethosight: A Joint-Embedding Based System for Nuanced Perception Using Contextual Label Affinity Metric and Reasoning Based Iterative Learning - **Authors:** Hugo Latapie, Kristinn R. Thorisson, Shan Yu, Vahagn Petrosyan, Patrick Hammer, Pei Wang, Brandon Kynoch, Hanning Chen, Tangrui Li - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2307.10577 - **Pdf link:** https://arxiv.org/pdf/2307.10577 - **Abstract** Traditional computer vision models often require extensive manual effort for data acquisition and validation, particularly when detecting subtle behavioral nuances or events. The difficulty in distinguishing routine behaviors from potential risks in real-world applications, like differentiating routine shopping from potential shoplifting, further complicates the process. We present Ethosight, a novel zero-shot computer vision algorithm. Ethosight eradicates the need for pre-existing symbolic knowledge, initiating from a clean slate based on user requirements and semantic knowledge of interest. Using localized label affinity calculations and a reasoning-guided iterative learning loop, Ethosight infers scene details and iteratively refines the label set. Reasoning mechanisms can be derived from large language models like GPT4, symbolic reasoners like OpenNARS, or hybrid systems. Ethosight further capitalizes on the capabilities of a pre-trained multi-modal model, ImageBind, generating accurate semantic knowledge of images within a few cycles. It successfully captures both explicit and nuanced elements efficiently. We also introduce the implementation of Korzybski's "time-binding" concept in machines, which allows for generational learning and knowledge sharing across deployments. Our evaluations demonstrate Ethosight's efficacy across 40 complex use cases. 
It has exhibited an exceptional ability to discern new areas of interest, consistently generating high-affinity scores within the top five labels from a set of a thousand. Tests conducted across diverse environments attest to Ethosight's robust performance. Detailed results and case studies within the main body of this paper and an appendix underscore a promising trajectory towards enhancing the adaptability and resilience of computer vision models in detecting and extracting subtle and nuanced behaviors. ### Event Blob Tracking: An Asynchronous Real-Time Algorithm - **Authors:** Ziwei Wang, Timothy Molloy, Pieter van Goor, Robert Mahony - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.10593 - **Pdf link:** https://arxiv.org/pdf/2307.10593 - **Abstract** Event-based cameras have become increasingly popular for tracking fast-moving objects due to their high temporal resolution, low latency, and high dynamic range. In this paper, we propose a novel algorithm for tracking event blobs using raw events asynchronously in real time. We introduce the concept of an event blob as a spatio-temporal likelihood of event occurrence where the conditional spatial likelihood is blob-like. Many real-world objects generate event blob data, for example, flickering LEDs such as car headlights or any small foreground object moving against a static or slowly varying background. The proposed algorithm uses a nearest neighbour classifier with a dynamic threshold criteria for data association coupled with a Kalman filter to track the event blob state. Our algorithm achieves highly accurate tracking and event blob shape estimation even under challenging lighting conditions and high-speed motions. 
The microsecond time resolution achieved means that the filter output can be used to derive secondary information such as time-to-contact or range estimation, that will enable applications to real-world problems such as collision avoidance in autonomous driving. ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### HRFNet: High-Resolution Forgery Network for Localizing Satellite Image Manipulation - **Authors:** Fahim Faisal Niloy, Kishor Kumar Bhaumik, Simon S. Woo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.11052 - **Pdf link:** https://arxiv.org/pdf/2307.11052 - **Abstract** Existing high-resolution satellite image forgery localization methods rely on patch-based or downsampling-based training. Both of these training methods have major drawbacks, such as inaccurate boundaries between pristine and forged regions, the generation of unwanted artifacts, etc. To tackle the aforementioned challenges, inspired by the high-resolution image segmentation literature, we propose a novel model called HRFNet to enable satellite image forgery localization effectively. Specifically, equipped with shallow and deep branches, our model can successfully integrate RGB and resampling features in both global and local manners to localize forgery more accurately. We perform various experiments to demonstrate that our method achieves the best performance, while the memory requirement and processing speed are not compromised compared to existing methods. 
## Keyword: ISP ### Tapestry of Time and Actions: Modeling Human Activity Sequences using Temporal Point Process Flows - **Authors:** Vinayak Gupta, Srikanta Bedathur - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2307.10305 - **Pdf link:** https://arxiv.org/pdf/2307.10305 - **Abstract** Human beings always engage in a vast range of activities and tasks that demonstrate their ability to adapt to different scenarios. Any human activity can be represented as a temporal sequence of actions performed to achieve a certain goal. Unlike the time series datasets extracted from electronics or machines, these action sequences are highly disparate in their nature -- the time to finish a sequence of actions can vary between different persons. Therefore, understanding the dynamics of these sequences is essential for many downstream tasks such as activity length prediction, goal prediction, next action recommendation, etc. Existing neural network-based approaches that learn a continuous-time activity sequence (or CTAS) are limited to the presence of only visual data or are designed specifically for a particular task, i.e., limited to next action or goal prediction. In this paper, we present ProActive, a neural marked temporal point process (MTPP) framework for modeling the continuous-time distribution of actions in an activity sequence while simultaneously addressing three high-impact problems -- next action prediction, sequence-goal prediction, and end-to-end sequence generation. Specifically, we utilize a self-attention module with temporal normalizing flows to model the influence and the inter-arrival times between actions in a sequence. In addition, we propose a novel addition over the ProActive model that can handle variations in the order of actions, i.e., different methods of achieving a given goal. 
We demonstrate that this variant can learn the order in which the person or actor prefers to do their actions. Extensive experiments on sequences derived from three activity recognition datasets show the significant accuracy boost of ProActive over the state-of-the-art in terms of action and goal prediction, and the first-ever application of end-to-end action sequence generation. ### Exploring Effective Priors and Efficient Models for Weakly-Supervised Change Detection - **Authors:** Zhenghui Zhao, Lixiang Ru, Chen Wu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.10853 - **Pdf link:** https://arxiv.org/pdf/2307.10853 - **Abstract** Weakly-supervised change detection (WSCD) aims to detect pixel-level changes with only image-level annotations. Owing to its label efficiency, WSCD is drawing increasing attention recently. However, current WSCD methods often encounter the challenge of change missing and fabricating, i.e., the inconsistency between image-level annotations and pixel-level predictions. Specifically, change missing refer to the situation that the WSCD model fails to predict any changed pixels, even though the image-level label indicates changed, and vice versa for change fabricating. To address this challenge, in this work, we leverage global-scale and local-scale priors in WSCD and propose two components: a Dilated Prior (DP) decoder and a Label Gated (LG) constraint. The DP decoder decodes samples with the changed image-level label, skips samples with the unchanged label, and replaces them with an all-unchanged pixel-level label. The LG constraint is derived from the correspondence between changed representations and image-level labels, penalizing the model when it mispredicts the change status. Additionally, we develop TransWCD, a simple yet powerful transformer-based model, showcasing the potential of weakly-supervised learning in change detection. 
By integrating the DP decoder and LG constraint into TransWCD, we form TransWCD-DL. Our proposed TransWCD and TransWCD-DL achieve significant +6.33% and +9.55% F1 score improvements over the state-of-the-art methods on the WHU-CD dataset, respectively. Some performance metrics even exceed several fully-supervised change detection (FSCD) competitors. Code will be available at https://github.com/zhenghuizhao/TransWCD. ### OCTraN: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios - **Authors:** Aditya Nalgunda Ganesh, Dhruval Pobbathi Badrinath, Harshith Mohan Kumar, Priya SS, Surabhi Narayan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.10934 - **Pdf link:** https://arxiv.org/pdf/2307.10934 - **Abstract** Modern approaches for vision-centric environment perception for autonomous navigation make extensive use of self-supervised monocular depth estimation algorithms that output disparity maps. However, when this disparity map is projected onto 3D space, the errors in disparity are magnified, resulting in a depth estimation error that increases quadratically as the distance from the camera increases. Though Light Detection and Ranging (LiDAR) can solve this issue, it is expensive and not feasible for many applications. To address the challenge of accurate ranging with low-cost sensors, we propose, OCTraN, a transformer architecture that uses iterative-attention to convert 2D image features into 3D occupancy features and makes use of convolution and transpose convolution to efficiently operate on spatial information. We also develop a self-supervised training pipeline to generalize the model to any scene by eliminating the need for LiDAR ground truth by substituting it with pseudo-ground truth labels obtained from boosted monocular depth estimation. 
## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Mitigating Viewer Impact from Disturbing Imagery using AI Filters: A User-Study - **Authors:** Ioannis Sarridis, Jochen Spangenberg, Olga Papadopoulou, Symeon Papadopoulos - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.10334 - **Pdf link:** https://arxiv.org/pdf/2307.10334 - **Abstract** Exposure to disturbing imagery can significantly impact individuals, especially professionals who encounter such content as part of their work. This paper presents a user study, involving 107 participants, predominantly journalists and human rights investigators, that explores the capability of Artificial Intelligence (AI)-based image filters to potentially mitigate the emotional impact of viewing such disturbing content. We tested five different filter styles, both traditional (Blurring and Partial Blurring) and AI-based (Drawing, Colored Drawing, and Painting), and measured their effectiveness in terms of conveying image information while reducing emotional distress. Our findings suggest that the AI-based Drawing style filter demonstrates the best performance, offering a promising solution for reducing negative feelings (-30.38%) while preserving the interpretability of the image (97.19%). Despite the requirement for many professionals to eventually inspect the original images, participants suggested potential strategies for integrating AI filters into their workflow, such as using AI filters as an initial, preparatory step before viewing the original image. Overall, this paper contributes to the development of a more ethically considerate and effective visual environment for professionals routinely engaging with potentially disturbing imagery. 
### A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset - **Authors:** Zahra Gharaee, ZeMing Gong, Nicholas Pellegrino, Iuliia Zarubiieva, Joakim Bruslund Haurum, Scott C. Lowe, Jaclyn T.A. McKeown, Chris C.Y. Ho, Joschka McLeod, Yi-Yun C Wei, Jireh Agda, Sujeevan Ratnasingham, Dirk Steinke, Angel X. Chang, Graham W. Taylor, Paul Fieguth - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2307.10455 - **Pdf link:** https://arxiv.org/pdf/2307.10455 - **Abstract** In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is taxonomically classified by an expert, and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers, which are genetically-based proxies for species classification. This paper presents a curated million-image dataset, primarily to train computer-vision models capable of providing image-based taxonomic assessment, however, the dataset also presents compelling characteristics, the study of which would be of interest to the broader machine learning community. Driven by the biological nature inherent to the dataset, a characteristic long-tailed class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is a hierarchical classification scheme, presenting a highly fine-grained classification problem at lower levels. Beyond spurring interest in biodiversity research within the machine learning community, progress on creating an image-based taxonomic classifier will also further the ultimate goal of all BIOSCAN research: to lay the foundation for a comprehensive survey of global biodiversity. This paper introduces the dataset and explores the classification task through the implementation and analysis of a baseline classifier. 
# New submissions for Fri, 21 Jul 23

## Keyword: events

### Ethosight: A Joint-Embedding Based System for Nuanced Perception Using Contextual Label Affinity Metric and Reasoning Based Iterative Learning

- **Authors:** Hugo Latapie, Kristinn R. Thorisson, Shan Yu, Vahagn Petrosyan, Patrick Hammer, Pei Wang, Brandon Kynoch, Hanning Chen, Tangrui Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2307.10577
- **Pdf link:** https://arxiv.org/pdf/2307.10577
- **Abstract** Traditional computer vision models often require extensive manual effort for data acquisition and validation, particularly when detecting subtle behavioral nuances or events. The difficulty in distinguishing routine behaviors from potential risks in real-world applications, like differentiating routine shopping from potential shoplifting, further complicates the process. We present Ethosight, a novel zero-shot computer vision algorithm. Ethosight eradicates the need for pre-existing symbolic knowledge, initiating from a clean slate based on user requirements and semantic knowledge of interest. Using localized label affinity calculations and a reasoning-guided iterative learning loop, Ethosight infers scene details and iteratively refines the label set. Reasoning mechanisms can be derived from large language models like GPT4, symbolic reasoners like OpenNARS, or hybrid systems. Ethosight further capitalizes on the capabilities of a pre-trained multi-modal model, ImageBind, generating accurate semantic knowledge of images within a few cycles. It successfully captures both explicit and nuanced elements efficiently. We also introduce the implementation of Korzybski's "time-binding" concept in machines, which allows for generational learning and knowledge sharing across deployments. Our evaluations demonstrate Ethosight's efficacy across 40 complex use cases. It has exhibited an exceptional ability to discern new areas of interest, consistently generating high-affinity scores within the top five labels from a set of a thousand. Tests conducted across diverse environments attest to Ethosight's robust performance. Detailed results and case studies within the main body of this paper and an appendix underscore a promising trajectory towards enhancing the adaptability and resilience of computer vision models in detecting and extracting subtle and nuanced behaviors.

### Event Blob Tracking: An Asynchronous Real-Time Algorithm

- **Authors:** Ziwei Wang, Timothy Molloy, Pieter van Goor, Robert Mahony
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.10593
- **Pdf link:** https://arxiv.org/pdf/2307.10593
- **Abstract** Event-based cameras have become increasingly popular for tracking fast-moving objects due to their high temporal resolution, low latency, and high dynamic range. In this paper, we propose a novel algorithm for tracking event blobs using raw events asynchronously in real time. We introduce the concept of an event blob as a spatio-temporal likelihood of event occurrence where the conditional spatial likelihood is blob-like. Many real-world objects generate event blob data, for example, flickering LEDs such as car headlights or any small foreground object moving against a static or slowly varying background. The proposed algorithm uses a nearest neighbour classifier with a dynamic threshold criterion for data association coupled with a Kalman filter to track the event blob state. Our algorithm achieves highly accurate tracking and event blob shape estimation even under challenging lighting conditions and high-speed motions. The microsecond time resolution achieved means that the filter output can be used to derive secondary information such as time-to-contact or range estimation, which will enable applications to real-world problems such as collision avoidance in autonomous driving.

## Keyword: event camera

There is no result

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWB

### HRFNet: High-Resolution Forgery Network for Localizing Satellite Image Manipulation

- **Authors:** Fahim Faisal Niloy, Kishor Kumar Bhaumik, Simon S. Woo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.11052
- **Pdf link:** https://arxiv.org/pdf/2307.11052
- **Abstract** Existing high-resolution satellite image forgery localization methods rely on patch-based or downsampling-based training. Both of these training methods have major drawbacks, such as inaccurate boundaries between pristine and forged regions, the generation of unwanted artifacts, etc. To tackle the aforementioned challenges, inspired by the high-resolution image segmentation literature, we propose a novel model called HRFNet to enable satellite image forgery localization effectively. Specifically, equipped with shallow and deep branches, our model can successfully integrate RGB and resampling features in both global and local manners to localize forgery more accurately. We perform various experiments to demonstrate that our method achieves the best performance, while the memory requirement and processing speed are not compromised compared to existing methods.
## Keyword: ISP

### Tapestry of Time and Actions: Modeling Human Activity Sequences using Temporal Point Process Flows

- **Authors:** Vinayak Gupta, Srikanta Bedathur
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2307.10305
- **Pdf link:** https://arxiv.org/pdf/2307.10305
- **Abstract** Human beings always engage in a vast range of activities and tasks that demonstrate their ability to adapt to different scenarios. Any human activity can be represented as a temporal sequence of actions performed to achieve a certain goal. Unlike the time series datasets extracted from electronics or machines, these action sequences are highly disparate in their nature -- the time to finish a sequence of actions can vary between different persons. Therefore, understanding the dynamics of these sequences is essential for many downstream tasks such as activity length prediction, goal prediction, next action recommendation, etc. Existing neural network-based approaches that learn a continuous-time activity sequence (or CTAS) are limited to the presence of only visual data or are designed specifically for a particular task, i.e., limited to next action or goal prediction. In this paper, we present ProActive, a neural marked temporal point process (MTPP) framework for modeling the continuous-time distribution of actions in an activity sequence while simultaneously addressing three high-impact problems -- next action prediction, sequence-goal prediction, and end-to-end sequence generation. Specifically, we utilize a self-attention module with temporal normalizing flows to model the influence and the inter-arrival times between actions in a sequence. In addition, we propose a novel addition over the ProActive model that can handle variations in the order of actions, i.e., different methods of achieving a given goal. We demonstrate that this variant can learn the order in which the person or actor prefers to do their actions. Extensive experiments on sequences derived from three activity recognition datasets show the significant accuracy boost of ProActive over the state-of-the-art in terms of action and goal prediction, and the first-ever application of end-to-end action sequence generation.

### Exploring Effective Priors and Efficient Models for Weakly-Supervised Change Detection

- **Authors:** Zhenghui Zhao, Lixiang Ru, Chen Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.10853
- **Pdf link:** https://arxiv.org/pdf/2307.10853
- **Abstract** Weakly-supervised change detection (WSCD) aims to detect pixel-level changes with only image-level annotations. Owing to its label efficiency, WSCD has recently been drawing increasing attention. However, current WSCD methods often encounter the challenge of change missing and fabricating, i.e., the inconsistency between image-level annotations and pixel-level predictions. Specifically, change missing refers to the situation where the WSCD model fails to predict any changed pixels, even though the image-level label indicates changed, and vice versa for change fabricating. To address this challenge, in this work, we leverage global-scale and local-scale priors in WSCD and propose two components: a Dilated Prior (DP) decoder and a Label Gated (LG) constraint. The DP decoder decodes samples with the changed image-level label, skips samples with the unchanged label, and replaces them with an all-unchanged pixel-level label. The LG constraint is derived from the correspondence between changed representations and image-level labels, penalizing the model when it mispredicts the change status. Additionally, we develop TransWCD, a simple yet powerful transformer-based model, showcasing the potential of weakly-supervised learning in change detection. By integrating the DP decoder and LG constraint into TransWCD, we form TransWCD-DL. Our proposed TransWCD and TransWCD-DL achieve significant +6.33% and +9.55% F1 score improvements over the state-of-the-art methods on the WHU-CD dataset, respectively. Some performance metrics even exceed several fully-supervised change detection (FSCD) competitors. Code will be available at https://github.com/zhenghuizhao/TransWCD.

### OCTraN: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios

- **Authors:** Aditya Nalgunda Ganesh, Dhruval Pobbathi Badrinath, Harshith Mohan Kumar, Priya SS, Surabhi Narayan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.10934
- **Pdf link:** https://arxiv.org/pdf/2307.10934
- **Abstract** Modern approaches for vision-centric environment perception for autonomous navigation make extensive use of self-supervised monocular depth estimation algorithms that output disparity maps. However, when this disparity map is projected onto 3D space, the errors in disparity are magnified, resulting in a depth estimation error that increases quadratically as the distance from the camera increases. Though Light Detection and Ranging (LiDAR) can solve this issue, it is expensive and not feasible for many applications. To address the challenge of accurate ranging with low-cost sensors, we propose OCTraN, a transformer architecture that uses iterative-attention to convert 2D image features into 3D occupancy features and makes use of convolution and transpose convolution to efficiently operate on spatial information. We also develop a self-supervised training pipeline to generalize the model to any scene by eliminating the need for LiDAR ground truth by substituting it with pseudo-ground truth labels obtained from boosted monocular depth estimation.
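A short numeric aside on the error behaviour the OCTraN abstract describes: under a standard pinhole stereo model, depth is Z = f * B / d, so a fixed disparity error propagates to a depth error that grows roughly as Z squared. The sketch below is an illustration only; the focal length and baseline values are assumed toy numbers, not parameters from the paper.

```python
# Why a fixed disparity error is magnified at range: first-order error
# propagation on Z = f * B / d gives |dZ| ~= Z**2 / (f * B) * |dd|.
# f (focal length, pixels) and B (baseline, metres) are illustrative.

def depth_from_disparity(d, f=720.0, B=0.54):
    """Pinhole stereo depth: Z = f * B / d (d in pixels, Z in metres)."""
    return f * B / d

def depth_error(Z, dd, f=720.0, B=0.54):
    """Approximate depth error for disparity error dd at depth Z."""
    return Z * Z / (f * B) * dd

dd = 0.5  # a half-pixel disparity error
for Z in (5.0, 10.0, 20.0, 40.0):
    print(f"Z = {Z:5.1f} m -> depth error ~ {depth_error(Z, dd):.2f} m")
```

Doubling the distance quadruples the depth error, which is the quadratic growth the abstract refers to.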
## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

There is no result

## Keyword: RAW

### Mitigating Viewer Impact from Disturbing Imagery using AI Filters: A User-Study

- **Authors:** Ioannis Sarridis, Jochen Spangenberg, Olga Papadopoulou, Symeon Papadopoulos
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.10334
- **Pdf link:** https://arxiv.org/pdf/2307.10334
- **Abstract** Exposure to disturbing imagery can significantly impact individuals, especially professionals who encounter such content as part of their work. This paper presents a user study, involving 107 participants, predominantly journalists and human rights investigators, that explores the capability of Artificial Intelligence (AI)-based image filters to potentially mitigate the emotional impact of viewing such disturbing content. We tested five different filter styles, both traditional (Blurring and Partial Blurring) and AI-based (Drawing, Colored Drawing, and Painting), and measured their effectiveness in terms of conveying image information while reducing emotional distress. Our findings suggest that the AI-based Drawing style filter demonstrates the best performance, offering a promising solution for reducing negative feelings (-30.38%) while preserving the interpretability of the image (97.19%). Despite the requirement for many professionals to eventually inspect the original images, participants suggested potential strategies for integrating AI filters into their workflow, such as using AI filters as an initial, preparatory step before viewing the original image. Overall, this paper contributes to the development of a more ethically considerate and effective visual environment for professionals routinely engaging with potentially disturbing imagery.

### A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset

- **Authors:** Zahra Gharaee, ZeMing Gong, Nicholas Pellegrino, Iuliia Zarubiieva, Joakim Bruslund Haurum, Scott C. Lowe, Jaclyn T.A. McKeown, Chris C.Y. Ho, Joschka McLeod, Yi-Yun C Wei, Jireh Agda, Sujeevan Ratnasingham, Dirk Steinke, Angel X. Chang, Graham W. Taylor, Paul Fieguth
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2307.10455
- **Pdf link:** https://arxiv.org/pdf/2307.10455
- **Abstract** In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is taxonomically classified by an expert, and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers, which are genetically-based proxies for species classification. This paper presents a curated million-image dataset, primarily to train computer-vision models capable of providing image-based taxonomic assessment; however, the dataset also presents compelling characteristics, the study of which would be of interest to the broader machine learning community. Driven by the biological nature inherent to the dataset, a characteristic long-tailed class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is a hierarchical classification scheme, presenting a highly fine-grained classification problem at lower levels. Beyond spurring interest in biodiversity research within the machine learning community, progress on creating an image-based taxonomic classifier will also further the ultimate goal of all BIOSCAN research: to lay the foundation for a comprehensive survey of global biodiversity. This paper introduces the dataset and explores the classification task through the implementation and analysis of a baseline classifier.
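The long-tailed class imbalance the BIOSCAN-1M abstract highlights is commonly countered by weighting the training loss by inverse class frequency. The sketch below is a generic illustration of that idea; the toy labels and the exact normalisation are assumptions, not the paper's training protocol.

```python
# Inverse-frequency class weighting for a long-tailed label distribution:
# rare classes receive proportionally larger loss weights.
from collections import Counter

def inverse_frequency_weights(labels):
    """Map each class to a weight w_c = n / (k * count_c), where n is the
    number of samples and k the number of classes (mean weight ~ 1 for a
    balanced set)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Toy long-tailed distribution: one head class, two increasingly rare tails.
labels = ["moth"] * 90 + ["beetle"] * 9 + ["wasp"] * 1
w = inverse_frequency_weights(labels)
print(w)
```

A training loop would then multiply each sample's loss by `w[label]`, so the single `wasp` example contributes as much total gradient signal as the head class.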
### Classification of Visualization Types and Perspectives in Patents

- **Authors:** Junaid Ahmed Ghauri, Eric Müller-Budack, Ralph Ewerth
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Digital Libraries (cs.DL); Information Retrieval (cs.IR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2307.10471
- **Pdf link:** https://arxiv.org/pdf/2307.10471
- **Abstract** Due to the swift growth of patent applications each year, information and multimedia retrieval approaches that facilitate patent exploration and retrieval are of utmost importance. Different types of visualizations (e.g., graphs, technical drawings) and perspectives (e.g., side view, perspective) are used to visualize details of innovations in patents. The classification of these images enables a more efficient search and allows for further analysis. So far, datasets for image type classification miss some important visualization types for patents. Furthermore, related work does not make use of recent deep learning approaches including transformers. In this paper, we adopt state-of-the-art deep learning methods for the classification of visualization types and perspectives in patent images. We extend the CLEF-IP dataset for image type classification in patents to ten classes and provide manual ground truth annotations. In addition, we derive a set of hierarchical classes from a dataset that provides weakly-labeled data for image perspectives. Experimental results have demonstrated the feasibility of the proposed approaches. Source code, models, and dataset will be made publicly available.

### Event Blob Tracking: An Asynchronous Real-Time Algorithm

- **Authors:** Ziwei Wang, Timothy Molloy, Pieter van Goor, Robert Mahony
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.10593
- **Pdf link:** https://arxiv.org/pdf/2307.10593
- **Abstract** Event-based cameras have become increasingly popular for tracking fast-moving objects due to their high temporal resolution, low latency, and high dynamic range. In this paper, we propose a novel algorithm for tracking event blobs using raw events asynchronously in real time. We introduce the concept of an event blob as a spatio-temporal likelihood of event occurrence where the conditional spatial likelihood is blob-like. Many real-world objects generate event blob data, for example, flickering LEDs such as car headlights or any small foreground object moving against a static or slowly varying background. The proposed algorithm uses a nearest neighbour classifier with a dynamic threshold criterion for data association coupled with a Kalman filter to track the event blob state. Our algorithm achieves highly accurate tracking and event blob shape estimation even under challenging lighting conditions and high-speed motions. The microsecond time resolution achieved means that the filter output can be used to derive secondary information such as time-to-contact or range estimation, which will enable applications to real-world problems such as collision avoidance in autonomous driving.
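The pipeline this abstract describes (gated nearest-neighbour data association feeding a Kalman filter) can be sketched in a deliberately simplified form. This is a hedged illustration, not the authors' algorithm: the state is position-only with one scalar random-walk Kalman filter per axis, the gate is a fixed rather than dynamic threshold, and all noise constants are assumed values.

```python
# Simplified event-blob tracking loop: events near the current estimate
# are associated to the blob and filtered; far events are rejected.
class ScalarKalman:
    """1-D Kalman filter with a random-walk motion model."""
    def __init__(self, x0, var0=1.0, q=0.05, r=0.5):
        self.x, self.p = x0, var0   # state estimate and its variance
        self.q, self.r = q, r       # process and measurement noise

    def update(self, z):
        p = self.p + self.q         # predict: variance grows over time
        k = p / (p + self.r)        # Kalman gain
        self.x += k * (z - self.x)  # correct toward the measurement
        self.p = (1.0 - k) * p
        return self.x

class BlobTracker:
    """Gate events by distance to the estimate, then filter per axis."""
    def __init__(self, x0, y0, gate=3.0):
        self.kx, self.ky = ScalarKalman(x0), ScalarKalman(y0)
        self.gate = gate

    def feed(self, ex, ey):
        # Data association: reject events outside the gate (outliers,
        # background activity), mimicking the nearest-neighbour step.
        if abs(ex - self.kx.x) > self.gate or abs(ey - self.ky.x) > self.gate:
            return None
        return self.kx.update(ex), self.ky.update(ey)

tracker = BlobTracker(10.0, 10.0)
for ex, ey in [(10.2, 9.9), (10.4, 10.1), (50.0, 50.0), (10.6, 10.2)]:
    pos = tracker.feed(ex, ey)  # the (50, 50) event falls outside the gate
```

The full method additionally estimates blob shape and adapts the association threshold; this sketch only shows the association-then-filter structure.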
### Meta-Transformer: A Unified Framework for Multimodal Learning

- **Authors:** Yiyuan Zhang, Kaixiong Gong, Kaipeng Zhang, Hongsheng Li, Yu Qiao, Wanli Ouyang, Xiangyu Yue
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2307.10802
- **Pdf link:** https://arxiv.org/pdf/2307.10802
- **Abstract** Multimodal learning aims to build models that can process and relate information from multiple modalities. Despite years of development in this field, it remains challenging to design a unified network for processing various modalities ($\textit{e.g.}$ natural language, 2D images, 3D point clouds, audio, video, time series, tabular data) due to the inherent gaps among them. In this work, we propose a framework, named Meta-Transformer, that leverages a $\textbf{frozen}$ encoder to perform multimodal perception without any paired multimodal training data. In Meta-Transformer, the raw input data from various modalities are mapped into a shared token space, allowing a subsequent encoder with frozen parameters to extract high-level semantic features of the input data. Meta-Transformer comprises three main components (a unified data tokenizer, a modality-shared encoder, and task-specific heads for downstream tasks) and is the first framework to perform unified learning across 12 modalities with unpaired data. Experiments on different benchmarks reveal that Meta-Transformer can handle a wide range of tasks including fundamental perception (text, image, point cloud, audio, video), practical application (X-Ray, infrared, hyperspectral, and IMU), and data mining (graph, tabular, and time-series). Meta-Transformer indicates a promising future for developing unified multimodal intelligence with transformers. Code will be available at https://github.com/invictus717/MetaTransformer

### Exploring Effective Priors and Efficient Models for Weakly-Supervised Change Detection

- **Authors:** Zhenghui Zhao, Lixiang Ru, Chen Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.10853
- **Pdf link:** https://arxiv.org/pdf/2307.10853
- **Abstract** Weakly-supervised change detection (WSCD) aims to detect pixel-level changes with only image-level annotations. Owing to its label efficiency, WSCD has recently been drawing increasing attention. However, current WSCD methods often encounter the challenge of change missing and fabricating, i.e., the inconsistency between image-level annotations and pixel-level predictions. Specifically, change missing refers to the situation where the WSCD model fails to predict any changed pixels, even though the image-level label indicates changed, and vice versa for change fabricating. To address this challenge, in this work, we leverage global-scale and local-scale priors in WSCD and propose two components: a Dilated Prior (DP) decoder and a Label Gated (LG) constraint. The DP decoder decodes samples with the changed image-level label, skips samples with the unchanged label, and replaces them with an all-unchanged pixel-level label. The LG constraint is derived from the correspondence between changed representations and image-level labels, penalizing the model when it mispredicts the change status. Additionally, we develop TransWCD, a simple yet powerful transformer-based model, showcasing the potential of weakly-supervised learning in change detection. By integrating the DP decoder and LG constraint into TransWCD, we form TransWCD-DL. Our proposed TransWCD and TransWCD-DL achieve significant +6.33% and +9.55% F1 score improvements over the state-of-the-art methods on the WHU-CD dataset, respectively. Some performance metrics even exceed several fully-supervised change detection (FSCD) competitors. Code will be available at https://github.com/zhenghuizhao/TransWCD.

### HRFNet: High-Resolution Forgery Network for Localizing Satellite Image Manipulation

- **Authors:** Fahim Faisal Niloy, Kishor Kumar Bhaumik, Simon S. Woo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.11052
- **Pdf link:** https://arxiv.org/pdf/2307.11052
- **Abstract** Existing high-resolution satellite image forgery localization methods rely on patch-based or downsampling-based training. Both of these training methods have major drawbacks, such as inaccurate boundaries between pristine and forged regions, the generation of unwanted artifacts, etc. To tackle the aforementioned challenges, inspired by the high-resolution image segmentation literature, we propose a novel model called HRFNet to enable satellite image forgery localization effectively. Specifically, equipped with shallow and deep branches, our model can successfully integrate RGB and resampling features in both global and local manners to localize forgery more accurately. We perform various experiments to demonstrate that our method achieves the best performance, while the memory requirement and processing speed are not compromised compared to existing methods.

## Keyword: raw image

There is no result
and a label gated lg constraint the dp decoder decodes samples with the changed image level label skips samples with the unchanged label and replaces them with an all unchanged pixel level label the lg constraint is derived from the correspondence between changed representations and image level labels penalizing the model when it mispredicts the change status additionally we develop transwcd a simple yet powerful transformer based model showcasing the potential of weakly supervised learning in change detection by integrating the dp decoder and lg constraint into transwcd we form transwcd dl our proposed transwcd and transwcd dl achieve significant and score improvements over the state of the art methods on the whu cd dataset respectively some performance metrics even exceed several fully supervised change detection fscd competitors code will be available at octran occupancy convolutional transformer network in unstructured traffic scenarios authors aditya nalgunda ganesh dhruval pobbathi badrinath harshith mohan kumar priya ss surabhi narayan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract modern approaches for vision centric environment perception for autonomous navigation make extensive use of self supervised monocular depth estimation algorithms that output disparity maps however when this disparity map is projected onto space the errors in disparity are magnified resulting in a depth estimation error that increases quadratically as the distance from the camera increases though light detection and ranging lidar can solve this issue it is expensive and not feasible for many applications to address the challenge of accurate ranging with low cost sensors we propose octran a transformer architecture that uses iterative attention to convert image features into occupancy features and makes use of convolution and transpose convolution to efficiently operate on spatial information we also develop a self supervised training pipeline to 
generalize the model to any scene by eliminating the need for lidar ground truth by substituting it with pseudo ground truth labels obtained from boosted monocular depth estimation keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw mitigating viewer impact from disturbing imagery using ai filters a user study authors ioannis sarridis jochen spangenberg olga papadopoulou symeon papadopoulos subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract exposure to disturbing imagery can significantly impact individuals especially professionals who encounter such content as part of their work this paper presents a user study involving participants predominantly journalists and human rights investigators that explores the capability of artificial intelligence ai based image filters to potentially mitigate the emotional impact of viewing such disturbing content we tested five different filter styles both traditional blurring and partial blurring and ai based drawing colored drawing and painting and measured their effectiveness in terms of conveying image information while reducing emotional distress our findings suggest that the ai based drawing style filter demonstrates the best performance offering a promising solution for reducing negative feelings while preserving the interpretability of the image despite the requirement for many professionals to eventually inspect the original images participants suggested potential strategies for integrating ai filters into their workflow such as using ai filters as an initial preparatory step before viewing the original image overall this paper contributes to the development of a more ethically considerate and effective visual environment for professionals routinely engaging with potentially disturbing imagery a step towards worldwide biodiversity assessment the bioscan insect dataset authors zahra gharaee 
zeming gong nicholas pellegrino iuliia zarubiieva joakim bruslund haurum scott c lowe jaclyn t a mckeown chris c y ho joschka mcleod yi yun c wei jireh agda sujeevan ratnasingham dirk steinke angel x chang graham w taylor paul fieguth subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract in an effort to catalog insect biodiversity we propose a new large dataset of hand labelled insect images the bioscan insect dataset each record is taxonomically classified by an expert and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers which are genetically based proxies for species classification this paper presents a curated million image dataset primarily to train computer vision models capable of providing image based taxonomic assessment however the dataset also presents compelling characteristics the study of which would be of interest to the broader machine learning community driven by the biological nature inherent to the dataset a characteristic long tailed class imbalance distribution is exhibited furthermore taxonomic labelling is a hierarchical classification scheme presenting a highly fine grained classification problem at lower levels beyond spurring interest in biodiversity research within the machine learning community progress on creating an image based taxonomic classifier will also further the ultimate goal of all bioscan research to lay the foundation for a comprehensive survey of global biodiversity this paper introduces the dataset and explores the classification task through the implementation and analysis of a baseline classifier classification of visualization types and perspectives in patents authors junaid ahmed ghauri eric müller budack ralph ewerth subjects computer vision and pattern recognition cs cv artificial intelligence cs ai digital libraries cs dl information retrieval cs ir machine learning 
cs lg arxiv link pdf link abstract due to the swift growth of patent applications each year information and multimedia retrieval approaches that facilitate patent exploration and retrieval are of utmost importance different types of visualizations e g graphs technical drawings and perspectives e g side view perspective are used to visualize details of innovations in patents the classification of these images enables a more efficient search and allows for further analysis so far datasets for image type classification miss some important visualization types for patents furthermore related work does not make use of recent deep learning approaches including transformers in this paper we adopt state of the art deep learning methods for the classification of visualization types and perspectives in patent images we extend the clef ip dataset for image type classification in patents to ten classes and provide manual ground truth annotations in addition we derive a set of hierarchical classes from a dataset that provides weakly labeled data for image perspectives experimental results have demonstrated the feasibility of the proposed approaches source code models and dataset will be made publicly available meta transformer a unified framework for multimodal learning authors yiyuan zhang kaixiong gong kaipeng zhang hongsheng li yu qiao wanli ouyang xiangyu yue subjects computer vision and pattern recognition cs cv artificial intelligence cs ai computation and language cs cl machine learning cs lg multimedia cs mm arxiv link pdf link abstract multimodal learning aims to build models that can process and relate information from multiple modalities despite years of development in this field it still remains challenging to design a unified network for processing various modalities textit e g natural language images point clouds audio video time series tabular data due to the inherent gaps among them in this work we propose a framework named meta transformer that leverages a textbf frozen encoder to perform multimodal perception without any paired multimodal training data in meta transformer the raw input data from various modalities are mapped into a shared token space allowing a subsequent encoder with frozen parameters to extract high level semantic features of the input data composed of three main components a unified data tokenizer a modality shared encoder and task specific heads for downstream tasks meta transformer is the first framework to perform unified learning across modalities with unpaired data experiments on different benchmarks reveal that meta transformer can
handle a wide range of tasks including fundamental perception text image point cloud audio video practical application x ray infrared hyperspectral and imu and data mining graph tabular and time series meta transformer indicates a promising future for developing unified multimodal intelligence with transformers code will be available at keyword raw image there is no result
1
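The event blob tracking abstract in the digest above describes a nearest neighbour classifier with a dynamic threshold for data association, coupled with a Kalman filter over the blob state. As a rough illustrative sketch only — the paper's actual state vector, gating rule, and noise models are not given here, so every constant below (DT, Q, R, GATE) is an assumption — a minimal 1-D constant-velocity version of that predict/associate/update loop might look like:

```python
# Hypothetical 1-D constant-velocity Kalman tracker with nearest-neighbour
# gating, loosely modelled on the event blob tracking abstract above. All
# constants are invented for illustration, not values from the paper.
DT, Q, R, GATE = 1e-3, 1e-4, 0.25, 5.0

def step(pos, vel, var, z):
    """One predict / associate / update cycle on a scalar blob coordinate."""
    # predict: constant-velocity motion model; variance grows by process noise
    pos += vel * DT
    var += Q
    # dynamic-threshold nearest-neighbour association: far-away events are
    # rejected rather than fused into the state
    if abs(z - pos) > GATE:
        return pos, vel, var
    # Kalman update on position, with a crude velocity nudge (assumption)
    k = var / (var + R)            # Kalman gain
    innovation = z - pos
    pos += k * innovation
    vel += k * innovation
    var *= 1.0 - k
    return pos, vel, var

pos, vel, var = 0.0, 0.0, 1.0
for z in (0.1, 0.2, 9.0):          # the last "event" lies outside the gate
    pos, vel, var = step(pos, vel, var, z)
```

The gating step is what lets the filter ignore background events far from the predicted centroid; the real algorithm operates asynchronously per event, which this batch-style loop only approximates.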
87,707
10,553,410,982
IssuesEvent
2019-10-03 17:09:25
npgsql/npgsql
https://api.github.com/repos/npgsql/npgsql
closed
Automate documentation publishing
documentation infrastructure
Doc publishing is currently done completely manually. Ideally, once any update is done to master on any of the repos (npgsql, EF Core provider, EF6 provider) the change would automatically go live. This would mean two things: 1. Continuous deployment on each of the 3 repos which detects changes on the master branch, and updates the appropriate git submodule on [the doc repo](https://github.com/npgsql/doc). Pushing this would trigger: 2. Continuous deployment on https://github.com/npgsql/doc which automatically generates the site on each commit and pushes it to github (deploying live) We should wait and see if we're transitioning from Appveyor/Travis to Azure Pipelines before attacking this.
1.0
Automate documentation publishing - Doc publishing is currently done completely manually. Ideally, once any update is done to master on any of the repos (npgsql, EF Core provider, EF6 provider) the change would automatically go live. This would mean two things: 1. Continuous deployment on each of the 3 repos which detects changes on the master branch, and updates the appropriate git submodule on [the doc repo](https://github.com/npgsql/doc). Pushing this would trigger: 2. Continuous deployment on https://github.com/npgsql/doc which automatically generates the site on each commit and pushes it to github (deploying live) We should wait and see if we're transitioning from Appveyor/Travis to Azure Pipelines before attacking this.
non_process
automate documentation publishing doc publishing is currently done completely manually ideally once any update is done to master on any of the repos npgsql ef core provider provider the change would automatically go live this would mean two things continuous deployment on each of the repos which detects changes on the master branch and updates the appropriate git submodule on pushing this would trigger continuous deployment on which automatically generates the site on each commit and pushes it to github deploying live we should wait and see if we re transitioning from appveyor travis to azure pipelines before attacking this
0
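The npgsql record above describes the desired automation: CI on each of the three source repos detects a change on master, bumps the matching git submodule in the doc repo, and that push in turn triggers the doc-site build. A minimal sketch of the submodule-bump step — the repo keys and submodule paths below are invented placeholders, not the doc repo's real layout — could be:

```python
# Hypothetical sketch of the submodule-bump step described in the issue above.
# The mapping from source repo to submodule path is an assumption.
SUBMODULES = {
    "npgsql": "Npgsql",           # placeholder path
    "efcore-provider": "EFCore",  # placeholder path
    "ef6-provider": "EF6",        # placeholder path
}

def bump_commands(source_repo, new_sha):
    """Return the git commands a CI job would run inside a doc-repo checkout."""
    path = SUBMODULES[source_repo]
    return [
        f"git -C {path} fetch origin",
        f"git -C {path} checkout {new_sha}",
        f"git add {path}",
        f"git commit -m 'Bump {path} to {new_sha[:7]}'",
        "git push origin master",  # this push kicks off the doc-site build
    ]

cmds = bump_commands("npgsql", "0123456789abcdef")
```

Checking out a specific commit inside the submodule and then `git add`-ing its path is the standard way to pin a submodule to a new revision; the final push is what chains into the second deployment stage the issue describes.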
227,514
17,388,318,003
IssuesEvent
2021-08-02 01:27:34
Requisitos-de-Software/2021.1-Ingresso.com
https://api.github.com/repos/Requisitos-de-Software/2021.1-Ingresso.com
closed
Criação da rich picture
Pré-Rastreabilidade documentation
### Descrição: <!-- Descrever de maneira clara e objetiva o propósito da issue. --> - Criação de uma rich picture inicial do projeto ### Tarefas: <!-- Checklist de ações que devem ser realizadas. --> - [x] Conter as versões iniciais da rich picture - [x] Breve explicação sobre esse artefato
1.0
Criação da rich picture - ### Descrição: <!-- Descrever de maneira clara e objetiva o propósito da issue. --> - Criação de uma rich picture inicial do projeto ### Tarefas: <!-- Checklist de ações que devem ser realizadas. --> - [x] Conter as versões iniciais da rich picture - [x] Breve explicação sobre esse artefato
non_process
criação da rich picture descrição criação de uma rich picture inicial do projeto tarefas conter as versões iniciais da rich picture breve explicação sobre esse artefato
0
21,816
3,923,430,249
IssuesEvent
2016-04-22 11:15:41
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
stress: failed test in cockroach/rpc/rpc.test: TestOffsetMeasurement
Robot test-failure
Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/7330af43411417dc60927b5a4cee0ee715b2cadc Stress build found a failed test: ``` === RUN TestOffsetMeasurement SIGABRT: abort PC=0x4612c9 m=0 goroutine 0 [idle]: runtime.epollwait(0x4, 0x7ffcc248cb10, 0xffffffff00000080, 0xda8c07, 0xffffffff, 0x198040, 0x33, 0x0, 0x0, 0x0, ...) /usr/local/go/src/runtime/sys_linux_amd64.s:440 +0x19 runtime.netpoll(0x1168901, 0x0) /usr/local/go/src/runtime/netpoll_epoll.go:67 +0x94 runtime.findrunnable(0xc820015500, 0x0) /usr/local/go/src/runtime/proc.go:1955 +0x62c runtime.schedule() /usr/local/go/src/runtime/proc.go:2072 +0x24f runtime.park_m(0xc820001c80) /usr/local/go/src/runtime/proc.go:2137 +0x18b runtime.mcall(0x7ffcc248d220) /usr/local/go/src/runtime/asm_amd64.s:233 +0x5b goroutine 1 [chan receive, 9 minutes]: testing.RunTests(0xcfd578, 0x1148e20, 0xb, 0xb, 0xb5be01) /usr/local/go/src/testing/testing.go:583 +0x8d2 testing.(*M).Run(0xc820042f08, 0xb5be20) /usr/local/go/src/testing/testing.go:515 +0x81 main.main() github.com/cockroachdb/cockroach/rpc/_test/_testmain.go:74 +0x117 goroutine 17 [syscall, 9 minutes, locked to thread]: runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 goroutine 34 [chan receive]: github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x11688a0) /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1011 +0x64 created by github.com/cockroachdb/cockroach/util/log.init.1 /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:598 +0x8a goroutine 71 [semacquire, 9 minutes]: sync.runtime_Semacquire(0xc820149ecc) /usr/local/go/src/runtime/sema.go:47 +0x26 sync.(*Mutex).Lock(0xc820149ec8) /usr/local/go/src/sync/mutex.go:83 +0x1c4 github.com/cockroachdb/cockroach/util/hlc.(*Clock).MaxOffset(0xc820149ec0, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:135 +0x3a github.com/cockroachdb/cockroach/rpc.TestOffsetMeasurement(0xc820412000) 
/go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:129 +0x719 testing.tRunner(0xc820412000, 0x1148e98) /usr/local/go/src/testing/testing.go:473 +0x98 created by testing.RunTests /usr/local/go/src/testing/testing.go:582 +0x892 goroutine 24 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f0758050c08, 0x72, 0xc82041f800) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc820220060, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc820220060, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc820220000, 0xc82041f800, 0x400, 0x400, 0x0, 0x7f075808c050, 0xc82000e068) /usr/local/go/src/net/fd_unix.go:250 +0x23a net.(*conn).Read(0xc820144058, 0xc82041f800, 0x400, 0x400, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:172 +0xe4 crypto/tls.(*block).readFromUntil(0xc820afe000, 0x7f0758050f38, 0xc820144058, 0x5, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:460 +0xcc crypto/tls.(*Conn).readRecord(0xc820232000, 0xcfdd17, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:562 +0x2d1 crypto/tls.(*Conn).Read(0xc820232000, 0xc820676000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:939 +0x167 bufio.(*Reader).fill(0xc8202400c0) /usr/local/go/src/bufio/bufio.go:97 +0x1e9 bufio.(*Reader).Read(0xc8202400c0, 0xc820230038, 0x9, 0x9, 0xc81fff3e7a, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:207 +0x260 io.ReadAtLeast(0x7f0758094d90, 0xc8202400c0, 0xc820230038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:297 +0xe6 io.ReadFull(0x7f0758094d90, 0xc8202400c0, 0xc820230038, 0x9, 0x9, 0xc8201f02a8, 0x0, 0x0) /usr/local/go/src/io/io.go:315 +0x62 golang.org/x/net/http2.readFrameHeader(0xc820230038, 0x9, 0x9, 0x7f0758094d90, 0xc8202400c0, 0x0, 0xc800000000, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:236 +0xa5 golang.org/x/net/http2.(*Framer).ReadFrame(0xc820230000, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:463 +0x106 
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f0180, 0x0, 0x0, 0x0, 0x0) /go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d google.golang.org/grpc/transport.(*http2Client).reader(0xc820440000) /go/src/google.golang.org/grpc/transport/http2_client.go:757 +0x109 created by google.golang.org/grpc/transport.newHTTP2Client /go/src/google.golang.org/grpc/transport/http2_client.go:200 +0x159a goroutine 8 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/rpc.NewContext.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:105 +0x57 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201d6000, 0xc820682040) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 9 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1() /go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201d6000, 0xc82012c4a0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 10 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f0758050cc8, 0x72, 0x0) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc8200633a0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc8200633a0, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).accept(0xc820063340, 0x0, 0x7f0758090a40, 0xc82025b140) /usr/local/go/src/net/fd_unix.go:426 +0x27c net.(*TCPListener).AcceptTCP(0xc82012a0b8, 0xc820045ea8, 0x0, 0x0) /usr/local/go/src/net/tcpsock_posix.go:254 +0x4d net.(*TCPListener).Accept(0xc82012a0b8, 0x0, 0x0, 0x0, 0x0) 
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d google.golang.org/grpc.(*Server).Serve(0xc82009da00, 0x7f075804fd20, 0xc82012a0b8, 0x0, 0x0) /go/src/google.golang.org/grpc/server.go:252 +0x1b5 github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2() /go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201d6000, 0xc82012c4c0) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 11 [chan receive, 9 minutes]: github.com/cockroachdb/cockroach/rpc.NewContext.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:105 +0x57 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201d6000, 0xc82012c640) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 12 [select, 9 minutes]: google.golang.org/grpc.(*Conn).transportMonitor(0xc820168540) /go/src/google.golang.org/grpc/clientconn.go:510 +0x1d3 google.golang.org/grpc.NewConn.func1(0xc820168540) /go/src/google.golang.org/grpc/clientconn.go:321 +0x1b5 created by google.golang.org/grpc.NewConn /go/src/google.golang.org/grpc/clientconn.go:322 +0x4dd goroutine 13 [semacquire, 9 minutes]: sync.runtime_Semacquire(0xc8201f05d4) /usr/local/go/src/runtime/sema.go:47 +0x26 sync.(*Mutex).Lock(0xc8201f05d0) /usr/local/go/src/sync/mutex.go:83 +0x1c4 github.com/cockroachdb/cockroach/rpc.(*AdvancingClock).UnixNano(0xc8201f05d0, 0x1) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:202 +0x30 github.com/cockroachdb/cockroach/rpc.(*AdvancingClock).UnixNano-fm(0x2) /go/src/github.com/cockroachdb/cockroach/rpc/context_test.go:105 +0x20 
github.com/cockroachdb/cockroach/util/hlc.(*Clock).getPhysicalClock(0xc820149ec0, 0xcfddf8) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:160 +0x25 github.com/cockroachdb/cockroach/util/hlc.(*Clock).PhysicalNow(0xc820149ec0, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:200 +0x7d github.com/cockroachdb/cockroach/util/hlc.(*Clock).PhysicalTime(0xc820149ec0, 0x0, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/util/hlc/hlc.go:206 +0x35 github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8200633b0, 0xc820258280, 0xc82013e040, 0xf, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/rpc/context.go:191 +0x19c github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1() /go/src/github.com/cockroachdb/cockroach/rpc/context.go:173 +0x66 github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201d6000, 0xc8201ea000) /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52 created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker /go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62 goroutine 14 [IO wait, 9 minutes]: net.runtime_pollWait(0x7f0758050b48, 0x72, 0xc820896800) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc820255870, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc820255870, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc820255810, 0xc820896800, 0x400, 0x400, 0x0, 0x7f075808c050, 0xc82000e068) /usr/local/go/src/net/fd_unix.go:250 +0x23a net.(*conn).Read(0xc82012a0c8, 0xc820896800, 0x400, 0x400, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:172 +0xe4 crypto/tls.(*block).readFromUntil(0xc8201ea120, 0x7f0758050f38, 0xc82012a0c8, 0x5, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:460 +0xcc crypto/tls.(*Conn).readRecord(0xc8201d0c00, 0xcfdd17, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:562 +0x2d1 crypto/tls.(*Conn).Read(0xc8201d0c00, 0xc820ac4000, 
0x8000, 0x8000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:939 +0x167 bufio.(*Reader).fill(0xc8201e82a0) /usr/local/go/src/bufio/bufio.go:97 +0x1e9 bufio.(*Reader).Read(0xc8201e82a0, 0xc82046a038, 0x9, 0x9, 0x9, 0x0, 0x0) /usr/local/go/src/bufio/bufio.go:207 +0x260 io.ReadAtLeast(0x7f0758094d90, 0xc8201e82a0, 0xc82046a038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0) /usr/local/go/src/io/io.go:297 +0xe6 io.ReadFull(0x7f0758094d90, 0xc8201e82a0, 0xc82046a038, 0x9, 0x9, 0xc2faf4d1f30f4901, 0x0, 0x0) /usr/local/go/src/io/io.go:315 +0x62 golang.org/x/net/http2.readFrameHeader(0xc82046a038, 0x9, 0x9, 0x7f0758094d90, 0xc8201e82a0, 0x58000000, 0xc800000000, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:236 +0xa5 golang.org/x/net/http2.(*Framer).ReadFrame(0xc82046a000, 0x0, 0x0, 0x0, 0x0) /go/src/golang.org/x/net/http2/frame.go:463 +0x106 google.golang.org/grpc/transport.(*framer).readFrame(0xc820182d50, 0x0, 0x0, 0x0, 0x0) /go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc820412090, 0xc820182de0) /go/src/google.golang.org/grpc/transport/http2_server.go:247 +0x646 google.golang.org/grpc.(*Server).serveStreams(0xc82009da00, 0x7f0758094de0, 0xc820412090) /go/src/google.golang.org/grpc/server.go:325 +0x159 google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc82009da00, 0x7f0758094cb8, 0xc8201d0c00, 0x7f0758094d18, 0xc820ac02c0) /go/src/google.golang.org/grpc/server.go:312 +0x49d google.golang.org/grpc.(*Server).handleRawConn(0xc82009da00, 0x7f0758050ed8, 0xc82012a0c8) /go/src/google.golang.org/grpc/server.go:289 +0x4ee created by google.golang.org/grpc.(*Server).Serve /go/src/google.golang.org/grpc/server.go:261 +0x372 goroutine 23 [select, 9 minutes]: google.golang.org/grpc/transport.(*http2Client).controller(0xc820440000) /go/src/google.golang.org/grpc/transport/http2_client.go:835 +0x5da created by google.golang.org/grpc/transport.newHTTP2Client 
/go/src/google.golang.org/grpc/transport/http2_client.go:194 +0x153b goroutine 58 [select, 9 minutes]: google.golang.org/grpc/transport.(*http2Server).controller(0xc820412090) /go/src/google.golang.org/grpc/transport/http2_server.go:620 +0x5da created by google.golang.org/grpc/transport.newHTTP2Server /go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x853 rax 0xfffffffffffffffc rbx 0xffffffff rcx 0x4612c9 rdx 0x80 rdi 0x4 rsi 0x7ffcc248cb10 rbp 0x11692e0 rsp 0x7ffcc248cad0 r8 0x11692e0 r9 0xc820428009 r10 0xffffffff r11 0x246 r12 0x853ce64 r13 0xa r14 0x7 r15 0x8 rip 0x4612c9 rflags 0x246 cs 0x33 fs 0x0 gs 0x0 ERROR: exit status 2 ```

Run Details:

```
103 runs so far, 0 failures, over 5s
215 runs so far, 0 failures, over 10s
325 runs so far, 0 failures, over 15s
435 runs so far, 0 failures, over 20s
543 runs so far, 0 failures, over 25s
660 runs so far, 0 failures, over 30s
770 runs so far, 0 failures, over 35s
880 runs so far, 0 failures, over 40s
993 runs so far, 0 failures, over 45s
1103 runs so far, 0 failures, over 50s
1209 runs so far, 0 failures, over 55s
1321 runs so far, 0 failures, over 1m0s
1432 runs so far, 0 failures, over 1m5s
1540 runs so far, 0 failures, over 1m10s
1655 runs so far, 0 failures, over 1m15s
1769 runs so far, 0 failures, over 1m20s
1880 runs so far, 0 failures, over 1m25s
1988 runs so far, 0 failures, over 1m30s
2098 runs so far, 0 failures, over 1m35s
2208 runs so far, 0 failures, over 1m40s
2314 runs so far, 0 failures, over 1m45s
2426 runs so far, 0 failures, over 1m50s
2532 runs so far, 0 failures, over 1m55s
2641 runs so far, 0 failures, over 2m0s
2753 runs so far, 0 failures, over 2m5s
2861 runs so far, 0 failures, over 2m10s
2970 runs so far, 0 failures, over 2m15s
3082 runs so far, 0 failures, over 2m20s
3192 runs so far, 0 failures, over 2m25s
3302 runs so far, 0 failures, over 2m30s
3411 runs so far, 0 failures, over 2m35s
3523 runs so far, 0 failures, over 2m40s
3633 runs so far, 0 failures, over 2m45s
3739 runs so far, 0 failures, over 2m50s
3849 runs so far, 0 failures, over 2m55s
3954 runs so far, 0 failures, over 3m0s
4060 runs so far, 0 failures, over 3m5s
4171 runs so far, 0 failures, over 3m10s
4278 runs so far, 0 failures, over 3m15s
4385 runs so far, 0 failures, over 3m20s
4496 runs so far, 0 failures, over 3m25s
4602 runs so far, 0 failures, over 3m30s
4712 runs so far, 0 failures, over 3m35s
4816 runs so far, 0 failures, over 3m40s
4927 runs so far, 0 failures, over 3m45s
5036 runs so far, 0 failures, over 3m50s
5140 runs so far, 0 failures, over 3m55s
5247 runs so far, 0 failures, over 4m0s
5352 runs so far, 0 failures, over 4m5s
5456 runs so far, 0 failures, over 4m10s
5562 runs so far, 0 failures, over 4m15s
5670 runs so far, 0 failures, over 4m20s
5779 runs so far, 0 failures, over 4m25s
5887 runs so far, 0 failures, over 4m30s
5991 runs so far, 0 failures, over 4m35s
6098 runs so far, 0 failures, over 4m40s
6199 runs so far, 0 failures, over 4m45s
6306 runs so far, 0 failures, over 4m50s
6413 runs so far, 0 failures, over 4m55s
6521 runs so far, 0 failures, over 5m0s
6625 runs so far, 0 failures, over 5m5s
6729 runs so far, 0 failures, over 5m10s
6831 runs so far, 0 failures, over 5m15s
6937 runs so far, 0 failures, over 5m20s
7043 runs so far, 0 failures, over 5m25s
7148 runs so far, 0 failures, over 5m30s
7252 runs so far, 0 failures, over 5m35s
7356 runs so far, 0 failures, over 5m40s
7460 runs so far, 0 failures, over 5m45s
7569 runs so far, 0 failures, over 5m50s
7668 runs so far, 0 failures, over 5m55s
7773 runs so far, 0 failures, over 6m0s
7881 runs so far, 0 failures, over 6m5s
7982 runs so far, 0 failures, over 6m10s
8083 runs so far, 0 failures, over 6m15s
8182 runs so far, 0 failures, over 6m20s
8282 runs so far, 0 failures, over 6m25s
8384 runs so far, 0 failures, over 6m30s
8482 runs so far, 0 failures, over 6m35s
8584 runs so far, 0 failures, over 6m40s
8683 runs so far, 0 failures, over 6m45s
8786 runs so far, 0 failures, over 6m50s
8886 runs so far, 0 failures, over 6m55s
8990 runs so far, 0 failures, over 7m0s
9094 runs so far, 0 failures, over 7m5s
9194 runs so far, 0 failures, over 7m10s
9297 runs so far, 0 failures, over 7m15s
9396 runs so far, 0 failures, over 7m20s
9499 runs so far, 0 failures, over 7m25s
9601 runs so far, 0 failures, over 7m30s
9705 runs so far, 0 failures, over 7m35s
9806 runs so far, 0 failures, over 7m40s
9905 runs so far, 0 failures, over 7m45s
10005 runs so far, 0 failures, over 7m50s
10105 runs so far, 0 failures, over 7m55s
10207 runs so far, 0 failures, over 8m0s
10310 runs so far, 0 failures, over 8m5s
10411 runs so far, 0 failures, over 8m10s
10510 runs so far, 0 failures, over 8m15s
10608 runs so far, 0 failures, over 8m20s
10708 runs so far, 0 failures, over 8m25s
10810 runs so far, 0 failures, over 8m30s
10910 runs so far, 0 failures, over 8m35s
11007 runs so far, 0 failures, over 8m40s
11107 runs so far, 0 failures, over 8m45s
11205 runs so far, 0 failures, over 8m50s
11302 runs so far, 0 failures, over 8m55s
11404 runs so far, 0 failures, over 9m0s
11500 runs so far, 0 failures, over 9m5s
11601 runs so far, 0 failures, over 9m10s
11695 runs so far, 0 failures, over 9m15s
11797 runs so far, 0 failures, over 9m20s
11896 runs so far, 0 failures, over 9m25s
11997 runs so far, 0 failures, over 9m30s
12098 runs so far, 0 failures, over 9m35s
12199 runs so far, 0 failures, over 9m40s
12298 runs so far, 0 failures, over 9m45s
12396 runs so far, 0 failures, over 9m50s
12495 runs so far, 0 failures, over 9m55s
12597 runs so far, 0 failures, over 10m0s
12696 runs so far, 0 failures, over 10m5s
12792 runs so far, 0 failures, over 10m10s
12890 runs so far, 0 failures, over 10m15s
12991 runs so far, 0 failures, over 10m20s
13086 runs so far, 0 failures, over 10m25s
13185 runs so far, 0 failures, over 10m30s
13289 runs so far, 0 failures, over 10m35s
13386 runs so far, 0 failures, over 10m40s
13481 runs so far, 0 failures, over 10m45s
13575 runs so far, 0 failures, over 10m50s
13674 runs so far, 0 failures, over 10m55s
13765 runs so far, 0 failures, over 11m0s
13863 runs so far, 0 failures, over 11m5s
13956 runs so far, 0 failures, over 11m10s
14054 runs so far, 0 failures, over 11m15s
14156 runs so far, 0 failures, over 11m20s
14168 runs completed, 1 failures, over 11m21s
FAIL
```

Please assign, take a look and update the issue accordingly.
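The dump above shows two goroutines parked in `semacquire` for nine minutes: the test goroutine blocked in `hlc.(*Clock).MaxOffset`, and the heartbeat goroutine blocked in `(*AdvancingClock).UnixNano` while inside `hlc.(*Clock).PhysicalTime`. That is consistent with two goroutines each holding one mutex while waiting for the other (an ABBA lock-ordering wedge), though the trace alone does not prove it. A minimal, self-contained Go sketch of that pattern — illustrative names only, not the actual CockroachDB code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// detect starts two goroutines that acquire two mutexes in opposite
// order, then reports whether either of them managed to finish.
func detect() string {
	// Stand-ins for the hlc.Clock mutex and the test clock mutex in the trace.
	var clockMu, testClockMu sync.Mutex
	var both sync.WaitGroup
	both.Add(2)
	done := make(chan string, 2)

	lockBoth := func(first, second *sync.Mutex, name string) {
		first.Lock()
		defer first.Unlock()
		both.Done()
		both.Wait()    // guarantee the other goroutine holds its first lock too
		second.Lock()  // blocks forever: the other goroutine owns it
		second.Unlock()
		done <- name + " finished"
	}

	// "heartbeat" locks clockMu then wants testClockMu; "test" does the reverse.
	go lockBoth(&clockMu, &testClockMu, "heartbeat")
	go lockBoth(&testClockMu, &clockMu, "test")

	select {
	case msg := <-done:
		return msg
	case <-time.After(500 * time.Millisecond):
		return "deadlock detected" // neither goroutine can make progress
	}
}

func main() {
	fmt.Println(detect())
}
```

If the wedge in `TestOffsetMeasurement` follows this shape, the usual fix is to drop the clock's mutex before calling back into the test clock, or to pick a single lock order — but that is an inference from the trace, not a confirmed diagnosis.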
1.0
non_process
stress failed test in cockroach rpc rpc test testoffsetmeasurement binary cockroach static tests tar gz sha stress build found a failed test run testoffsetmeasurement sigabrt abort pc m goroutine runtime epollwait usr local go src runtime sys linux s runtime netpoll usr local go src runtime netpoll epoll go runtime findrunnable usr local go src runtime proc go runtime schedule usr local go src runtime proc go runtime park m usr local go src runtime proc go runtime mcall usr local go src runtime asm s goroutine testing runtests usr local go src testing testing go testing m run usr local go src testing testing go main main github com cockroachdb cockroach rpc test testmain go goroutine runtime goexit usr local go src runtime asm s goroutine github com cockroachdb cockroach util log loggingt flushdaemon go src github com cockroachdb cockroach util log clog go created by github com cockroachdb cockroach util log init go src github com cockroachdb cockroach util log clog go goroutine sync runtime semacquire usr local go src runtime sema go sync mutex lock usr local go src sync mutex go github com cockroachdb cockroach util hlc clock maxoffset go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach rpc testoffsetmeasurement go src github com cockroachdb cockroach rpc context test go testing trunner usr local go src testing testing go created by testing runtests usr local go src testing testing go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime go net polldesc waitread usr local go src net fd poll runtime go net netfd read usr local go src net fd unix go net conn read usr local go src net net go crypto tls block readfromuntil usr local go src crypto tls conn go crypto tls conn readrecord usr local go src crypto tls conn go crypto tls conn read usr local go src crypto tls conn go bufio reader fill usr local go src bufio bufio go bufio reader read usr local go src 
bufio bufio go io readatleast usr local go src io io go io readfull usr local go src io io go golang org x net readframeheader go src golang org x net frame go golang org x net framer readframe go src golang org x net frame go google golang org grpc transport framer readframe go src google golang org grpc transport http util go google golang org grpc transport reader go src google golang org grpc transport client go created by google golang org grpc transport go src google golang org grpc transport client go goroutine github com cockroachdb cockroach rpc newcontext go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine github com cockroachdb cockroach util listenandservegrpc go src github com cockroachdb cockroach util net go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime go net polldesc waitread usr local go src net fd poll runtime go net netfd accept usr local go src net fd unix go net tcplistener accepttcp usr local go src net tcpsock posix go net tcplistener accept usr local go src net tcpsock posix go google golang org grpc server serve go src google golang org grpc server go github com cockroachdb cockroach util listenandservegrpc go src github com cockroachdb cockroach util net go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go 
src github com cockroachdb cockroach util stop stopper go goroutine github com cockroachdb cockroach rpc newcontext go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine google golang org grpc conn transportmonitor go src google golang org grpc clientconn go google golang org grpc newconn go src google golang org grpc clientconn go created by google golang org grpc newconn go src google golang org grpc clientconn go goroutine sync runtime semacquire usr local go src runtime sema go sync mutex lock usr local go src sync mutex go github com cockroachdb cockroach rpc advancingclock unixnano go src github com cockroachdb cockroach rpc context test go github com cockroachdb cockroach rpc advancingclock unixnano fm go src github com cockroachdb cockroach rpc context test go github com cockroachdb cockroach util hlc clock getphysicalclock go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach util hlc clock physicalnow go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach util hlc clock physicaltime go src github com cockroachdb cockroach util hlc hlc go github com cockroachdb cockroach rpc context runheartbeat go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach rpc context grpcdial go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime 
go net polldesc waitread usr local go src net fd poll runtime go net netfd read usr local go src net fd unix go net conn read usr local go src net net go crypto tls block readfromuntil usr local go src crypto tls conn go crypto tls conn readrecord usr local go src crypto tls conn go crypto tls conn read usr local go src crypto tls conn go bufio reader fill usr local go src bufio bufio go bufio reader read usr local go src bufio bufio go io readatleast usr local go src io io go io readfull usr local go src io io go golang org x net readframeheader go src golang org x net frame go golang org x net framer readframe go src golang org x net frame go google golang org grpc transport framer readframe go src google golang org grpc transport http util go google golang org grpc transport handlestreams go src google golang org grpc transport server go google golang org grpc server servestreams go src google golang org grpc server go google golang org grpc server go src google golang org grpc server go google golang org grpc server handlerawconn go src google golang org grpc server go created by google golang org grpc server serve go src google golang org grpc server go goroutine google golang org grpc transport controller go src google golang org grpc transport client go created by google golang org grpc transport go src google golang org grpc transport client go goroutine google golang org grpc transport controller go src google golang org grpc transport server go created by google golang org grpc transport go src google golang org grpc transport server go rax rbx rcx rdx rdi rsi rbp rsp rip rflags cs fs gs error exit status run details runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs 
so far failures over runs completed failures over fail please assign take a look and update the issue accordingly
0
142,496
11,474,230,549
IssuesEvent
2020-02-10 03:21:12
timeforcamp/time-for-camp
https://api.github.com/repos/timeforcamp/time-for-camp
closed
STORE: USER VIEW - Items with options enabled all show OUT OF STOCK
bug ready to test
All items that are being tracked by Options show as out of stock to the end user even when there is inventory for those items. ![image](https://user-images.githubusercontent.com/13508954/74109962-b3d70680-4b4d-11ea-9179-b9840362c8d2.png) ![image](https://user-images.githubusercontent.com/13508954/74109964-c0f3f580-4b4d-11ea-9321-e384e22a2ab6.png)
1.0
STORE: USER VIEW - Items with options enabled all show OUT OF STOCK - All items that are being tracked by Options show as out of stock to the end user even when there is inventory for those items. ![image](https://user-images.githubusercontent.com/13508954/74109962-b3d70680-4b4d-11ea-9179-b9840362c8d2.png) ![image](https://user-images.githubusercontent.com/13508954/74109964-c0f3f580-4b4d-11ea-9321-e384e22a2ab6.png)
non_process
store user view items with options enabled all show out of stock all items that are being tracked by options show as out of stock to the end user even when there is inventory for those items
0
14,386
10,152,160,572
IssuesEvent
2019-08-05 22:32:30
cityofaustin/atd-knack-data-tracker
https://api.github.com/repos/cityofaustin/atd-knack-data-tracker
closed
Add Detection IP URL for ITS Cameras on Detection able
Service: Apps Type: Enhancement Workgroup: AMD
Kenny Moses TMC/AMD Would like Detector IP to be a clickable URL. ![image](https://user-images.githubusercontent.com/21112975/62486680-8ae04e00-b785-11e9-923c-69b278233b11.png) Has given me the [URL link](/svswm.html?port=1200) to create a new field and path. I will replace what is showing on the table so users can click on it. Filter for ITS cameras ![image](https://user-images.githubusercontent.com/21112975/62493352-7e182600-b796-11e9-8780-d63f8839a424.png) 1. `DETECTOR TYPE` is not "CBD/NO DETECTORS" 2. `DETECTOR TYPE` is "VIDEO" 3. `DETECTOR TYPE` is not "GRIDSMART" 4. `signal` is not blank 5. `DETECTOR IP` is not blank 6. `DETECTOR IP` is not 9999 7. `DETECTOR IP` is not 999 8. `DETECTOR IP` is not NA 9. `DETECTOR ID` is not NA
1.0
Add Detection IP URL for ITS Cameras on Detection able - Kenny Moses TMC/AMD Would like Detector IP to be a clickable URL. ![image](https://user-images.githubusercontent.com/21112975/62486680-8ae04e00-b785-11e9-923c-69b278233b11.png) Has given me the [URL link](/svswm.html?port=1200) to create a new field and path. I will replace what is showing on the table so users can click on it. Filter for ITS cameras ![image](https://user-images.githubusercontent.com/21112975/62493352-7e182600-b796-11e9-8780-d63f8839a424.png) 1. `DETECTOR TYPE` is not "CBD/NO DETECTORS" 2. `DETECTOR TYPE` is "VIDEO" 3. `DETECTOR TYPE` is not "GRIDSMART" 4. `signal` is not blank 5. `DETECTOR IP` is not blank 6. `DETECTOR IP` is not 9999 7. `DETECTOR IP` is not 999 8. `DETECTOR IP` is not NA 9. `DETECTOR ID` is not NA
non_process
add detection ip url for its cameras on detection able kenny moses tmc amd would like detector ip to be a clickable url has given me the svswm html port to create a new field and path i will replace what is showing on the table so users can click on it filter for its cameras detector type is not cbd no detectors detector type is video detector type is not gridsmart signal is not blank detector ip is not blank detector ip is not detector ip is not detector ip is not na detector id is not na
0
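The nine filter conditions in the Knack record above can be expressed as a single predicate. The sketch below is illustrative only: the key names mirror the labels quoted in the issue, not the real Knack field IDs, and a record is stood in for by a plain dict.

```python
def is_its_camera(record):
    """Apply the nine ITS-camera filter conditions listed in the issue.

    `record` is a plain dict standing in for a Knack table row; the key
    names here are illustrative, not actual Knack field identifiers.
    """
    detector_type = record.get("DETECTOR TYPE")
    detector_ip = record.get("DETECTOR IP")
    detector_id = record.get("DETECTOR ID")
    signal = record.get("signal")

    return (
        detector_type == "VIDEO"                          # condition 2
        and detector_type not in ("CBD/NO DETECTORS",     # conditions 1, 3
                                  "GRIDSMART")
        and bool(signal)                                  # 4: signal not blank
        and bool(detector_ip)                             # 5: IP not blank
        and detector_ip not in ("9999", "999", "NA")      # 6-8: sentinel IPs
        and detector_id != "NA"                           # 9: ID not NA
    )
```

Conditions 1 and 3 are redundant once condition 2 requires `"VIDEO"`, but they are kept to mirror the filter as written.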
14,813
18,144,747,613
IssuesEvent
2021-09-25 08:06:05
lifthrasiir/roadroller
https://api.github.com/repos/lifthrasiir/roadroller
opened
Source map support
enhancement js preprocess
It turns out that we can put the source map comment into `eval`ed code and devtools do recognize it. This should not present in the final build, so I think the resulting code should be made obvious that the source map is in use, like this: ```javascript eval(Function(/* compressed data here */)(...)+'\n//# sourceMappingUrl=foo.js.map') ``` We would have to preserve the existing `sourceMappingUrl` comment and update it (and also `sourceURL`) if the JS preprocessing is in use.
1.0
Source map support - It turns out that we can put the source map comment into `eval`ed code and devtools do recognize it. This should not present in the final build, so I think the resulting code should be made obvious that the source map is in use, like this: ```javascript eval(Function(/* compressed data here */)(...)+'\n//# sourceMappingUrl=foo.js.map') ``` We would have to preserve the existing `sourceMappingUrl` comment and update it (and also `sourceURL`) if the JS preprocessing is in use.
process
source map support it turns out that we can put the source map comment into eval ed code and devtools do recognize it this should not present in the final build so i think the resulting code should be made obvious that the source map is in use like this javascript eval function compressed data here n sourcemappingurl foo js map we would have to preserve the existing sourcemappingurl comment and update it and also sourceurl if the js preprocessing is in use
1
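The roadroller record above relies on appending a source-map comment to the string handed to `eval`. A minimal sketch of that string manipulation, in Python rather than roadroller's actual JavaScript: note that devtools match the spelling `sourceMappingURL`, and the function and map URL here are illustrative.

```python
def attach_source_map(compressed_js, map_url):
    """Append a source-map comment to code destined for eval().

    Mirrors the pattern from the issue: the comment travels inside the
    eval'd string so devtools can associate the unpacked code with its
    map. Devtools look for the exact spelling `sourceMappingURL`.
    """
    return compressed_js + "\n//# sourceMappingURL=" + map_url
```

A build step would call this with the unpacked payload and the preserved map URL, e.g. `attach_source_map(payload, "foo.js.map")`.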
16,683
21,785,611,674
IssuesEvent
2022-05-14 04:35:43
beyondhb1079/s4us
https://api.github.com/repos/beyondhb1079/s4us
closed
Upgrade firebase to v9
process
v9 is more modular and the packages are overall smaller. Also the API is more functional. This'll help speed up our page loads. https://firebase.google.com/docs/web/modular-upgrade
1.0
Upgrade firebase to v9 - v9 is more modular and the packages are overall smaller. Also the API is more functional. This'll help speed up our page loads. https://firebase.google.com/docs/web/modular-upgrade
process
upgrade firebase to is more modular and the packages are overall smaller also the api is more functional this ll help speed up our page loads
1
15,758
19,912,411,298
IssuesEvent
2022-01-25 18:32:54
google/shaka-player
https://api.github.com/repos/google/shaka-player
opened
demo: Deployment to appspot is missing fastestsmallesttextencoderdecoder/EncoderDecoderTogether.min.js
component: demo page type: process
Reported via Slack `fastestsmallesttextencoderdecoder/EncoderDecoderTogether.min.js` should have been deployed to appspot for all v3.1, v3.2, and v3.3 releases, but appears to be missing in all of them.
1.0
demo: Deployment to appspot is missing fastestsmallesttextencoderdecoder/EncoderDecoderTogether.min.js - Reported via Slack `fastestsmallesttextencoderdecoder/EncoderDecoderTogether.min.js` should have been deployed to appspot for all v3.1, v3.2, and v3.3 releases, but appears to be missing in all of them.
process
demo deployment to appspot is missing fastestsmallesttextencoderdecoder encoderdecodertogether min js reported via slack fastestsmallesttextencoderdecoder encoderdecodertogether min js should have been deployed to appspot for all and releases but appears to be missing in all of them
1
20,696
27,369,419,592
IssuesEvent
2023-02-27 21:58:54
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
@parcel/transformer-sass failed to compile mdc-checkbox
CSS Preprocessing
<!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🐛 bug report I want to use Parcel to build a custom version of the MDC Checkbox components using SASS. `parcel build index.html` fails to compile sass. ## 🎛 Configuration (.babelrc, package.json, cli command) `babel.rc` - none `package.json` ```json { "name": "parcel-mdc-checkbox", "scripts": { "build": "parcel build index.html", "sass": "sass style.scss --load-path=node_modules" }, "devDependencies": { "@parcel/transformer-sass": "^2.8.3", "parcel": "^2.8.3" }, "dependencies": { "@material/checkbox": "^15.0.0-canary.684e33d25.0", "@material/form-field": "^15.0.0-canary.684e33d25.0" } } ``` ## 🤔 Expected Behavior Using `npm run build` I expect to compile the `style.scss` to `style.css` using parcel transformer ## 😯 Current Behavior `npm run build` or `parcel build index.html` fails `npm run sass` or `sass style.scss --load-path=node_module` works ``` > parcel build index.html 🚨 Build failed. @parcel/transformer-sass: expected "{". ╷ 23 │ export * from './adapter'; │ ^ ╵ node_modules\@material\checkbox\index.js 23:26 @use style.scss 1:1 root stylesheet Error: expected "{". ╷ 23 │ export * from './adapter'; │ ^ ╵ node_modules\@material\checkbox\index.js 23:26 @use style.scss 1:1 root stylesheet ``` ## 💻 Code Sample ```sh npm ci npm run build npm run sass ``` [parcel-mdc-checkbox.zip](https://github.com/parcel-bundler/parcel/files/10705809/parcel-mdc-checkbox.zip) ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 2.8.3| | Node | 18.13.0 | | npm | 8.19.3| | OS|Windows 11 or Debian 11 in WSL |
1.0
@parcel/transformer-sass failed to compile mdc-checkbox - <!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🐛 bug report I want to use Parcel to build a custom version of the MDC Checkbox components using SASS. `parcel build index.html` fails to compile sass. ## 🎛 Configuration (.babelrc, package.json, cli command) `babel.rc` - none `package.json` ```json { "name": "parcel-mdc-checkbox", "scripts": { "build": "parcel build index.html", "sass": "sass style.scss --load-path=node_modules" }, "devDependencies": { "@parcel/transformer-sass": "^2.8.3", "parcel": "^2.8.3" }, "dependencies": { "@material/checkbox": "^15.0.0-canary.684e33d25.0", "@material/form-field": "^15.0.0-canary.684e33d25.0" } } ``` ## 🤔 Expected Behavior Using `npm run build` I expect to compile the `style.scss` to `style.css` using parcel transformer ## 😯 Current Behavior `npm run build` or `parcel build index.html` fails `npm run sass` or `sass style.scss --load-path=node_module` works ``` > parcel build index.html 🚨 Build failed. @parcel/transformer-sass: expected "{". ╷ 23 │ export * from './adapter'; │ ^ ╵ node_modules\@material\checkbox\index.js 23:26 @use style.scss 1:1 root stylesheet Error: expected "{". ╷ 23 │ export * from './adapter'; │ ^ ╵ node_modules\@material\checkbox\index.js 23:26 @use style.scss 1:1 root stylesheet ``` ## 💻 Code Sample ```sh npm ci npm run build npm run sass ``` [parcel-mdc-checkbox.zip](https://github.com/parcel-bundler/parcel/files/10705809/parcel-mdc-checkbox.zip) ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 2.8.3| | Node | 18.13.0 | | npm | 8.19.3| | OS|Windows 11 or Debian 11 in WSL |
process
parcel transformer sass failed to compile mdc checkbox thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before 🐛 bug report i want to use parcel to build a custom version of the mdc checkbox components using sass parcel build index html fails to compile sass 🎛 configuration babelrc package json cli command babel rc none package json json name parcel mdc checkbox scripts build parcel build index html sass sass style scss load path node modules devdependencies parcel transformer sass parcel dependencies material checkbox canary material form field canary 🤔 expected behavior using npm run build i expect to compile the style scss to style css using parcel transformer 😯 current behavior npm run build or parcel build index html fails npm run sass or sass style scss load path node module works parcel build index html 🚨 build failed parcel transformer sass expected ╷ │ export from adapter │ ╵ node modules material checkbox index js use style scss root stylesheet error expected ╷ │ export from adapter │ ╵ node modules material checkbox index js use style scss root stylesheet 💻 code sample sh npm ci npm run build npm run sass 🌍 your environment software version s parcel node npm os windows or debian in wsl
1
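The Parcel/Sass failure above happens because a bare `@use '@material/checkbox'` resolves to the package's `index.js`, which the Sass parser cannot read. A resolver that tries Sass entry candidates before anything else avoids the problem; the sketch below is a simplified illustration under assumed candidate names, not Parcel's or Dart Sass's actual resolver.

```python
import os

# Entry-file candidates a Sass importer should try for a bare package
# @use; if only index.js exists, compilation fails as in the report.
SASS_CANDIDATES = ["_index.scss", "index.scss", "_index.sass", "index.sass"]

def resolve_sass_use(target, load_paths):
    """Resolve a bare `@use "target"` against a list of load paths,
    preferring Sass entry files over whatever else the package ships."""
    for base in load_paths:
        pkg_dir = os.path.join(base, target)
        for name in SASS_CANDIDATES:
            candidate = os.path.join(pkg_dir, name)
            if os.path.isfile(candidate):
                return candidate
    return None
```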
726,060
24,986,462,670
IssuesEvent
2022-11-02 15:24:27
nprapps/elections22
https://api.github.com/repos/nprapps/elections22
closed
State pages: Incorporate voter guide links?
Priority: Low WashDesk
https://www.npr.org/2022/11/01/1132846749/voter-guides-election-midterm-2022 Ask Ben and EAJ how/whether they want to do this.
1.0
State pages: Incorporate voter guide links? - https://www.npr.org/2022/11/01/1132846749/voter-guides-election-midterm-2022 Ask Ben and EAJ how/whether they want to do this.
non_process
state pages incorporate voter guide links ask ben and eaj how whether they want to do this
0
8,532
11,705,585,991
IssuesEvent
2020-03-07 16:47:42
Ghost-chu/QuickShop-Reremake
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
closed
[BUG]shops still exists even not seen in database
Bug In Process Priority:Major
we have several shops that we cannt find in database but Quickshop still see it as a shop, when admins open it and quickshop tells us that we have bypassed quickshop lock. /qs remove is not working for that block, we can create a shop but after we removed it and right click at the chest,quickshop again tells us that we have bypassed quickshop lock Steps to reproduce the behavior: we donnt know, but our server's players said that this bug appears after they used /qs remove on that shop. And more and more shops are failing like I said above Expected behavior shops should be removed Screenshots i typed /qs remove, then /qs create 1, and then /qs remove, finally right clicking on that chest ![image](https://user-images.githubusercontent.com/33673786/75966330-1d57f400-5f05-11ea-96b8-09ba8b463223.png) Paste link: - https://paste.enginehub.org/ak7zzSjj
1.0
[BUG]shops still exists even not seen in database - we have several shops that we cannt find in database but Quickshop still see it as a shop, when admins open it and quickshop tells us that we have bypassed quickshop lock. /qs remove is not working for that block, we can create a shop but after we removed it and right click at the chest,quickshop again tells us that we have bypassed quickshop lock Steps to reproduce the behavior: we donnt know, but our server's players said that this bug appears after they used /qs remove on that shop. And more and more shops are failing like I said above Expected behavior shops should be removed Screenshots i typed /qs remove, then /qs create 1, and then /qs remove, finally right clicking on that chest ![image](https://user-images.githubusercontent.com/33673786/75966330-1d57f400-5f05-11ea-96b8-09ba8b463223.png) Paste link: - https://paste.enginehub.org/ak7zzSjj
process
shops still exists even not seen in database we have several shops that we cannt find in database but quickshop still see it as a shop when admins open it and quickshop tells us that we have bypassed quickshop lock qs remove is not working for that block we can create a shop but after we removed it and right click at the chest quickshop again tells us that we have bypassed quickshop lock steps to reproduce the behavior we donnt know but our server s players said that this bug appears after they used qs remove on that shop and more and more shops are failing like i said above expected behavior shops should be removed screenshots i typed qs remove then qs create and then qs remove finally right clicking on that chest paste link
1
39,683
5,242,157,502
IssuesEvent
2017-01-31 17:20:52
phetsims/color-vision
https://api.github.com/repos/phetsims/color-vision
opened
Automated Testing Build Error (1/31/2017 10:00 AM)
type:automated-testing
From https://github.com/phetsims/color-vision/commit/40a93ad1d117fd9854865a24185472fc9b7ab7b0 ``` Running "eslint:allFiles" (eslint) task /home/mendeleev/git/color-vision/js/singlebulb/view/SingleBulbScreenView.js 25:3 error Mismatched require statement values, FlashlightWireNode !== StopSignNode require-statement-match ✖ 1 problem (1 error, 0 warnings) Warning: Task "eslint:allFiles" failed. Use --force to continue. Aborted due to warnings. ```
1.0
Automated Testing Build Error (1/31/2017 10:00 AM) - From https://github.com/phetsims/color-vision/commit/40a93ad1d117fd9854865a24185472fc9b7ab7b0 ``` Running "eslint:allFiles" (eslint) task /home/mendeleev/git/color-vision/js/singlebulb/view/SingleBulbScreenView.js 25:3 error Mismatched require statement values, FlashlightWireNode !== StopSignNode require-statement-match ✖ 1 problem (1 error, 0 warnings) Warning: Task "eslint:allFiles" failed. Use --force to continue. Aborted due to warnings. ```
non_process
automated testing build error am from running eslint allfiles eslint task home mendeleev git color vision js singlebulb view singlebulbscreenview js error mismatched require statement values flashlightwirenode stopsignnode require statement match ✖ problem error warnings warning task eslint allfiles failed use force to continue aborted due to warnings
0
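The lint error in the record above fires when a require'd variable name does not match the module's basename. A Python sketch of that check, mimicking the intent of PhET's `require-statement-match` rule (the real rule is an ESLint plugin written in JavaScript; the regex here is a simplification):

```python
import re

# Matches `var X = require('path/Y')` style statements; a simplified
# pattern, not the one the actual ESLint rule uses.
REQUIRE_RE = re.compile(
    r"(?:var|const|let)\s+(\w+)\s*=\s*require\(\s*['\"]([^'\"]+)['\"]\s*\)"
)

def mismatched_requires(source):
    """Return (variable, module-basename) pairs that disagree, the way
    the require-statement-match lint rule flags them."""
    problems = []
    for var_name, path in REQUIRE_RE.findall(source):
        basename = path.rsplit("/", 1)[-1]
        if var_name != basename:
            problems.append((var_name, basename))
    return problems
```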
15,281
19,271,450,607
IssuesEvent
2021-12-10 06:14:38
DSE511-Project3-Team/DSE511-Project-3-Code-Repo
https://api.github.com/repos/DSE511-Project3-Team/DSE511-Project-3-Code-Repo
closed
Preprocess the dataset, compress it, upload to GitHub
Preprocess
Russ is on this! I will have the final result either end of today or tomorrow morning and we can go over the changes I made and how to work with the compressed version. Our goal is to trim it down a lot from 1.3 Million observations to around 75 thousand. We will only use data from certain US Cities.
1.0
Preprocess the dataset, compress it, upload to GitHub - Russ is on this! I will have the final result either end of today or tomorrow morning and we can go over the changes I made and how to work with the compressed version. Our goal is to trim it down a lot from 1.3 Million observations to around 75 thousand. We will only use data from certain US Cities.
process
preprocess the dataset compress it upload to github russ is on this i will have the final result either end of today or tomorrow morning and we can go over the changes i made and how to work with the compressed version our goal is to trim it down a lot from million observations to around thousand we will only use data from certain us cities
1
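The preprocessing described above (trim a large dataset to rows from selected US cities, then compress) can be sketched with the standard library. The city names and `City` column are assumptions for illustration; the issue does not name the actual cities or schema.

```python
import csv
import gzip

# Illustrative subset; the issue does not say which cities were kept.
KEEP_CITIES = {"Los Angeles", "Houston", "New York"}

def trim_and_compress(rows, out_path):
    """Keep only rows from selected cities and write them gzip-compressed,
    roughly the 1.3M -> ~75k trimming described in the issue."""
    kept = [r for r in rows if r.get("City") in KEEP_CITIES]
    if not kept:
        return 0
    with gzip.open(out_path, "wt", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(kept[0].keys()))
        writer.writeheader()
        writer.writerows(kept)
    return len(kept)
```

The gzip output stays a valid CSV, so collaborators can load it directly from the compressed file.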
582,201
17,355,794,180
IssuesEvent
2021-07-29 14:16:18
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
Move iOS recovery code option to Backup & Restore (Manage Your Wallet) area
OS/Desktop QA/Yes feature/rewards good first issue priority/P3 release-notes/include
## Description Since this applies to fewer and fewer users over time, this item is too upfront and should be moved out of the way (but retained). ![image](https://user-images.githubusercontent.com/11497541/125668487-1c28a1e2-c6e0-4f88-8778-031b62b989b9.png) ## Solution In the `Restore` tab in the Manage Your Wallet modal area, add a line that says: >**View QR Code** for iOS Rewards users transferring BAT. The "View QR code" will be a hyperlink that displays the QR code as it currently works. **Placement:** This line should go under the paragraph ending in "... before restoring" on a new paragraph line. ![image](https://user-images.githubusercontent.com/11497541/125668835-37188af9-8338-4002-a9fb-5c7525a26ffc.png) ## Discussion Please let me know if you think we should add a completely separate tab instead.
1.0
Move iOS recovery code option to Backup & Restore (Manage Your Wallet) area - ## Description Since this applies to fewer and fewer users over time, this item is too upfront and should be moved out of the way (but retained). ![image](https://user-images.githubusercontent.com/11497541/125668487-1c28a1e2-c6e0-4f88-8778-031b62b989b9.png) ## Solution In the `Restore` tab in the Manage Your Wallet modal area, add a line that says: >**View QR Code** for iOS Rewards users transferring BAT. The "View QR code" will be a hyperlink that displays the QR code as it currently works. **Placement:** This line should go under the paragraph ending in "... before restoring" on a new paragraph line. ![image](https://user-images.githubusercontent.com/11497541/125668835-37188af9-8338-4002-a9fb-5c7525a26ffc.png) ## Discussion Please let me know if you think we should add a completely separate tab instead.
non_process
move ios recovery code option to backup restore manage your wallet area description since this applies to fewer and fewer users over time this item is too upfront and should be moved out of the way but retained solution in the restore tab in the manage your wallet modal area add a line that says view qr code for ios rewards users transferring bat the view qr code will be a hyperlink that displays the qr code as it currently works placement this line should go under the paragraph ending in before restoring on a new paragraph line discussion please let me know if you think we should add a completely separate tab instead
0
113,446
11,802,927,416
IssuesEvent
2020-03-18 22:45:56
Matteas-Eden/roll-for-reaction
https://api.github.com/repos/Matteas-Eden/roll-for-reaction
closed
Add more information to README.md
documentation good first issue
**User Story** <!--As a [user role], I'd like to [do something], so that [some goal].--> As a repository maintainer, I'd like to have a well-written README which accurately describes the project and fairly credits contributors, so that visitors to the repository can better understand the aims and efforts of the project. **Acceptance Criteria** - The README contains all relevant information pertaining to the project - The README contains the full list of planned features - The README lists all contributors of the project **Notes** - The README needs to be updated separately for the `gh-pages` branch. - This is possibly a duplicate of #5
1.0
Add more information to README.md - **User Story** <!--As a [user role], I'd like to [do something], so that [some goal].--> As a repository maintainer, I'd like to have a well-written README which accurately describes the project and fairly credits contributors, so that visitors to the repository can better understand the aims and efforts of the project. **Acceptance Criteria** - The README contains all relevant information pertaining to the project - The README contains the full list of planned features - The README lists all contributors of the project **Notes** - The README needs to be updated separately for the `gh-pages` branch. - This is possibly a duplicate of #5
non_process
add more information to readme md user story as a repository maintainer i d like to have a well written readme which accurately describes the project and fairly credits contributors so that visitors to the repository can better understand the aims and efforts of the project acceptance criteria the readme contains all relevant information pertaining to the project the readme contains the full list of planned features the readme lists all contributors of the project notes the readme needs to be updated separately for the gh pages branch this is possibly a duplicate of
0
2,023
4,846,800,850
IssuesEvent
2016-11-10 13:04:20
raphym/Simulation-of-routing-problem-with-intelligent-agents
https://api.github.com/repos/raphym/Simulation-of-routing-problem-with-intelligent-agents
opened
Start the project
being processed
- Familiarity with the language C / C++ - Think about how to simulate the dynamism in a traffic city - Which Elements i have to create for the city -Which property the elements need to exist in the city
1.0
Start the project - - Familiarity with the language C / C++ - Think about how to simulate the dynamism in a traffic city - Which Elements i have to create for the city -Which property the elements need to exist in the city
process
start the project familiarity with the language c c think about how to simulate the dynamism in a traffic city which elements i have to create for the city which property the elements need to exist in the city
1
21,725
30,232,625,252
IssuesEvent
2023-07-06 08:04:27
UnitTestBot/UTBotJava
https://api.github.com/repos/UnitTestBot/UTBotJava
closed
Use appropriate database settings for Spring integration tests generation
ctg-enhancement comp-instrumented-process comp-spring
**Description** Consider generating integration tests for Spring project. Concrete execution is used, it requires to establish database connection for many projects Concrete database settings must be the following (h2 in-memory database is used): ``` spring: datasource: url: jdbc:h2:mem:testdb username: sa password: password jpa: hibernate: ddl-auto: create-drop show-sql: true generate-ddl: true ```
1.0
Use appropriate database settings for Spring integration tests generation - **Description** Consider generating integration tests for Spring project. Concrete execution is used, it requires to establish database connection for many projects Concrete database settings must be the following (h2 in-memory database is used): ``` spring: datasource: url: jdbc:h2:mem:testdb username: sa password: password jpa: hibernate: ddl-auto: create-drop show-sql: true generate-ddl: true ```
process
use appropriate database settings for spring integration tests generation description consider generating integration tests for spring project concrete execution is used it requires to establish database connection for many projects concrete database settings must be the following in memory database is used spring datasource url jdbc mem testdb username sa password password jpa hibernate ddl auto create drop show sql true generate ddl true
1
8,250
11,421,370,014
IssuesEvent
2020-02-03 12:02:33
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
PostHTML, posthtml-w3c not working!
:bug: Bug HTML Preprocessing Stale
Hello there! Trying to setup [posthtml-w3c](https://github.com/posthtml/posthtml-w3c) in Parcel project, so I installed it to my project `yarn add -D posthtml-w3c` than add `.posthtmlrc` **.posthtmlrc** ```json { "plugins": { "posthtml-w3c": {} } } ``` I definitely made some w3c mistakes in `index.html` and expect to see any warnings in my console, but it's seems not working. ```html <!DOCTYPE html> <html lang="ru"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Блог</title> <link rel="stylesheet" href="./assets/styles/index.scss"> </head> <body> <div class="container container--header"> <header class="header" custom-data="1"> <a href="#"> <h1 check-my-bad-attr>H1</h1> </a> </header> </div> </body> </html> ``` ![img-2019-05-25-19-20-35](https://user-images.githubusercontent.com/10127643/58372214-11291a00-7f23-11e9-9c84-e88b72b5c7a1.png) I tried to find the necessary documentation from both [Parcel transformation posthtml page](https://ru.parceljs.org/transforms.html#posthtml) and the [posthtml-w3c page](https://github.com/posthtml/posthtml-w3c), but it is scanty and it is not clear how to use everything together. There is only this example: ```json { "plugins": { "posthtml-img-autosize": { "root": "./images" } } } ``` And if you look at the _posthtml-w3c_ sources, you can see that the package does not use any options to configure it. ¯\\\_(ツ)_/¯ ![img-2019-05-25-19-41-03](https://user-images.githubusercontent.com/10127643/58372404-56e6e200-7f25-11e9-852a-a32f0bdf770b.png) Could you please help me? How to configure `.posthtmlrc` for that and another different plugins?
1.0
PostHTML, posthtml-w3c not working! - Hello there! Trying to setup [posthtml-w3c](https://github.com/posthtml/posthtml-w3c) in Parcel project, so I installed it to my project `yarn add -D posthtml-w3c` than add `.posthtmlrc` **.posthtmlrc** ```json { "plugins": { "posthtml-w3c": {} } } ``` I definitely made some w3c mistakes in `index.html` and expect to see any warnings in my console, but it's seems not working. ```html <!DOCTYPE html> <html lang="ru"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Блог</title> <link rel="stylesheet" href="./assets/styles/index.scss"> </head> <body> <div class="container container--header"> <header class="header" custom-data="1"> <a href="#"> <h1 check-my-bad-attr>H1</h1> </a> </header> </div> </body> </html> ``` ![img-2019-05-25-19-20-35](https://user-images.githubusercontent.com/10127643/58372214-11291a00-7f23-11e9-9c84-e88b72b5c7a1.png) I tried to find the necessary documentation from both [Parcel transformation posthtml page](https://ru.parceljs.org/transforms.html#posthtml) and the [posthtml-w3c page](https://github.com/posthtml/posthtml-w3c), but it is scanty and it is not clear how to use everything together. There is only this example: ```json { "plugins": { "posthtml-img-autosize": { "root": "./images" } } } ``` And if you look at the _posthtml-w3c_ sources, you can see that the package does not use any options to configure it. ¯\\\_(ツ)_/¯ ![img-2019-05-25-19-41-03](https://user-images.githubusercontent.com/10127643/58372404-56e6e200-7f25-11e9-852a-a32f0bdf770b.png) Could you please help me? How to configure `.posthtmlrc` for that and another different plugins?
process
posthtml posthtml not working hello there trying to setup in parcel project so i installed it to my project yarn add d posthtml than add posthtmlrc posthtmlrc json plugins posthtml i definitely made some mistakes in index html and expect to see any warnings in my console but it s seems not working html блог i tried to find the necessary documentation from both and the but it is scanty and it is not clear how to use everything together there is only this example json plugins posthtml img autosize root images and if you look at the posthtml sources you can see that the package does not use any options to configure it ¯ ツ ¯ could you please help me how to configure posthtmlrc for that and another different plugins
1
5,615
8,475,097,612
IssuesEvent
2018-10-24 17:58:19
easy-software-ufal/annotations_repos
https://api.github.com/repos/easy-software-ufal/annotations_repos
opened
Starcounter/Starcounter.Authorization System.InvalidCastException - class & handler without CheckPermission attribute
ADA C# test wrong processing
Issue: `https://github.com/Starcounter/Starcounter.Authorization/issues/25` PR: `https://github.com/Starcounter/Starcounter.Authorization/commit/e26ca80c0ef102295e054fceeb9accb7de69b050` Simulated by adding RequirePermissionAttribute.
1.0
Starcounter/Starcounter.Authorization System.InvalidCastException - class & handler without CheckPermission attribute - Issue: `https://github.com/Starcounter/Starcounter.Authorization/issues/25` PR: `https://github.com/Starcounter/Starcounter.Authorization/commit/e26ca80c0ef102295e054fceeb9accb7de69b050` Simulated by adding RequirePermissionAttribute.
process
starcounter starcounter authorization system invalidcastexception class handler without checkpermission attribute issue pr simulated by adding requirepermissionattribute
1
17,632
23,447,657,574
IssuesEvent
2022-08-15 21:28:06
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
reopened
[processor/k8sattributes] k8s.deployment.name attribute can be set to a wrong value
bug priority:p2 processor/k8sattributes
`k8s.deployment.name` attribute value is taken from the pod name using regexp. This is not a robust approach. In some scenarios it can set the attribute to a wrong value. For example a Stateful Set called `sts-test` will create a pod called `sts-test-1`, telemetry from that pod will get the attribute `k8s.deployment.name: sts` which is incorrect, `k8s.deployment.name` should not be set for such pod. There are two ways to solve the problem: 1. Make the [regexp](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/k8sattributesprocessor/internal/kube/client.go#L64) more strict. This will lower chances for getting a wrong attribute, but it's impossible to completely eliminate the chances with the regexp approach 2. Use the k8s API to fetch an owner of the pod, check if it's replicaset controlled by a deployment, then set the correct value for `k8s.deplyment.name` attribute. This is a robust approach, but we need to make sure it doesn't introduce a significant computational overhead and load on k8s API.
1.0
[processor/k8sattributes] k8s.deployment.name attribute can be set to a wrong value - `k8s.deployment.name` attribute value is taken from the pod name using regexp. This is not a robust approach. In some scenarios it can set the attribute to a wrong value. For example a Stateful Set called `sts-test` will create a pod called `sts-test-1`, telemetry from that pod will get the attribute `k8s.deployment.name: sts` which is incorrect, `k8s.deployment.name` should not be set for such pod. There are two ways to solve the problem: 1. Make the [regexp](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/k8sattributesprocessor/internal/kube/client.go#L64) more strict. This will lower chances for getting a wrong attribute, but it's impossible to completely eliminate the chances with the regexp approach 2. Use the k8s API to fetch an owner of the pod, check if it's replicaset controlled by a deployment, then set the correct value for `k8s.deplyment.name` attribute. This is a robust approach, but we need to make sure it doesn't introduce a significant computational overhead and load on k8s API.
process
deployment name attribute can be set to a wrong value deployment name attribute value is taken from the pod name using regexp this is not a robust approach in some scenarios it can set the attribute to a wrong value for example a stateful set called sts test will create a pod called sts test telemetry from that pod will get the attribute deployment name sts which is incorrect deployment name should not be set for such pod there are two ways to solve the problem make the more strict this will lower chances for getting a wrong attribute but it s impossible to completely eliminate the chances with the regexp approach use the api to fetch an owner of the pod check if it s replicaset controlled by a deployment then set the correct value for deplyment name attribute this is a robust approach but we need to make sure it doesn t introduce a significant computational overhead and load on api
1
15,533
19,703,296,808
IssuesEvent
2022-01-12 18:54:19
googleapis/python-ndb
https://api.github.com/repos/googleapis/python-ndb
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'python-ndb' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'python-ndb' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname python ndb invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
23,549
10,904,333,892
IssuesEvent
2019-11-20 08:29:34
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
closed
Vulnerability roundup 71: libsndfile-1.0.28: 1 advisory
1.severity: security
[search](https://search.nix.gsc.io/?q=libsndfile&i=fosho&repos=nixos-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=libsndfile+in%3Apath&type=Code) * [ ] [CVE-2018-13419](https://nvd.nist.gov/vuln/detail/CVE-2018-13419) (nixos-19.03) Scanned versions: nixos-19.03: e0c7712eac6. May contain false positives.
True
Vulnerability roundup 71: libsndfile-1.0.28: 1 advisory - [search](https://search.nix.gsc.io/?q=libsndfile&i=fosho&repos=nixos-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=libsndfile+in%3Apath&type=Code) * [ ] [CVE-2018-13419](https://nvd.nist.gov/vuln/detail/CVE-2018-13419) (nixos-19.03) Scanned versions: nixos-19.03: e0c7712eac6. May contain false positives.
non_process
vulnerability roundup libsndfile advisory nixos scanned versions nixos may contain false positives
0
5,998
8,805,990,245
IssuesEvent
2018-12-27 00:00:44
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Reltables don't get processed if external conrefs are defined
bug needs reproduction preprocess/conref stale
I have a number of ditamaps each in a directory with its respective content. One map contains generic content that we share with other topics via conrefs. In one map, when the conrefs are active, its reltable content is not processed. If the conrefs are commented out, the reltables are processed as expected. The source topic IDs are in the file ./ADMINISTRATION/g_virtualChassis.dita. The destination topics are consumed by the file ./REFERENCE/API/c_aboutDDS.dita. We have DITA-OT 2.0.1. There are no errors thrown. The reltables just quietly fail to appear. The above scenario works as expected using the built-in DITA-OT 1.8 provided by OxygenXML.
1.0
Reltables don't get processed if external conrefs are defined - I have a number of ditamaps each in a directory with its respective content. One map contains generic content that we share with other topics via conrefs. In one map, when the conrefs are active, its reltable content is not processed. If the conrefs are commented out, the reltables are processed as expected. The source topic IDs are in the file ./ADMINISTRATION/g_virtualChassis.dita. The destination topics are consumed by the file ./REFERENCE/API/c_aboutDDS.dita. We have DITA-OT 2.0.1. There are no errors thrown. The reltables just quietly fail to appear. The above scenario works as expected using the built-in DITA-OT 1.8 provided by OxygenXML.
process
reltables don t get processed if external conrefs are defined i have a number of ditamaps each in a directory with its respective content one map contains generic content that we share with other topics via conrefs in one map when the conrefs are active its reltable content is not processed if the conrefs are commented out the reltables are processed as expected the source topic ids are in the file administration g virtualchassis dita the destination topics are consumed by the file reference api c aboutdds dita we have dita ot there are no errors thrown the reltables just quietly fail to appear the above scenario works as expected using the built in dita ot provided by oxygenxml
1
16,962
22,322,240,786
IssuesEvent
2022-06-14 07:35:56
quark-engine/quark-engine
https://api.github.com/repos/quark-engine/quark-engine
closed
Add API filter into Radiocontrast
issue-processing-state-06
When using Radiocontrast to generate rules, the API use count of API might be too much. That slow down the rule generation process and the memory consumption. So we should add API filter(second rule generation) into radiocontrast.
1.0
Add API filter into Radiocontrast - When using Radiocontrast to generate rules, the API use count of API might be too much. That slow down the rule generation process and the memory consumption. So we should add API filter(second rule generation) into radiocontrast.
process
add api filter into radiocontrast when using radiocontrast to generate rules the api use count of api might be too much that slow down the rule generation process and the memory consumption so we should add api filter second rule generation into radiocontrast
1
1,255
5,318,059,178
IssuesEvent
2017-02-14 00:33:41
diofant/diofant
https://api.github.com/repos/diofant/diofant
closed
Drop diofant/plotting/experimental_lambdify.py
maintainability plotting
This should be replaced with standard lambdify, maybe improved. See also sympy/sympy#11461, sympy/sympy#10925.
True
Drop diofant/plotting/experimental_lambdify.py - This should be replaced with standard lambdify, maybe improved. See also sympy/sympy#11461, sympy/sympy#10925.
non_process
drop diofant plotting experimental lambdify py this should be replaced with standard lambdify maybe improved see also sympy sympy sympy sympy
0
17,826
23,766,721,168
IssuesEvent
2022-09-01 13:21:21
firebase/firebase-cpp-sdk
https://api.github.com/repos/firebase/firebase-cpp-sdk
reopened
[C++] Nightly Integration Testing Report for Firestore
type: process nightly-testing
<hidden value="integration-test-status-comment"></hidden> ### ✅&nbsp; [build against repo] Integration test succeeded! Requested by @sunmou99 on commit 6872ba47417796c8c24c6eab435534ad34b7a843 Last updated: Wed Aug 31 05:26 PDT 2022 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2962975852)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 6872ba47417796c8c24c6eab435534ad34b7a843 Last updated: Wed Aug 31 07:37 PDT 2022 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2963704785)**
1.0
[C++] Nightly Integration Testing Report for Firestore - <hidden value="integration-test-status-comment"></hidden> ### ✅&nbsp; [build against repo] Integration test succeeded! Requested by @sunmou99 on commit 6872ba47417796c8c24c6eab435534ad34b7a843 Last updated: Wed Aug 31 05:26 PDT 2022 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2962975852)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 6872ba47417796c8c24c6eab435534ad34b7a843 Last updated: Wed Aug 31 07:37 PDT 2022 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2963704785)**
process
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated wed aug pdt ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated wed aug pdt
1
20,693
27,364,225,089
IssuesEvent
2023-02-27 17:54:25
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[Epic] Support Honey SQL 2 compilation for SQL drivers [46]
Querying/Processor .Backend .Epic
This should be fairly straightforward since I reworked `metabase.util.honeysql-extensions` to support targeting either Honey SQL 1 or Honey SQL 2 in #27619. Basically if `hx/*honey-sql-version*` is bound to `1` all of the `hx/` stuff targets Honey SQL 1, and if it's bound to 2 all of the `hx/` stuff targets Honey SQL 2. #27975 builds on this a bit and adds `hx/call` and `hx/raw` for cross-version compatibility as well So at this point we can start moving all over our SQL drivers over to Honey SQL 2. I think we should hold off on migrating at least one of our drivers to Honey SQL 1 until we remove support completely -- that way we can still test that we support Honey SQL 1 until we remove it entirely. `:sqlserver` seems like a good choice to me because it needs to do some weird stuff like implement its own behavior for `:limit`, so it should get good coverage. ```[tasklist] ### In 46: - [x] #28154 - [ ] #28155 - [ ] #28157 - [x] #28158 - [ ] #28159 - [x] #28160 - [ ] #28162 - [ ] #28163 - [ ] #28164 - [ ] #28166 - [x] #28167 - [ ] #28168 - [ ] #15342 - [x] Should we rename `metabase.util.honey-sql-2-extensions` to `metabase.util.honey-sql-2`? What's the point of `-extensions`? Also for Honey SQL 1. - [ ] Consider moving the `TypedHoneySQLForm` and `Identifier` classes back into `metabase.util.honeysql_extensions` to avoid breaking changes in the drivers ``` See #28684 for 47-49 follow-on work
1.0
[Epic] Support Honey SQL 2 compilation for SQL drivers [46] - This should be fairly straightforward since I reworked `metabase.util.honeysql-extensions` to support targeting either Honey SQL 1 or Honey SQL 2 in #27619. Basically if `hx/*honey-sql-version*` is bound to `1` all of the `hx/` stuff targets Honey SQL 1, and if it's bound to 2 all of the `hx/` stuff targets Honey SQL 2. #27975 builds on this a bit and adds `hx/call` and `hx/raw` for cross-version compatibility as well So at this point we can start moving all over our SQL drivers over to Honey SQL 2. I think we should hold off on migrating at least one of our drivers to Honey SQL 1 until we remove support completely -- that way we can still test that we support Honey SQL 1 until we remove it entirely. `:sqlserver` seems like a good choice to me because it needs to do some weird stuff like implement its own behavior for `:limit`, so it should get good coverage. ```[tasklist] ### In 46: - [x] #28154 - [ ] #28155 - [ ] #28157 - [x] #28158 - [ ] #28159 - [x] #28160 - [ ] #28162 - [ ] #28163 - [ ] #28164 - [ ] #28166 - [x] #28167 - [ ] #28168 - [ ] #15342 - [x] Should we rename `metabase.util.honey-sql-2-extensions` to `metabase.util.honey-sql-2`? What's the point of `-extensions`? Also for Honey SQL 1. - [ ] Consider moving the `TypedHoneySQLForm` and `Identifier` classes back into `metabase.util.honeysql_extensions` to avoid breaking changes in the drivers ``` See #28684 for 47-49 follow-on work
process
support honey sql compilation for sql drivers this should be fairly straightforward since i reworked metabase util honeysql extensions to support targeting either honey sql or honey sql in basically if hx honey sql version is bound to all of the hx stuff targets honey sql and if it s bound to all of the hx stuff targets honey sql builds on this a bit and adds hx call and hx raw for cross version compatibility as well so at this point we can start moving all over our sql drivers over to honey sql i think we should hold off on migrating at least one of our drivers to honey sql until we remove support completely that way we can still test that we support honey sql until we remove it entirely sqlserver seems like a good choice to me because it needs to do some weird stuff like implement its own behavior for limit so it should get good coverage in should we rename metabase util honey sql extensions to metabase util honey sql what s the point of extensions also for honey sql consider moving the typedhoneysqlform and identifier classes back into metabase util honeysql extensions to avoid breaking changes in the drivers see for follow on work
1
8,455
11,628,121,711
IssuesEvent
2020-02-27 17:43:16
Altinn/altinn-studio
https://api.github.com/repos/Altinn/altinn-studio
closed
AREA: process
area-overview area/process
A Service created in Altinn Studio will have a workflow that describe the process a instance of a service will go through from beginning to end. A workflow can have many steps of different types. - FormFilling - Sendin - Signing - Payment - ParalellSigning Workflow functionality spans both Designer and Runtime application **Related tasks** - [x] Define the workflow format #180 #83 - [x] Generate the standard workflow file #413 - [x] Add standard workflow to service repository #931 - [ ] Analyze workflow need in designer to Altinn Studio #604 - [x] Create a workflow designer #126 - [x] Add functionality to set workflow related configuration #25 - [x] Add workflow validation #130 - [x] Add workflow service to runtime to handle workflow (read and update state) in Altinn Studio - [x] Add workflow service to runtime to handle workflow (read and update state) in Altinn Studio Apps - [x] Update runtime to use workflow file for navigation
1.0
AREA: process - A Service created in Altinn Studio will have a workflow that describe the process a instance of a service will go through from beginning to end. A workflow can have many steps of different types. - FormFilling - Sendin - Signing - Payment - ParalellSigning Workflow functionality spans both Designer and Runtime application **Related tasks** - [x] Define the workflow format #180 #83 - [x] Generate the standard workflow file #413 - [x] Add standard workflow to service repository #931 - [ ] Analyze workflow need in designer to Altinn Studio #604 - [x] Create a workflow designer #126 - [x] Add functionality to set workflow related configuration #25 - [x] Add workflow validation #130 - [x] Add workflow service to runtime to handle workflow (read and update state) in Altinn Studio - [x] Add workflow service to runtime to handle workflow (read and update state) in Altinn Studio Apps - [x] Update runtime to use workflow file for navigation
process
area process a service created in altinn studio will have a workflow that describe the process a instance of a service will go through from beginning to end a workflow can have many steps of different types formfilling sendin signing payment paralellsigning workflow functionality spans both designer and runtime application related tasks define the workflow format generate the standard workflow file add standard workflow to service repository analyze workflow need in designer to altinn studio create a workflow designer add functionality to set workflow related configuration add workflow validation add workflow service to runtime to handle workflow read and update state in altinn studio add workflow service to runtime to handle workflow read and update state in altinn studio apps update runtime to use workflow file for navigation
1
1,188
3,689,044,422
IssuesEvent
2016-02-25 15:12:43
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
opened
Keyref processing overwrites user-authored file with clone
bug preprocess/keyref
[Fixtures](https://github.com/eerohele/dita-ot-issues/tree/master/fixtures/2240). Related to #2134 insofar as it's related to keyscopes and generating topic clones. Might even be the same root cause. Given: ```xml <!-- root.ditamap --> <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd"> <map id="map" title="DITA Map"> <topicref href="topic1.dita"/> <topicref href="topic1.dita"> <topicref href="topic1-1.dita"/> </topicref> </map> <!-- topic1.dita --> <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"> <topic id="topic1" xml:lang="en-us"> <title>Topic 1</title> <body> <p><keyword keyref="it-does-not-matter-whether-i-have-a-valid-definition"/></p> </body> </topic> <!-- topic1-1.dita --> <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"> <topic id="topic1-1" xml:lang="en-us"> <title>Topic 1-1</title> <body> <p>Hello, world 1-1!</p> </body> </topic> ``` `topic1-1.dita` gets overwritten in preprocessing when topic clones are generated for key resolution purposes (see #2134). The effect is clearly visible in the PDF: <img width="547" alt="screen shot 2016-02-25 at 17 11 02" src="https://cloud.githubusercontent.com/assets/31859/13323590/d590bd68-dbe2-11e5-926f-5ec22ba0012a.png"> The last topic should be "Topic 1-1", not "Topic 1".
1.0
Keyref processing overwrites user-authored file with clone - [Fixtures](https://github.com/eerohele/dita-ot-issues/tree/master/fixtures/2240). Related to #2134 insofar as it's related to keyscopes and generating topic clones. Might even be the same root cause. Given: ```xml <!-- root.ditamap --> <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd"> <map id="map" title="DITA Map"> <topicref href="topic1.dita"/> <topicref href="topic1.dita"> <topicref href="topic1-1.dita"/> </topicref> </map> <!-- topic1.dita --> <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"> <topic id="topic1" xml:lang="en-us"> <title>Topic 1</title> <body> <p><keyword keyref="it-does-not-matter-whether-i-have-a-valid-definition"/></p> </body> </topic> <!-- topic1-1.dita --> <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"> <topic id="topic1-1" xml:lang="en-us"> <title>Topic 1-1</title> <body> <p>Hello, world 1-1!</p> </body> </topic> ``` `topic1-1.dita` gets overwritten in preprocessing when topic clones are generated for key resolution purposes (see #2134). The effect is clearly visible in the PDF: <img width="547" alt="screen shot 2016-02-25 at 17 11 02" src="https://cloud.githubusercontent.com/assets/31859/13323590/d590bd68-dbe2-11e5-926f-5ec22ba0012a.png"> The last topic should be "Topic 1-1", not "Topic 1".
process
keyref processing overwrites user authored file with clone related to insofar as it s related to keyscopes and generating topic clones might even be the same root cause given xml topic topic hello world dita gets overwritten in preprocessing when topic clones are generated for key resolution purposes see the effect is clearly visible in the pdf img width alt screen shot at src the last topic should be topic not topic
1
22,842
7,237,796,908
IssuesEvent
2018-02-13 12:24:36
akka/akka
https://api.github.com/repos/akka/akka
closed
Access to repo.akka.io snapshot repo fails with NullPointerException with sbt 1.x
3 - in progress t:build
The reason is that sbt expects a `Content-Type` header when downloading artifacts and our snapshot repo on `repo.akka.io` didn't provide this header for `pom` files. https://github.com/sbt/sbt/issues/3941 https://github.com/sbt/librarymanagement/issues/180
1.0
Access to repo.akka.io snapshot repo fails with NullPointerException with sbt 1.x - The reason is that sbt expects a `Content-Type` header when downloading artifacts and our snapshot repo on `repo.akka.io` didn't provide this header for `pom` files. https://github.com/sbt/sbt/issues/3941 https://github.com/sbt/librarymanagement/issues/180
non_process
access to repo akka io snapshot repo fails with nullpointerexception with sbt x the reason is that sbt expects a content type header when downloading artifacts and our snapshot repo on repo akka io didn t provide this header for pom files
0
21,307
28,500,933,656
IssuesEvent
2023-04-18 17:16:36
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
opened
Release 0.7.0
process + tools
I'd like to do a release in the next couple of weeks. Please add issues you'd like to see included to https://github.com/pystatgen/sgkit/milestone/5.
1.0
Release 0.7.0 - I'd like to do a release in the next couple of weeks. Please add issues you'd like to see included to https://github.com/pystatgen/sgkit/milestone/5.
process
release i d like to do a release in the next couple of weeks please add issues you d like to see included to
1
217,843
16,890,543,751
IssuesEvent
2021-06-23 08:43:45
k8-proxy/cloud-sdk-automation
https://api.github.com/repos/k8-proxy/cloud-sdk-automation
opened
To automate User can request analysis as zip file using binary data API
Automation test P1
Background: making sure SDK machine is running Given the SDK machine is running And Health Check shows status OK Scenario: User can request analysis as zip file using binary data Given a File has been processed When the Endpoint http://<ip_address>:8080/api/analyse/rebuild-zip-from-file Then the user receives a 200 OK response And the zip file's binary content is returned to the client.
1.0
To automate User can request analysis as zip file using binary data API - Background: making sure SDK machine is running Given the SDK machine is running And Health Check shows status OK Scenario: User can request analysis as zip file using binary data Given a File has been processed When the Endpoint http://<ip_address>:8080/api/analyse/rebuild-zip-from-file Then the user receives a 200 OK response And the zip file's binary content is returned to the client.
non_process
to automate user can request analysis as zip file using binary data api background making sure sdk machine is running given the sdk machine is running and health check shows status ok scenario user can request analysis as zip file using binary data given a file has been processed when the endpoint then the user receives a ok response and the zip file s binary content is returned to the client
0
13,085
15,434,976,815
IssuesEvent
2021-03-07 06:28:54
emily-writes-poems/emily-writes-poems-processing
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
closed
add confirmation that feature edit was saved
enhancement processing
Some confirmation messaging to make sure save button actually pressed and went through Related to what I did in https://github.com/emily-writes-poems/emily-writes-poems-processing/issues/6
1.0
add confirmation that feature edit was saved - Some confirmation messaging to make sure save button actually pressed and went through Related to what I did in https://github.com/emily-writes-poems/emily-writes-poems-processing/issues/6
process
add confirmation that feature edit was saved some confirmation messaging to make sure save button actually pressed and went through related to what i did in
1
15,800
19,987,152,076
IssuesEvent
2022-01-30 20:45:48
hoprnet/hoprnet
https://api.github.com/repos/hoprnet/hoprnet
opened
Create process for new bounty program
new issue processes
<!--- Please DO NOT remove the automatically added 'new issue' label --> <!--- Provide a general summary of the issue in the Title above --> Describe how the new bounty program will be run, how tech and community will be involved.
1.0
Create process for new bounty program - <!--- Please DO NOT remove the automatically added 'new issue' label --> <!--- Provide a general summary of the issue in the Title above --> Describe how the new bounty program will be run, how tech and community will be involved.
process
create process for new bounty program describe how the new bounty program will be run how tech and community will be involved
1
237,176
18,153,640,893
IssuesEvent
2021-09-26 17:52:12
girlscript/winter-of-contributing
https://api.github.com/repos/girlscript/winter-of-contributing
closed
Bootstrap 5
documentation GWOC21 Assigned Frontend Dev HTML/CSS/JS
### Description - what is bootstrap 5 - what is the difference between bootstrap 5 and bootstrap 4 - reference bootstrap 5 ### Domain Frontend Dev HTML/CSS/JS ### Type of Contribution Documentation ### Code of Conduct - [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project.
1.0
Bootstrap 5 - ### Description - what is bootstrap 5 - what is the difference between bootstrap 5 and bootstrap 4 - reference bootstrap 5 ### Domain Frontend Dev HTML/CSS/JS ### Type of Contribution Documentation ### Code of Conduct - [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project.
non_process
bootstrap description what is bootstrap what is the difference between bootstrap and bootstrap reference bootstrap domain frontend dev html css js type of contribution documentation code of conduct i follow of this project
0
162,345
25,522,401,059
IssuesEvent
2022-11-28 21:46:59
Regalis11/Barotrauma
https://api.github.com/repos/Regalis11/Barotrauma
closed
[Factions] Specific vending machine gets filled with outpost items in output
Bug Design Unstable
### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? A very specific vending machinee, on a city module, will always have some kind of engineering item in the output. Specifically the first vending machine in engineeringmodule_01, as it uses the improper tags, being vendingmachine,locker,container,outpostengcab. Please note that this also happens in variants of the module, like the clown variant etc. ![image](https://user-images.githubusercontent.com/104232152/201721067-491d4e0a-6edb-4bf2-a073-fb25eb9e97b9.png) ### Reproduction steps 1.Spawn into a city outpost 2.Check the vending machine of engineeringmodule_01 ### Bug prevalence Happens every time I play ### Version Faction/endgame test branch ### - _No response_ ### Which operating system did you encounter this bug on? Windows ### Relevant error messages and crash reports _No response_
1.0
[Factions] Specific vending machine gets filled with outpost items in output - ### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? A very specific vending machinee, on a city module, will always have some kind of engineering item in the output. Specifically the first vending machine in engineeringmodule_01, as it uses the improper tags, being vendingmachine,locker,container,outpostengcab. Please note that this also happens in variants of the module, like the clown variant etc. ![image](https://user-images.githubusercontent.com/104232152/201721067-491d4e0a-6edb-4bf2-a073-fb25eb9e97b9.png) ### Reproduction steps 1.Spawn into a city outpost 2.Check the vending machine of engineeringmodule_01 ### Bug prevalence Happens every time I play ### Version Faction/endgame test branch ### - _No response_ ### Which operating system did you encounter this bug on? Windows ### Relevant error messages and crash reports _No response_
non_process
specific vending machine gets filled with outpost items in output disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened a very specific vending machine on a city module will always have some kind of engineering item in the output specifically the first vending machine in engineeringmodule as it uses the improper tags being vendingmachine locker container outpostengcab please note that this also happens in variants of the module like the clown variant etc reproduction steps spawn into a city outpost check the vending machine of engineeringmodule bug prevalence happens every time i play version faction endgame test branch no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response
0
176,815
21,443,066,493
IssuesEvent
2022-04-25 01:04:31
jgeraigery/spring-session
https://api.github.com/repos/jgeraigery/spring-session
closed
WS-2016-7107 (Medium) detected in spring-security-web-5.1.4.RELEASE.jar - autoclosed
security vulnerability
## WS-2016-7107 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-5.1.4.RELEASE.jar</b></p></summary> <p>spring-security-web</p> <p>Path to dependency file: spring-session</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-web/5.1.4.RELEASE/fc4d93db3b56e8a6ca11f4484a8bb6d328e7accb/spring-security-web-5.1.4.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-web/5.1.4.RELEASE/fc4d93db3b56e8a6ca11f4484a8bb6d328e7accb/spring-security-web-5.1.4.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-security-2.1.3.RELEASE.jar (Root Library) - :x: **spring-security-web-5.1.4.RELEASE.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> CSRF tokens in Spring Security through 5.4.5 are vulnerable to a breach attack. Spring Security always returns the same CSRF token to the browser. <p>Publish Date: 2016-08-02 <p>URL: <a href=https://github.com/spring-projects/spring-security/issues/4001>WS-2016-7107</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.security","packageName":"spring-security-web","packageVersion":"5.1.4.RELEASE","packageFilePaths":["spring-session"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-security:2.1.3.RELEASE;org.springframework.security:spring-security-web:5.1.4.RELEASE","isMinimumFixVersionAvailable":false}],"baseBranches":[],"vulnerabilityIdentifier":"WS-2016-7107","vulnerabilityDetails":"CSRF tokens in Spring Security through 5.4.5 are vulnerable to a breach attack. Spring Security always returns the same CSRF token to the browser.","vulnerabilityUrl":"https://github.com/spring-projects/spring-security/issues/4001","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
WS-2016-7107 (Medium) detected in spring-security-web-5.1.4.RELEASE.jar - autoclosed - ## WS-2016-7107 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-5.1.4.RELEASE.jar</b></p></summary> <p>spring-security-web</p> <p>Path to dependency file: spring-session</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-web/5.1.4.RELEASE/fc4d93db3b56e8a6ca11f4484a8bb6d328e7accb/spring-security-web-5.1.4.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-web/5.1.4.RELEASE/fc4d93db3b56e8a6ca11f4484a8bb6d328e7accb/spring-security-web-5.1.4.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-security-2.1.3.RELEASE.jar (Root Library) - :x: **spring-security-web-5.1.4.RELEASE.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> CSRF tokens in Spring Security through 5.4.5 are vulnerable to a breach attack. Spring Security always returns the same CSRF token to the browser. <p>Publish Date: 2016-08-02 <p>URL: <a href=https://github.com/spring-projects/spring-security/issues/4001>WS-2016-7107</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.security","packageName":"spring-security-web","packageVersion":"5.1.4.RELEASE","packageFilePaths":["spring-session"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-security:2.1.3.RELEASE;org.springframework.security:spring-security-web:5.1.4.RELEASE","isMinimumFixVersionAvailable":false}],"baseBranches":[],"vulnerabilityIdentifier":"WS-2016-7107","vulnerabilityDetails":"CSRF tokens in Spring Security through 5.4.5 are vulnerable to a breach attack. Spring Security always returns the same CSRF token to the browser.","vulnerabilityUrl":"https://github.com/spring-projects/spring-security/issues/4001","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
ws medium detected in spring security web release jar autoclosed ws medium severity vulnerability vulnerable library spring security web release jar spring security web path to dependency file spring session path to vulnerable library root gradle caches modules files org springframework security spring security web release spring security web release jar root gradle caches modules files org springframework security spring security web release spring security web release jar dependency hierarchy spring boot starter security release jar root library x spring security web release jar vulnerable library vulnerability details csrf tokens in spring security through are vulnerable to a breach attack spring security always returns the same csrf token to the browser publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter security release org springframework security spring security web release isminimumfixversionavailable false basebranches vulnerabilityidentifier ws vulnerabilitydetails csrf tokens in spring security through are vulnerable to a breach attack spring security always returns the same csrf token to the browser vulnerabilityurl
0
5,777
8,616,863,273
IssuesEvent
2018-11-20 02:18:40
mozilla-tw/ScreenshotGo
https://api.github.com/repos/mozilla-tw/ScreenshotGo
closed
Need to be able to get my FCM device token somewhere
4. feature-backlog F1-retention firebase process
Previously in Firefox Rocket, we can access that token if click "Share" in settings multiple times. We need this in ScreenshotGo so that we can test FCM before officially sent.
1.0
Need to be able to get my FCM device token somewhere - Previously in Firefox Rocket, we can access that token if click "Share" in settings multiple times. We need this in ScreenshotGo so that we can test FCM before officially sent.
process
need to be able to get my fcm device token somewhere previously in firefox rocket we can access that token if click share in settings multiple times we need this in screenshotgo so that we can test fcm before officially sent
1
161,117
6,109,626,638
IssuesEvent
2017-06-21 13:30:02
JacquesCarette/literate-scientific-software
https://api.github.com/repos/JacquesCarette/literate-scientific-software
closed
Abbreviations vs Symbols?
Low Priority question
Currently, in the stable and CaseStudies version of the Document for GlassBR, `LSF` (Load Share Factor), `GTF` (Glass Type Factor), and `NFL` (Non-Factored Load) are in the **Abbreviations and Acronyms** table; however, since the following Instance Model ![image](https://user-images.githubusercontent.com/28247301/27353102-e01e239c-55d0-11e7-80a6-90b56f008609.png) uses GTF, LSF, and NFL, should they perhaps be in the "Table of Symbols" (i.e. is it better to say we are using _symbols_ or _abbreviations_ when calculating a value)?
1.0
Abbreviations vs Symbols? - Currently, in the stable and CaseStudies version of the Document for GlassBR, `LSF` (Load Share Factor), `GTF` (Glass Type Factor), and `NFL` (Non-Factored Load) are in the **Abbreviations and Acronyms** table; however, since the following Instance Model ![image](https://user-images.githubusercontent.com/28247301/27353102-e01e239c-55d0-11e7-80a6-90b56f008609.png) uses GTF, LSF, and NFL, should they perhaps be in the "Table of Symbols" (i.e. is it better to say we are using _symbols_ or _abbreviations_ when calculating a value)?
non_process
abbreviations vs symbols currently in the stable and casestudies version of the document for glassbr lsf load share factor gtf glass type factor and nfl non factored load are in the abbreviations and acronyms table however since the following instance model uses gtf lsf and nfl should they perhaps be in the table of symbols i e is it better to say we are using symbols or abbreviations when calculating a value
0
4,492
7,346,175,335
IssuesEvent
2018-03-07 19:49:22
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Redirect Uri Field does not exist when registering new Application in AzureAD
active-directory cxp in-process triaged
Step 1 Register Application, Point 5 (create a new web application): The "Redirect Uri" does not exist in Azure AD. It's now labelled as "Sign-on URL" in Azure (5th of March 2018) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 0574fe6c-3e23-b803-f12b-aa698d238d5b * Version Independent ID: bf7ebc09-c16f-0b1e-53f8-35f305af0c26 * [Content](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-devquickstarts-angular) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/develop/active-directory-devquickstarts-angular.md) * Service: active-directory
1.0
Redirect Uri Field does not exist when registering new Application in AzureAD - Step 1 Register Application, Point 5 (create a new web application): The "Redirect Uri" does not exist in Azure AD. It's now labelled as "Sign-on URL" in Azure (5th of March 2018) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 0574fe6c-3e23-b803-f12b-aa698d238d5b * Version Independent ID: bf7ebc09-c16f-0b1e-53f8-35f305af0c26 * [Content](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-devquickstarts-angular) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/develop/active-directory-devquickstarts-angular.md) * Service: active-directory
process
redirect uri field does not exist when registering new application in azuread step register application point create a new web application the redirect uri does not exist in azure ad it s now labelled as sign on url in azure of march document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service active directory
1
198,154
14,965,752,326
IssuesEvent
2021-01-27 13:47:19
openvinotoolkit/openvino
https://api.github.com/repos/openvinotoolkit/openvino
closed
[Bug]undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi
category: IE Tests category: VPU support_request
##### System information (version) - OpenVINO => 2020.4.287(2021.2.185) - Operating System / Platform => linux ubuntu 64 Bit - Compiler => Linux version 5.4.0-60-generic (buildd@lgw01-amd64-007) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) - Problem classification: Model Inference ##### Detailed description <!-- your description --> I have ssd_mobilenet_v2.xml, and then want to perform inference on ncs. The following operations are conducted. `cd /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp` `bash build_samples.sh` `cd ~/inference_engine_cpp_samples_build/intel64/Release` ` sudo ./benchmark_app -m ssd_mobilenet_v2.xml -d MYRIAD -i $OPENVINO_HOME/demo/car.png` **generating the following error:** `./benchmark_app: symbol lookup error: /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine.so: undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi` But, when I use cpu device with command `./benchmark_app -m ssd_mobilenet_v2.xml -i $OPENVINO_HOME/demo/car.png`, it is ok. 
I try to solve the problem: **Firstly, I checked with`ldd benchmark_app` and everything seems to be ok:** ` linux-vdso.so.1 (0x00007fff7acde000) libinference_engine_legacy.so => /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine_legacy.so (0x00007f6c38080000) libinference_engine.so => /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (0x00007f6c37e20000) libformat_reader.so => /home/maxiu/inference_engine_cpp_samples_build/intel64/Release/lib/libformat_reader.so (0x00007f6c37c14000) libopencv_core.so.4.4 => /opt/intel/openvino/opencv/lib/libopencv_core.so.4.4 (0x00007f6c36b1c000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6c36918000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f6c3658f000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6c36377000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6c35f86000) libtbb.so.2 => /opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib/libtbb.so.2 (0x00007f6c35d1e000) libinference_engine_transformations.so => /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine_transformations.so (0x00007f6c35a13000) libngraph.so => /opt/intel/openvino/deployment_tools/ngraph/lib/libngraph.so (0x00007f6c350cb000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6c34d2d000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6c34b0e000) /lib64/ld-linux-x86-64.so.2 (0x00007f6c387a1000) libopencv_imgcodecs.so.4.4 => /opt/intel/openvino/opencv/lib/libopencv_imgcodecs.so.4.4 (0x00007f6c3485e000) libopencv_imgproc.so.4.4 => /opt/intel/openvino/opencv/lib/libopencv_imgproc.so.4.4 (0x00007f6c324d4000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6c322cc000) ` **Secondly, I checked the tbb.** `cd /` `sudo find -name "libtbb.so"` `./home/xiu/anaconda3/lib/libtbb.so` `find: ‘./run/user/1000/gvfs’: Permission denied` `find: ‘./run/user/121/gvfs’: 
Permission denied` `./opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/tbb/lib/libtbb.so` `./usr/lib/x86_64-linux-gnu/libtbb.so` `export LD_LIBRARY_PATH=/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/tbb/lib/:$LD_LIBRARY_PATH` After configuring, `sudo ./benchmark_app -m ssd_mobilenet_v2.xml -d MYRIAD -i $OPENVINO_HOME/demo/car.png` The above error also occurred. <!-- Describe your problem and steps you've done before you got to this point. to add code example fence it with triple backticks and optional file extension ```.cpp // C++ code example ``` or attach as .txt or .zip file -->
1.0
[Bug]undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi - ##### System information (version) - OpenVINO => 2020.4.287(2021.2.185) - Operating System / Platform => linux ubuntu 64 Bit - Compiler => Linux version 5.4.0-60-generic (buildd@lgw01-amd64-007) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) - Problem classification: Model Inference ##### Detailed description <!-- your description --> I have ssd_mobilenet_v2.xml, and then want to perform inference on ncs. The following operations are conducted. `cd /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp` `bash build_samples.sh` `cd ~/inference_engine_cpp_samples_build/intel64/Release` ` sudo ./benchmark_app -m ssd_mobilenet_v2.xml -d MYRIAD -i $OPENVINO_HOME/demo/car.png` **generating the following error:** `./benchmark_app: symbol lookup error: /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine.so: undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi` But, when I use cpu device with command `./benchmark_app -m ssd_mobilenet_v2.xml -i $OPENVINO_HOME/demo/car.png`, it is ok. 
I try to solve the problem: **Firstly, I checked with`ldd benchmark_app` and everything seems to be ok:** ` linux-vdso.so.1 (0x00007fff7acde000) libinference_engine_legacy.so => /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine_legacy.so (0x00007f6c38080000) libinference_engine.so => /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (0x00007f6c37e20000) libformat_reader.so => /home/maxiu/inference_engine_cpp_samples_build/intel64/Release/lib/libformat_reader.so (0x00007f6c37c14000) libopencv_core.so.4.4 => /opt/intel/openvino/opencv/lib/libopencv_core.so.4.4 (0x00007f6c36b1c000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6c36918000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f6c3658f000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6c36377000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6c35f86000) libtbb.so.2 => /opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib/libtbb.so.2 (0x00007f6c35d1e000) libinference_engine_transformations.so => /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine_transformations.so (0x00007f6c35a13000) libngraph.so => /opt/intel/openvino/deployment_tools/ngraph/lib/libngraph.so (0x00007f6c350cb000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6c34d2d000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6c34b0e000) /lib64/ld-linux-x86-64.so.2 (0x00007f6c387a1000) libopencv_imgcodecs.so.4.4 => /opt/intel/openvino/opencv/lib/libopencv_imgcodecs.so.4.4 (0x00007f6c3485e000) libopencv_imgproc.so.4.4 => /opt/intel/openvino/opencv/lib/libopencv_imgproc.so.4.4 (0x00007f6c324d4000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6c322cc000) ` **Secondly, I checked the tbb.** `cd /` `sudo find -name "libtbb.so"` `./home/xiu/anaconda3/lib/libtbb.so` `find: ‘./run/user/1000/gvfs’: Permission denied` `find: ‘./run/user/121/gvfs’: 
Permission denied` `./opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/tbb/lib/libtbb.so` `./usr/lib/x86_64-linux-gnu/libtbb.so` `export LD_LIBRARY_PATH=/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/tbb/lib/:$LD_LIBRARY_PATH` After configuring, `sudo ./benchmark_app -m ssd_mobilenet_v2.xml -d MYRIAD -i $OPENVINO_HOME/demo/car.png` The above error also occurred. <!-- Describe your problem and steps you've done before you got to this point. to add code example fence it with triple backticks and optional file extension ```.cpp // C++ code example ``` or attach as .txt or .zip file -->
non_process
undefined symbol system information version openvino operating system platform linux ubuntu bit compiler linux version generic buildd gcc version ubuntu problem classification model inference detailed description i have ssd mobilenet xml and then want to perform inference on ncs the folllowing operations are conducted cd opt intel openvino deployment tools inference engine samples cpp bash build samples sh cd inference engine cpp samples build release sudo benchmark app m ssd mobilenet xml d myriad i openvino home demo car png generating the following error benchmark app symbol lookup error opt intel openvino deployment tools inference engine lib libinference engine so undefined symbol but when i use cpu device with command benchmark app m ssd mobilenet xml i openvino home demo car png it is ok i try to solve the problem firstly i checked with ldd benchmark app and everything seems to be ok linux vdso so libinference engine legacy so opt intel openvino deployment tools inference engine lib libinference engine legacy so libinference engine so opt intel openvino deployment tools inference engine lib libinference engine so libformat reader so home maxiu inference engine cpp samples build release lib libformat reader so libopencv core so opt intel openvino opencv lib libopencv core so libdl so lib linux gnu libdl so libstdc so usr lib linux gnu libstdc so libgcc s so lib linux gnu libgcc s so libc so lib linux gnu libc so libtbb so opt intel openvino deployment tools inference engine external tbb lib libtbb so libinference engine transformations so opt intel openvino deployment tools inference engine lib libinference engine transformations so libngraph so opt intel openvino deployment tools ngraph lib libngraph so libm so lib linux gnu libm so libpthread so lib linux gnu libpthread so ld linux so libopencv imgcodecs so opt intel openvino opencv lib libopencv imgcodecs so libopencv imgproc so opt intel openvino opencv lib libopencv imgproc so librt so lib linux gnu 
librt so secondly i checked the tbb cd sudo find name libtbb so home xiu lib libtbb so find ‘ run user gvfs’ permission denied find ‘ run user gvfs’ permission denied opt intel openvino deployment tools inference engine external tbb lib libtbb so usr lib linux gnu libtbb so export ld library path opt intel openvino deployment tools inference engine external tbb lib ld library path after configuring sudo benchmark app m ssd mobilenet xml d myriad i openvino home demo car png the above error also occurred describe your problem and steps you ve done before you got to this point to add code example fence it with triple backticks and optional file extension cpp c code example or attach as txt or zip file
0
1,560
4,160,238,709
IssuesEvent
2016-06-17 12:31:32
matz-e/lobster
https://api.github.com/repos/matz-e/lobster
closed
Optimize default task creation algorithm.
fix-ready processing
When creating tasks, we currently ([here](https://github.com/matz-e/lobster/blob/master/lobster/core/config.py#L94) and [here](https://github.com/matz-e/lobster/blob/master/lobster/core/source.py#L365)) cycle through the categories so that the smallest task limit is processed first. If no task limit is specified, the default sort is on the category name. I think we should consider alternate default sorting methods. For example, we could sort by number of units available such that we keep the categories processed as evenly as possible. This has two advantages: 1) it is likely that different categories have different bottlenecks, so maximizing the mixing might minimize the per-category bottlenecks, and 2) for chained workflows with different categories at various steps, this would mean that halfway through processing you'd have half of your last step processed, which might be useful if you want to start using output before the entire project is finished. See, for example, my current situation, where I'm only running digi and reco tasks at a severe inefficiency because I have 513 fit workers but only 251 tasks running, but there are plenty of mAOD tasks that could be run. This could also be partially addressed by changes on the factory side which I believe @btovar is considering, although I think the points above would still stand. ``` (.lobster) [earth] ~/lobster-spring-16-scale-test/ttW >work_queue_status -A earth.crc.nd.edu 9001 CATEGORY RUNNING WAITING FIT-WORKERS MAX-CORES MAX-MEMORY MAX-DISK digi 247 1002 0 ~4 ~4400 ~22131 reco 4 0 513 ~5 ~4400 ~4504 ```
1.0
Optimize default task creation algorithm. - When creating tasks, we currently ([here](https://github.com/matz-e/lobster/blob/master/lobster/core/config.py#L94) and [here](https://github.com/matz-e/lobster/blob/master/lobster/core/source.py#L365)) cycle through the categories so that the smallest task limit is processed first. If no task limit is specified, the default sort is on the category name. I think we should consider alternate default sorting methods. For example, we could sort by number of units available such that we keep the categories processed as evenly as possible. This has two advantages: 1) it is likely that different categories have different bottlenecks, so maximizing the mixing might minimize the per-category bottlenecks, and 2) for chained workflows with different categories at various steps, this would mean that halfway through processing you'd have half of your last step processed, which might be useful if you want to start using output before the entire project is finished. See, for example, my current situation, where I'm only running digi and reco tasks at a severe inefficiency because I have 513 fit workers but only 251 tasks running, but there are plenty of mAOD tasks that could be run. This could also be partially addressed by changes on the factory side which I believe @btovar is considering, although I think the points above would still stand. ``` (.lobster) [earth] ~/lobster-spring-16-scale-test/ttW >work_queue_status -A earth.crc.nd.edu 9001 CATEGORY RUNNING WAITING FIT-WORKERS MAX-CORES MAX-MEMORY MAX-DISK digi 247 1002 0 ~4 ~4400 ~22131 reco 4 0 513 ~5 ~4400 ~4504 ```
process
optimize default task creation algorithm when creating tasks we currently and cycle through the categories so that the smallest task limit is processed first if no task limit is specified the default sort is on the category name i think we should consider alternate default sorting methods for example we could sort by number of units available such that we keep the categories processed as evenly as possible this has two advantages it is likely that different categories have different bottlenecks so maximizing the mixing might minimize the per category bottlenecks and for chained workflows with different categories at various steps this would mean that halfway through processing you d have half of your last step processed which might be useful if you want to start using output before the entire project is finished see for example my current situation where i m only running digi and reco tasks at a severe inefficiency because i have fit workers but only tasks running but there are plenty of maod tasks that could be run this could also be partially addressed by changes on the factory side which i believe btovar is considering although i think the points above would still stand lobster lobster spring scale test ttw work queue status a earth crc nd edu category running waiting fit workers max cores max memory max disk digi reco
1
266,243
28,310,133,264
IssuesEvent
2023-04-10 14:42:23
RG4421/openedr
https://api.github.com/repos/RG4421/openedr
closed
CVE-2020-8286 (High) detected in curlcurl-7_63_0 - autoclosed
Mend: dependency security vulnerability
## CVE-2020-8286 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary> <p> <p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p> <p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/RG4421/openedr/commit/f991dbd97bf34917a1d61c43ef4b41832708779c">f991dbd97bf34917a1d61c43ef4b41832708779c</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/edrav2/eprj/curl/lib/vtls/openssl.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/edrav2/eprj/curl/lib/vtls/openssl.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> curl 7.41.0 through 7.73.0 is vulnerable to an improper check for certificate revocation due to insufficient verification of the OCSP response. 
<p>Publish Date: 2020-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-8286>CVE-2020-8286</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8286">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8286</a></p> <p>Release Date: 2020-12-14</p> <p>Fix Resolution: 7.74.0</p> </p> </details> <p></p>
True
CVE-2020-8286 (High) detected in curlcurl-7_63_0 - autoclosed - ## CVE-2020-8286 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary> <p> <p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p> <p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/RG4421/openedr/commit/f991dbd97bf34917a1d61c43ef4b41832708779c">f991dbd97bf34917a1d61c43ef4b41832708779c</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/edrav2/eprj/curl/lib/vtls/openssl.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/edrav2/eprj/curl/lib/vtls/openssl.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> curl 7.41.0 through 7.73.0 is vulnerable to an improper check for certificate revocation due to insufficient verification of the OCSP response. 
<p>Publish Date: 2020-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-8286>CVE-2020-8286</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8286">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8286</a></p> <p>Release Date: 2020-12-14</p> <p>Fix Resolution: 7.74.0</p> </p> </details> <p></p>
non_process
cve high detected in curlcurl autoclosed cve high severity vulnerability vulnerable library curlcurl a command line tool and library for transferring data with url syntax supporting http https ftp ftps gopher tftp scp sftp smb telnet dict ldap ldaps file imap smtp rtsp and rtmp libcurl offers a myriad of powerful features library home page a href found in head commit a href found in base branch main vulnerable source files eprj curl lib vtls openssl c eprj curl lib vtls openssl c vulnerability details curl through is vulnerable to an improper check for certificate revocation due to insufficient verification of the ocsp response publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
316,824
23,650,500,995
IssuesEvent
2022-08-26 05:59:22
icssc/AntAlmanac
https://api.github.com/repos/icssc/AntAlmanac
opened
Help Page Modal
enhancement documentation
Something similar to the About page, except with a list of AntAlmanac's features and short descriptions on how to use each of them.
1.0
Help Page Modal - Something similar to the About page, except with a list of AntAlmanac's features and short descriptions on how to use each of them.
non_process
help page modal something similar to the about page except with a list of antalmanac s features and short descriptions on how to use each of them
0
1,507
4,099,571,536
IssuesEvent
2016-06-03 13:13:57
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
opened
DomProcessor._isOpenLinkInIframe work wrong with named top window
AREA: client AREA: server SYSTEM: URL processing TYPE: bug
Markup for reproduce: ```javascript <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Title</title> </head> <body> <script> window.name = 'main_window_name'; </script> <form id='form' target="main_window_name" action="/"> <input value="Text"> <input type="submit"> </form> </body> </html> ``` After proxing for `action` will have `http://<proxy-host-name>/<sessionId>!if/<siteUrl> value. It means that form is placed inside iframe. It is wrong. Url should contains only `f` resource type letter.
1.0
DomProcessor._isOpenLinkInIframe work wrong with named top window - Markup for reproduce: ```javascript <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Title</title> </head> <body> <script> window.name = 'main_window_name'; </script> <form id='form' target="main_window_name" action="/"> <input value="Text"> <input type="submit"> </form> </body> </html> ``` After proxing for `action` will have `http://<proxy-host-name>/<sessionId>!if/<siteUrl> value. It means that form is placed inside iframe. It is wrong. Url should contains only `f` resource type letter.
process
domprocessor isopenlinkiniframe work wrong with named top window markup for reproduce javascript title window name main window name after proxing for action will have value it means that form is placed inside iframe it is wrong url should contains only f resource type letter
1
16,812
22,060,910,391
IssuesEvent
2022-05-30 17:41:07
bitPogo/kmock
https://api.github.com/repos/bitPogo/kmock
closed
Refactor Type Name resolver when overloaded
enhancement kmock-processor kmock-gradle
## Description <!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug --> Currently the prefixes of type names are filtered instead of allow listed. This actually can cause unnecessarily boilerplate. The avoidance of collisions can be done when an error occurs by a consumer. Acceptance Criteria: 1. Add 2 new fields to the Extension - one as a feature flag/switch between the already implemented version and the new to avoid a breaking change. One for the allowList. 2. Deprecated the existing flag. 3. Processor implements the switch as well as the new behaviour for the FunctionGenerator/MethodGenerator
1.0
Refactor Type Name resolver when overloaded - ## Description <!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug --> Currently the prefixes of type names are filtered instead of allow listed. This actually can cause unnecessarily boilerplate. The avoidance of collisions can be done when an error occurs by a consumer. Acceptance Criteria: 1. Add 2 new fields to the Extension - one as a feature flag/switch between the already implemented version and the new to avoid a breaking change. One for the allowList. 2. Deprecated the existing flag. 3. Processor implements the switch as well as the new behaviour for the FunctionGenerator/MethodGenerator
process
refactor type name resolver when overloaded description currently the prefixes of type names are filtered instead of allow listed this actually can cause unnecessarily boilerplate the avoidance of collisions can be done when an error occurs by a consumer acceptance criteria add new fields to the extension one as a feature flag switch between the already implemented version and the new to avoid a breaking change one for the allowlist deprecated the existing flag processor implements the switch as well as the new behaviour for the functiongenerator methodgenerator
1
438,982
30,672,253,556
IssuesEvent
2023-07-26 00:07:32
intelops/tarian-detector
https://api.github.com/repos/intelops/tarian-detector
opened
Contributor Guidelines
documentation
should be included in the contributor guidelines to help potential contributors participate in the development process, submit improvements, and follow best practices Introduction to Contributing: - [ ] Welcome potential contributors and express appreciation for their interest in the project. Briefly explain the importance of community involvement and the benefits of contributing. Project Overview: - [ ] Provide an overview of the project's purpose, goals, and scope. Describe the technology stack, programming languages, and any relevant tools or frameworks used in the project. Getting Started: - [ ] Outline the steps for contributors to set up the development environment and install the project (refer to the installation guide). Explain how to obtain the source code and which branch to work on (e.g., development or feature branches). Code Style and Standards: - [ ] Define coding conventions, style guidelines, and formatting standards that contributors should adhere to. - [ ] Specify any specific patterns or practices frequently used in the project. Version Control and Branching: - [ ] Describe the version control system used (e.g., Git) and the branching strategy. Explain the workflow for creating branches, making commits, and pushing changes. Feature/bug fix contribution process: - [ ] Detail the steps to propose new features or bug fixes. - [ ] Provide guidance on how to create a pull request (PR) and link to any relevant PR templates. - [ ] Code Review Process: Explain the importance of code reviews in the project. Describe the review process, including how to address feedback and when a PR can be merged. Testing Guidelines: - [ ] Encourage contributors to write unit tests and integration tests for their code changes. Explain the preferred testing frameworks and practices. - [ ] Documentation Contribution: Emphasize the significance of clear and comprehensive documentation. Explain how contributors can improve existing documentation or add new content. 
Issue Tracking: - [ ] Describe how to find and pick up tasks from the issue tracker (e.g., GitHub issues). Encourage contributors to communicate openly and ask questions when unsure about tasks. - [ ] Code of Conduct: Include a link to the project's code of conduct and stress the importance of respectful and inclusive communication. Recognition and Credits: Explain how contributors will be recognized for their efforts (e.g., through contributor lists, badges, or mentions in the project). - [ ] Community Communication Channels: Provide information about the project's community forums, chat platforms, or mailing lists where contributors can seek help or engage with other community members. - [ ] License and Copyright: Clarify the licensing terms for contributions and how contributors retain copyright for their work.
1.0
Contributor Guidelines - should be included in the contributor guidelines to help potential contributors participate in the development process, submit improvements, and follow best practices Introduction to Contributing: - [ ] Welcome potential contributors and express appreciation for their interest in the project. Briefly explain the importance of community involvement and the benefits of contributing. Project Overview: - [ ] Provide an overview of the project's purpose, goals, and scope. Describe the technology stack, programming languages, and any relevant tools or frameworks used in the project. Getting Started: - [ ] Outline the steps for contributors to set up the development environment and install the project (refer to the installation guide). Explain how to obtain the source code and which branch to work on (e.g., development or feature branches). Code Style and Standards: - [ ] Define coding conventions, style guidelines, and formatting standards that contributors should adhere to. - [ ] Specify any specific patterns or practices frequently used in the project. Version Control and Branching: - [ ] Describe the version control system used (e.g., Git) and the branching strategy. Explain the workflow for creating branches, making commits, and pushing changes. Feature/bug fix contribution process: - [ ] Detail the steps to propose new features or bug fixes. - [ ] Provide guidance on how to create a pull request (PR) and link to any relevant PR templates. - [ ] Code Review Process: Explain the importance of code reviews in the project. Describe the review process, including how to address feedback and when a PR can be merged. Testing Guidelines: - [ ] Encourage contributors to write unit tests and integration tests for their code changes. Explain the preferred testing frameworks and practices. - [ ] Documentation Contribution: Emphasize the significance of clear and comprehensive documentation. 
Explain how contributors can improve existing documentation or add new content. Issue Tracking: - [ ] Describe how to find and pick up tasks from the issue tracker (e.g., GitHub issues). Encourage contributors to communicate openly and ask questions when unsure about tasks. - [ ] Code of Conduct: Include a link to the project's code of conduct and stress the importance of respectful and inclusive communication. Recognition and Credits: Explain how contributors will be recognized for their efforts (e.g., through contributor lists, badges, or mentions in the project). - [ ] Community Communication Channels: Provide information about the project's community forums, chat platforms, or mailing lists where contributors can seek help or engage with other community members. - [ ] License and Copyright: Clarify the licensing terms for contributions and how contributors retain copyright for their work.
non_process
contributor guidelines should be included in the contributor guidelines to help potential contributors participate in the development process submit improvements and follow best practices introduction to contributing welcome potential contributors and express appreciation for their interest in the project briefly explain the importance of community involvement and the benefits of contributing project overview provide an overview of the project s purpose goals and scope describe the technology stack programming languages and any relevant tools or frameworks used in the project getting started outline the steps for contributors to set up the development environment and install the project refer to the installation guide explain how to obtain the source code and which branch to work on e g development or feature branches code style and standards define coding conventions style guidelines and formatting standards that contributors should adhere to specify any specific patterns or practices frequently used in the project version control and branching describe the version control system used e g git and the branching strategy explain the workflow for creating branches making commits and pushing changes feature bug fix contribution process detail the steps to propose new features or bug fixes provide guidance on how to create a pull request pr and link to any relevant pr templates code review process explain the importance of code reviews in the project describe the review process including how to address feedback and when a pr can be merged testing guidelines encourage contributors to write unit tests and integration tests for their code changes explain the preferred testing frameworks and practices documentation contribution emphasize the significance of clear and comprehensive documentation explain how contributors can improve existing documentation or add new content issue tracking describe how to find and pick up tasks from the issue tracker e g github issues 
encourage contributors to communicate openly and ask questions when unsure about tasks code of conduct include a link to the project s code of conduct and stress the importance of respectful and inclusive communication recognition and credits explain how contributors will be recognized for their efforts e g through contributor lists badges or mentions in the project community communication channels provide information about the project s community forums chat platforms or mailing lists where contributors can seek help or engage with other community members license and copyright clarify the licensing terms for contributions and how contributors retain copyright for their work
0
20,039
26,522,055,460
IssuesEvent
2023-01-19 04:15:56
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
[Mirror] new jdk19 archives
P2 type: process team-OSS mirror request
### Please list the URLs of the archives you'd like to mirror: https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-linux_x64.tar.gz https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-linux_aarch64.tar.gz https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-win_x64.zip https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-macosx_x64.tar.gz https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-macosx_aarch64.tar.gz
1.0
[Mirror] new jdk19 archives - ### Please list the URLs of the archives you'd like to mirror: https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-linux_x64.tar.gz https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-linux_aarch64.tar.gz https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-win_x64.zip https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-macosx_x64.tar.gz https://cdn.azul.com/zulu/bin/zulu19.32.13-ca-jdk19.0.2-macosx_aarch64.tar.gz
process
new archives please list the urls of the archives you d like to mirror
1
15,570
19,703,505,530
IssuesEvent
2022-01-12 19:08:08
googleapis/nodejs-appengine-admin
https://api.github.com/repos/googleapis/nodejs-appengine-admin
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'appengine-admin' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'appengine-admin' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname appengine admin invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
274,368
8,560,077,774
IssuesEvent
2018-11-08 23:31:03
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.google.com - see bug description
browser-firefox-mobile priority-critical status-first-contact
<!-- @browser: Firefox Mobile 62.0 --> <!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:62.0) Gecko/62.0 Firefox/62.0 --> <!-- @reported_with: mobile-reporter --> **URL**: http://www.google.com/search?q=rt **Browser / Version**: Firefox Mobile 62.0 **Operating System**: Android 6.0.1 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: google.com loads unecrypted (http) , allowing 3rd parties to hijack searches. This happens when navigating to google.com manually. If one types a search query in the address bar it will load encrypted (https) **Steps to Reproduce**: Navigating to google.com via the adress bar. *Note if you explicitly type "https://google.com" , the https version will load. [![Screenshot Description](https://webcompat.com/uploads/2018/5/bc962908-f5dc-4bfb-b40f-d4e277360f29-thumb.jpg)](https://webcompat.com/uploads/2018/5/bc962908-f5dc-4bfb-b40f-d4e277360f29.jpg) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.google.com - see bug description - <!-- @browser: Firefox Mobile 62.0 --> <!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:62.0) Gecko/62.0 Firefox/62.0 --> <!-- @reported_with: mobile-reporter --> **URL**: http://www.google.com/search?q=rt **Browser / Version**: Firefox Mobile 62.0 **Operating System**: Android 6.0.1 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: google.com loads unecrypted (http) , allowing 3rd parties to hijack searches. This happens when navigating to google.com manually. If one types a search query in the address bar it will load encrypted (https) **Steps to Reproduce**: Navigating to google.com via the adress bar. *Note if you explicitly type "https://google.com" , the https version will load. [![Screenshot Description](https://webcompat.com/uploads/2018/5/bc962908-f5dc-4bfb-b40f-d4e277360f29-thumb.jpg)](https://webcompat.com/uploads/2018/5/bc962908-f5dc-4bfb-b40f-d4e277360f29.jpg) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description google com loads unecrypted http allowing parties to hijack searches this happens when navigating to google com manually if one types a search query in the address bar it will load encrypted https steps to reproduce navigating to google com via the adress bar note if you explicitly type the https version will load from with ❤️
0
9,691
8,690,957,850
IssuesEvent
2018-12-03 23:16:15
terraform-providers/terraform-provider-azurerm
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
closed
App Service Slot creation/swap/deletion causes terraform to think base app service needs to be rebuilt
question service/app-service waiting-response
### Terraform Version ``` Terraform v0.11.8 + provider.azurerm v1.16.0 + provider.random v2.0.0 ``` ### Abstract Provisioning `azurerm_app_service` as a new resource works just fine. The problem becomes when we use CI/CD process to build/deploy our web application using `az` (Azure-CLI). As part of our deployment, we create an ephemeral app service slot (named after first 7 characters of commit hash), deploy to that slot, run integration tests, then perform app service slot swap (with `production` slot). This causes terraform to think that the app service needs to be recreated because its properties are different. ### Steps to reproduce 1. Create simple `azurerm_app_service` .tf (let's name our app service = `appsvcbase` so we get something like `appsvcbase.azurewebsites.net`) 2. `terraform apply` to deploy resource(s) 3. Create simple "Hello World" web application (can be literally index.html for this PoC that displays "Hello World") 4. Use zip to zip-up the web application (just the index.html file) to create `package.zip` 5. Create deployment app slot via Azure CLI: `az webapp deployment slot create -g <name of resource group> -n appsvcbase -s <name of slot> --configuration-source appsvcbase --output table` (https://docs.microsoft.com/en-us/cli/azure/webapp/deployment/source?view=azure-cli-latest#az-webapp-deployment-source-config-zip) 6. Deploy web app package.zip to slot: `az webapp deployment source config-zip -g <resource group> -n appsvcbase -s <name of slot> --src package.zip --verbose --output table` 7. "Imagination here about testing the deployed app in slot" 8. Swap the deployed slot with `production` slot (basically the app service itself): `az webapp deployment slot swap -g <resource group> -n appsvcbase -s <slot name> --target-slot production --output table` 9. Delete the deployment slot: `az webapp deployment slot delete -g <resource group> -n appsvcbase -s <slot name> --output table` 10. 
Go back to terraform config files and run `terraform plan` 11. Observe now as terraform needs to re-create the app service: ``` -/+ azurerm_app_service.appsvcbase (new resource required) id: "/subscriptions/<subid>/resourceGroups/<rg name>/providers/Microsoft.Web/sites/appsvcbase" => <computed> (forces new resource) app_service_plan_id: "/subscriptions/<sub id>/resourceGroups/<rg name>/providers/Microsoft.Web/serverfarms/<app service plan name>" => "${data.azurerm_app_service_plan.appserviceplanname.id}" (forces new resource) <lots of appsettings redacted> client_affinity_enabled: "true" => <computed> connection_string.#: "0" => <computed> default_site_hostname: "appsvcbase.azurewebsites.net" => <computed> enabled: "true" => "true" https_only: "false" => "false" identity.#: "0" => <computed> location: "westus" => "${data.azurerm_app_service_plan.appsvcbase.location}" (forces new resource) name: "appsvcbase" => "appsvcbase" outbound_ip_addresses: "<info redacted>" => <computed> resource_group_name: "<info redacted>" => "<info redacted>" site_config.#: "1" => "1" site_config.0.always_on: "false" => "false" site_config.0.default_documents.#: "9" => "0" site_config.0.default_documents.0: "Default.htm" => "" site_config.0.default_documents.1: "Default.html" => "" site_config.0.default_documents.2: "Default.asp" => "" site_config.0.default_documents.3: "index.htm" => "" site_config.0.default_documents.4: "index.html" => "" site_config.0.default_documents.5: "iisstart.htm" => "" site_config.0.default_documents.6: "default.aspx" => "" site_config.0.default_documents.7: "index.php" => "" site_config.0.default_documents.8: "hostingstart.html" => "" site_config.0.dotnet_framework_version: "v4.0" => "v4.0" site_config.0.ftps_state: "AllAllowed" => <computed> site_config.0.http2_enabled: "false" => "true" site_config.0.ip_restriction.#: "0" => <computed> site_config.0.linux_fx_version: "" => <computed> site_config.0.local_mysql_enabled: "false" => <computed> 
site_config.0.min_tls_version: "1.2" => "1.2" site_config.0.php_version: "5.6" => "" site_config.0.remote_debugging_enabled: "false" => "false" site_config.0.scm_type: "None" => "None" site_config.0.use_32_bit_worker_process: "true" => <computed> site_config.0.virtual_network_name: "" => "nameofvnetalreadyexisting" site_config.0.websockets_enabled: "false" => <computed> site_credential.#: "1" => <computed> source_control.#: "1" => <computed> tags.%: "0" => <computed> Plan: 1 to add, 0 to change, 1 to destroy. ``` ### Post Scriptum Not sure what a workaround to this one is. We like this deployment model since we can only swap once our application has passed tests and we can generate these slots as needed for any quick demos or as developers need them. But unfortunately we also need terraform not to think that the appservice was somehow changed drastically on Azure side. Not sure what can be done in terraform configs about it.
2.0
App Service Slot creation/swap/deletion causes terraform to think base app service needs to be rebuilt - ### Terraform Version ``` Terraform v0.11.8 + provider.azurerm v1.16.0 + provider.random v2.0.0 ``` ### Abstract Provisioning `azurerm_app_service` as a new resource works just fine. The problem becomes when we use CI/CD process to build/deploy our web application using `az` (Azure-CLI). As part of our deployment, we create an ephemeral app service slot (named after first 7 characters of commit hash), deploy to that slot, run integration tests, then perform app service slot swap (with `production` slot). This causes terraform to think that the app service needs to be recreated because its properties are different. ### Steps to reproduce 1. Create simple `azurerm_app_service` .tf (let's name our app service = `appsvcbase` so we get something like `appsvcbase.azurewebsites.net`) 2. `terraform apply` to deploy resource(s) 3. Create simple "Hello World" web application (can be literally index.html for this PoC that displays "Hello World") 4. Use zip to zip-up the web application (just the index.html file) to create `package.zip` 5. Create deployment app slot via Azure CLI: `az webapp deployment slot create -g <name of resource group> -n appsvcbase -s <name of slot> --configuration-source appsvcbase --output table` (https://docs.microsoft.com/en-us/cli/azure/webapp/deployment/source?view=azure-cli-latest#az-webapp-deployment-source-config-zip) 6. Deploy web app package.zip to slot: `az webapp deployment source config-zip -g <resource group> -n appsvcbase -s <name of slot> --src package.zip --verbose --output table` 7. "Imagination here about testing the deployed app in slot" 8. Swap the deployed slot with `production` slot (basically the app service itself): `az webapp deployment slot swap -g <resource group> -n appsvcbase -s <slot name> --target-slot production --output table` 9. 
Delete the deployment slot: `az webapp deployment slot delete -g <resource group> -n appsvcbase -s <slot name> --output table` 10. Go back to terraform config files and run `terraform plan` 11. Observe now as terraform needs to re-create the app service: ``` -/+ azurerm_app_service.appsvcbase (new resource required) id: "/subscriptions/<subid>/resourceGroups/<rg name>/providers/Microsoft.Web/sites/appsvcbase" => <computed> (forces new resource) app_service_plan_id: "/subscriptions/<sub id>/resourceGroups/<rg name>/providers/Microsoft.Web/serverfarms/<app service plan name>" => "${data.azurerm_app_service_plan.appserviceplanname.id}" (forces new resource) <lots of appsettings redacted> client_affinity_enabled: "true" => <computed> connection_string.#: "0" => <computed> default_site_hostname: "appsvcbase.azurewebsites.net" => <computed> enabled: "true" => "true" https_only: "false" => "false" identity.#: "0" => <computed> location: "westus" => "${data.azurerm_app_service_plan.appsvcbase.location}" (forces new resource) name: "appsvcbase" => "appsvcbase" outbound_ip_addresses: "<info redacted>" => <computed> resource_group_name: "<info redacted>" => "<info redacted>" site_config.#: "1" => "1" site_config.0.always_on: "false" => "false" site_config.0.default_documents.#: "9" => "0" site_config.0.default_documents.0: "Default.htm" => "" site_config.0.default_documents.1: "Default.html" => "" site_config.0.default_documents.2: "Default.asp" => "" site_config.0.default_documents.3: "index.htm" => "" site_config.0.default_documents.4: "index.html" => "" site_config.0.default_documents.5: "iisstart.htm" => "" site_config.0.default_documents.6: "default.aspx" => "" site_config.0.default_documents.7: "index.php" => "" site_config.0.default_documents.8: "hostingstart.html" => "" site_config.0.dotnet_framework_version: "v4.0" => "v4.0" site_config.0.ftps_state: "AllAllowed" => <computed> site_config.0.http2_enabled: "false" => "true" site_config.0.ip_restriction.#: "0" => 
<computed> site_config.0.linux_fx_version: "" => <computed> site_config.0.local_mysql_enabled: "false" => <computed> site_config.0.min_tls_version: "1.2" => "1.2" site_config.0.php_version: "5.6" => "" site_config.0.remote_debugging_enabled: "false" => "false" site_config.0.scm_type: "None" => "None" site_config.0.use_32_bit_worker_process: "true" => <computed> site_config.0.virtual_network_name: "" => "nameofvnetalreadyexisting" site_config.0.websockets_enabled: "false" => <computed> site_credential.#: "1" => <computed> source_control.#: "1" => <computed> tags.%: "0" => <computed> Plan: 1 to add, 0 to change, 1 to destroy. ``` ### Post Scriptum Not sure what a workaround to this one is. We like this deployment model since we can only swap once our application has passed tests and we can generate these slots as needed for any quick demos or as developers need them. But unfortunately we also need terraform not to think that the appservice was somehow changed drastically on Azure side. Not sure what can be done in terraform configs about it.
non_process
app service slot creation swap deletion causes terraform to think base app service needs to be rebuilt terraform version terraform provider azurerm provider random abstract provisioning azurerm app service as a new resource works just fine the problem becomes when we use ci cd process to build deploy our web application using az azure cli as part of our deployment we create an ephemeral app service slot named after first characters of commit hash deploy to that slot run integration tests then perform app service slot swap with production slot this causes terraform to think that the app service needs to be recreated because its properties are different steps to reproduce create simple azurerm app service tf let s name our app service appsvcbase so we get something like appsvcbase azurewebsites net terraform apply to deploy resource s create simple hello world web application can be literally index html for this poc that displays hello world use zip to zip up the web application just the index html file to create package zip create deployment app slot via azure cli az webapp deployment slot create g n appsvcbase s configuration source appsvcbase output table deploy web app package zip to slot az webapp deployment source config zip g n appsvcbase s src package zip verbose output table imagination here about testing the deployed app in slot swap the deployed slot with production slot basically the app service itself az webapp deployment slot swap g n appsvcbase s target slot production output table delete the deployment slot az webapp deployment slot delete g n appsvcbase s output table go back to terraform config files and run terraform plan observe now as terraform needs to re create the app service azurerm app service appsvcbase new resource required id subscriptions resourcegroups providers microsoft web sites appsvcbase forces new resource app service plan id subscriptions resourcegroups providers microsoft web serverfarms data azurerm app service plan 
appserviceplanname id forces new resource client affinity enabled true connection string default site hostname appsvcbase azurewebsites net enabled true true https only false false identity location westus data azurerm app service plan appsvcbase location forces new resource name appsvcbase appsvcbase outbound ip addresses resource group name site config site config always on false false site config default documents site config default documents default htm site config default documents default html site config default documents default asp site config default documents index htm site config default documents index html site config default documents iisstart htm site config default documents default aspx site config default documents index php site config default documents hostingstart html site config dotnet framework version site config ftps state allallowed site config enabled false true site config ip restriction site config linux fx version site config local mysql enabled false site config min tls version site config php version site config remote debugging enabled false false site config scm type none none site config use bit worker process true site config virtual network name nameofvnetalreadyexisting site config websockets enabled false site credential source control tags plan to add to change to destroy post scriptum not sure what a workaround to this one is we like this deployment model since we can only swap once our application has passed tests and we can generate these slots as needed for any quick demos or as developers need them but unfortunately we also need terraform not to think that the appservice was somehow changed drastically on azure side not sure what can be done in terraform configs about it
0
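The drift described in this record can be illustrated abstractly. The following is a hypothetical Python sketch, not Terraform's real plan logic: a declarative tool records the state it last applied, refreshes the actual state from the cloud API, and diffs the two; attributes marked "forces new resource" turn any difference into a destroy-and-recreate plan, which is what the out-of-band `az` slot swap triggers here. The attribute names and values below are simplified placeholders.

```python
# Hypothetical sketch (not Terraform's actual implementation) of why an
# out-of-band slot swap makes the tool plan a rebuild: the refreshed state
# no longer matches the recorded state, and "forces new resource"
# attributes escalate the diff from an in-place update to a re-create.
FORCES_NEW = {"app_service_plan_id", "location", "name"}

def plan(recorded_state, refreshed_state):
    # Compare every recorded attribute against the refreshed value.
    changed = {k for k in recorded_state
               if recorded_state[k] != refreshed_state.get(k)}
    if changed & FORCES_NEW:
        return "recreate"
    return "update-in-place" if changed else "no-op"

# Placeholder values, loosely modeled on the plan output quoted above.
recorded = {"name": "appsvcbase", "location": "westus",
            "app_service_plan_id": "plan-1"}
after_swap = {"name": "appsvcbase", "location": "westus",
              "app_service_plan_id": "plan-1-swapped"}

assert plan(recorded, recorded) == "no-op"
assert plan(recorded, after_swap) == "recreate"
```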
117,199
17,439,360,555
IssuesEvent
2021-08-05 01:07:17
brogers588/Comcast.github.io
https://api.github.com/repos/brogers588/Comcast.github.io
opened
CVE-2021-32804 (High) detected in tar-4.4.8.tgz
security vulnerability
## CVE-2021-32804 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.8.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p> <p> Dependency Hierarchy: - cli-7.6.2.tgz (Root Library) - chokidar-2.1.8.tgz - fsevents-1.2.9.tgz - node-pre-gyp-0.12.0.tgz - :x: **tar-4.4.8.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. 
Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar. <p>Publish Date: 2021-08-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p> <p>Release Date: 2021-08-03</p> <p>Fix Resolution: tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.8","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"@babel/cli:7.6.2;chokidar:2.1.8;fsevents:1.2.9;node-pre-gyp:0.12.0;tar:4.4.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32804","vulnerabilityDetails":"The npm package \"tar\" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. 
node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804","cvss3Severity":"high","cvss3Score":"8.2","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-32804 (High) detected in tar-4.4.8.tgz - ## CVE-2021-32804 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.8.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p> <p> Dependency Hierarchy: - cli-7.6.2.tgz (Root Library) - chokidar-2.1.8.tgz - fsevents-1.2.9.tgz - node-pre-gyp-0.12.0.tgz - :x: **tar-4.4.8.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. 
See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar. <p>Publish Date: 2021-08-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p> <p>Release Date: 2021-08-03</p> <p>Fix Resolution: tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.8","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"@babel/cli:7.6.2;chokidar:2.1.8;fsevents:1.2.9;node-pre-gyp:0.12.0;tar:4.4.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32804","vulnerabilityDetails":"The npm package \"tar\" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path 
sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804","cvss3Severity":"high","cvss3Score":"8.2","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href dependency hierarchy cli tgz root library chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in base branch master vulnerability details the npm package tar aka node tar before versions and has a arbitrary file creation overwrite vulnerability due to insufficient absolute path sanitization node tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservepaths flag is not set to true this is achieved by stripping the absolute path root from any absolute file paths contained in a tar file for example home user bashrc would turn into home user bashrc this logic was insufficient when file paths contained repeated path roots such as home user bashrc node tar would only strip a single path root from such paths when given an absolute file path with repeating path roots the resulting path e g home user bashrc would still resolve to an absolute path thus allowing arbitrary file creation and overwrite this issue was addressed in releases and users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry path or a filter method which removes entries with absolute paths see referenced github advisory for details be aware of cve which fixes a similar bug in later versions of tar publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree babel cli chokidar fsevents node 
pre gyp tar isminimumfixversionavailable true minimumfixversion tar basebranches vulnerabilityidentifier cve vulnerabilitydetails the npm package tar aka node tar before versions and has a arbitrary file creation overwrite vulnerability due to insufficient absolute path sanitization node tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservepaths flag is not set to true this is achieved by stripping the absolute path root from any absolute file paths contained in a tar file for example home user bashrc would turn into home user bashrc this logic was insufficient when file paths contained repeated path roots such as home user bashrc node tar would only strip a single path root from such paths when given an absolute file path with repeating path roots the resulting path e g home user bashrc would still resolve to an absolute path thus allowing arbitrary file creation and overwrite this issue was addressed in releases and users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry path or a filter method which removes entries with absolute paths see referenced github advisory for details be aware of cve which fixes a similar bug in later versions of tar vulnerabilityurl
0
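The insufficient path sanitization this advisory describes can be shown with a short sketch. This is Python for illustration only — node-tar is JavaScript, and the function names below are made up, not node-tar's API:

```python
def strip_absolute_root_once(path):
    # Pre-fix (insufficient) behavior, illustrated: strip only a single
    # leading "/", so "/home/user/.bashrc" -> "home/user/.bashrc".
    return path[1:] if path.startswith("/") else path

def strip_absolute_root_fully(path):
    # Post-fix behavior, illustrated: keep stripping until the result is
    # no longer an absolute path.
    while path.startswith("/"):
        path = path[1:]
    return path

# A single strip is not enough when the path root repeats: the result is
# still absolute, so extraction could write outside the target directory.
assert strip_absolute_root_once("////home/user/.bashrc") == "///home/user/.bashrc"
assert strip_absolute_root_fully("////home/user/.bashrc") == "home/user/.bashrc"
```

The advisory's suggested workaround — a `filter` callback that rejects entries with absolute paths — follows the same idea: refuse any entry path that is still absolute after sanitization.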
59,810
8,380,147,381
IssuesEvent
2018-10-07 11:46:30
alcarney/stylo
https://api.github.com/repos/alcarney/stylo
closed
Document the "code standards"
contributors documentation
e.g. - What linting is and what it looks for - How to run the linting tests.
1.0
Document the "code standards" - e.g. - What linting is and what it looks for - How to run the linting tests.
non_process
document the code standards e g what linting is and what it looks for how to run the linting tests
0
255,932
21,968,930,924
IssuesEvent
2022-05-25 00:36:03
ipfs/go-ipfs
https://api.github.com/repos/ipfs/go-ipfs
closed
Tests sometimes are timing out when testing `go-ipfs-as-a-library`
kind/bug topic/test failure kind/test P2 kind/maintenance
### Checklist - [X] This is a bug report, not a question. Ask questions on [discuss.ipfs.io](https://discuss.ipfs.io). - [X] I have searched on the [issue tracker](https://github.com/ipfs/go-ipfs/issues?q=is%3Aissue) for my bug. - [X] I am running the latest [go-ipfs version](https://dist.ipfs.io/#go-ipfs) or have an issue updating. ### Installation method built from source ### Version ```Text master branch ``` ### Config _No response_ ### Description Tests are timing out because `Republisher` never closes, it is always waiting for the current value to be published on `Republisher.WaitPub`
2.0
Tests sometimes are timing out when testing `go-ipfs-as-a-library` - ### Checklist - [X] This is a bug report, not a question. Ask questions on [discuss.ipfs.io](https://discuss.ipfs.io). - [X] I have searched on the [issue tracker](https://github.com/ipfs/go-ipfs/issues?q=is%3Aissue) for my bug. - [X] I am running the latest [go-ipfs version](https://dist.ipfs.io/#go-ipfs) or have an issue updating. ### Installation method built from source ### Version ```Text master branch ``` ### Config _No response_ ### Description Tests are timing out because `Republisher` never closes, it is always waiting for the current value to be published on `Republisher.WaitPub`
non_process
tests sometimes are timing out when testing go ipfs as a library checklist this is a bug report not a question ask questions on i have searched on the for my bug i am running the latest or have an issue updating installation method built from source version text master branch config no response description tests are timing out because republisher never closes it is always waiting for the current value to be published on republisher waitpub
0
7,551
18,236,869,739
IssuesEvent
2021-10-01 08:06:17
RasaHQ/rasa
https://api.github.com/repos/RasaHQ/rasa
opened
Revamp: Include component source code in fingerprint key.
feature:rasa-3.0/architecture
### Description Currently the fingerprint key for a node run does not take into account the implementation of the component, just the inputs and config. This means that during development if the code is updated we will get a cache hit and not re-process the data in subsequent trainings. We should include some aspect of the implementation e.g. a hash of the source code. ### Definition of Done - [ ] Nodes re-run after changing the code of the used component.
1.0
Revamp: Include component source code in fingerprint key. - ### Description Currently the fingerprint key for a node run does not take into account the implementation of the component, just the inputs and config. This means that during development if the code is updated we will get a cache hit and not re-process the data in subsequent trainings. We should include some aspect of the implementation e.g. a hash of the source code. ### Definition of Done - [ ] Nodes re-run after changing the code of the used component.
non_process
revamp include component source code in fingerprint key description currently the fingerprint key for a node run does not take into account the implementation of the component just the inputs and config this means that during development if the code is updated we will get a cache hit and not re process the data in subsequent trainings we should include some aspect of the implementation e g a hash of the source code definition of done nodes re run after changing the code of the used component
0
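The proposal in this record — folding the component's implementation into the fingerprint key — can be sketched as follows. This is hypothetical Python, not Rasa's actual code; a hash of the compiled bytecode stands in here for the issue's "hash of the source code", and the class and config names are invented:

```python
import hashlib

def component_fingerprint(component_class, config):
    # Hypothetical sketch: combine a hash of the component's compiled
    # bytecode with its config, so that editing the implementation (not
    # just the inputs or config) invalidates the cache entry.
    code_hash = hashlib.sha256(component_class.run.__code__.co_code).hexdigest()
    config_repr = repr(sorted(config.items()))
    return hashlib.sha256((code_hash + config_repr).encode("utf-8")).hexdigest()

class TokenizerV1:
    def run(self, text):
        return text.split()

class TokenizerV2:
    def run(self, text):
        return text.lower().split()  # implementation changed during development

fp_v1 = component_fingerprint(TokenizerV1, {"lang": "en"})
fp_v2 = component_fingerprint(TokenizerV2, {"lang": "en"})
assert fp_v1 != fp_v2  # same config, different code -> cache miss, as desired
```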
18,492
24,550,937,569
IssuesEvent
2022-10-12 12:33:55
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Angular Upgrade] Dashboard > Sites > Site participant registry > UI issue
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
UI issue is observed in 'Site participant registry' ![image](https://user-images.githubusercontent.com/86007179/178207416-6b009777-4471-4b1b-a5d1-069f9247a1ef.png)
3.0
[PM] [Angular Upgrade] Dashboard > Sites > Site participant registry > UI issue - UI issue is observed in 'Site participant registry' ![image](https://user-images.githubusercontent.com/86007179/178207416-6b009777-4471-4b1b-a5d1-069f9247a1ef.png)
process
dashboard sites site participant registry ui issue ui issue is observed in site participant registry
1
448,092
12,943,132,237
IssuesEvent
2020-07-18 05:21:32
cilium/cilium
https://api.github.com/repos/cilium/cilium
closed
Use UUID generation to allow correlating logs for same request
good-first-issue kind/enhancement priority/medium stale
- [ ] Integrate with a UUID library such as https://github.com/satori/go.uuid. NodeID must be set correctly. - [ ] Generate (timestamp based?) UUID for: - [ ] Endpoints
1.0
Use UUID generation to allow correlating logs for same request - - [ ] Integrate with a UUID library such as https://github.com/satori/go.uuid. NodeID must be set correctly. - [ ] Generate (timestamp based?) UUID for: - [ ] Endpoints
non_process
use uuid generation to allow correlating logs for same request integrate with a uuid library such as nodeid must be set correctly generate timestamp based uuid for endpoints
0
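The timestamp-based UUID idea in this record looks like the following in Python's standard library. The issue itself targets the Go library satori/go.uuid, so this is only an illustration of the concept, and the log format is invented:

```python
import uuid

# uuid1() embeds a 100-nanosecond timestamp and a node ID, so a single ID
# generated per request (or per endpoint, as the issue suggests) can be
# stamped onto every related log line and used to correlate them later.
request_id = uuid.uuid1()

def log_line(message, rid):
    # Invented log format for the sake of the example.
    return f"request={rid} {message}"

lines = [
    log_line("endpoint create started", request_id),
    log_line("endpoint create finished", request_id),
]
# Every line about this request carries the same correlating ID.
assert all(str(request_id) in line for line in lines)
# Later uuid1 values carry an equal-or-later timestamp, so IDs are
# roughly time-ordered.
assert uuid.uuid1().time >= request_id.time
```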
67,232
27,754,558,535
IssuesEvent
2023-03-16 00:39:00
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[REMOTO] FullStack Sênior na Thunders Tecnologia
PJ FULL-STACK SENIOR TESTE UNITARIO REMOTO ASP.NET .NET CORE MICROSERVICES HELP WANTED Stale
Do you want to grow together with your company, enjoy taking on new challenges, and have the initiative to innovate? This opportunity is for you! Thunders Tecnologia is a company offering integrated software solutions for the energy market and is the leader in the ETRM (Energy Trading and Risk Management) segment with only 5 years of existence. With 40% growth in 2021 over the previous year, they are looking to grow the team with qualified professionals who accept Thunders' challenge of consolidating new products in the market, mainly energy. We are currently looking for a FullStack person for our Technology team, focused on our SaaS specialized in managing energy trading companies, free or special consumers, generators, self-producers, and distributors, with the goal of developing new products and consolidating them in the energy market. - Level: Senior - Contract model: PJ (contractor) - Workplace: Remote - Main requirements: .NET Core, ASP .NET Core, unit tests, and microservices. Come join our team! :smile: For applications and/or more information: https://vagas-emb.byintera.com/4th ![THUNDERS](https://user-images.githubusercontent.com/50675770/130485367-ecba4df8-a819-4c13-851b-90d7927cb625.jpg)
1.0
[REMOTO] FullStack Sênior na Thunders Tecnologia - Você tem vontade de crescer junto com sua empresa, gosta de conquistar novos desafios e tem iniciativa para inovar? Essa oportunidade é para você! A Thunders Tecnologia é uma empresa de soluções integradas de software para o mercado de energia que é líder no segmento ETRM (Energy Trading and Risk Mangement) com apenas 5 anos de existência. Com um crescimento de 40% em 2021 em relação ao ano passado, buscam aumentar o time com profissionais qualificados que aceitem o desafio da Thunders de consolidar novos produtos no mercado, principalmente o de energia. Atualmente, buscamos uma pessoa de FullStack para o nosso time de Tecnologia focado em nosso SAAS especializados gestão de comercializadoras de energia, consumidores livres ou especiais, geradores, autoprodutores e distribuidoras e que vise desenvolver novos produtos os consolidando no mercado de energia. - Nível: Sênior - Modelo de contratação: PJ - Local de trabalho: Remoto - Principais requisitos: .NET Core, ASP .NET Core, testes unitários e microsserviços. Vem fazer parte do nosso time! :smile: Para inscrição e/ou mais informações: https://vagas-emb.byintera.com/4th ![THUNDERS](https://user-images.githubusercontent.com/50675770/130485367-ecba4df8-a819-4c13-851b-90d7927cb625.jpg)
non_process
fullstack sênior na thunders tecnologia você tem vontade de crescer junto com sua empresa gosta de conquistar novos desafios e tem iniciativa para inovar essa oportunidade é para você a thunders tecnologia é uma empresa de soluções integradas de software para o mercado de energia que é líder no segmento etrm energy trading and risk mangement com apenas anos de existência com um crescimento de em em relação ao ano passado buscam aumentar o time com profissionais qualificados que aceitem o desafio da thunders de consolidar novos produtos no mercado principalmente o de energia atualmente buscamos uma pessoa de fullstack para o nosso time de tecnologia focado em nosso saas especializados gestão de comercializadoras de energia consumidores livres ou especiais geradores autoprodutores e distribuidoras e que vise desenvolver novos produtos os consolidando no mercado de energia nível sênior modelo de contratação pj local de trabalho remoto principais requisitos net core asp net core testes unitários e microsserviços vem fazer parte do nosso time smile para inscrição e ou mais informações
0
14,339
17,368,226,868
IssuesEvent
2021-07-30 10:17:02
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Process.GetCurrentProcess().ProcessName is throwing InvalidOperationException for Blazor Webassembly project
arch-wasm area-System.Diagnostics.Process untriaged
I have created a simple Blazor Webassembly application in which Target Framework : .NET Standard 2.1 Contents of Counter.razor file ``` @page "/counter" <h1>Counter</h1> <p>Current count: @currentCount</p> <button class="btn btn-primary" @onclick="IncrementCount">Click me</button> @code { private int currentCount = 0; private void IncrementCount() { var CurrentProcess = System.Diagnostics.Process.GetCurrentProcess(); var pname = CurrentProcess.ProcessName; currentCount++; } } ``` Upon running this application, a browser window opens, when i press click me button, i end up getting this exception. crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100] Unhandled exception rendering component: Process has exited or is inaccessible, so the requested information is not available. System.InvalidOperationException: Process has exited or is inaccessible, so the requested information is not available. at System.Diagnostics.Process.get_ProcessName () <0x2d34700 + 0x0004a> in :0 at WebApplication1.Pages.Counter.IncrementCount () <0x2d33d80 + 0x0000c> in :0 at Microsoft.AspNetCore.Components.EventCallbackWorkItem.InvokeAsync[T] (System.MulticastDelegate delegate, T arg) <0x2d07e80 + 0x0005e> in :0 at Microsoft.AspNetCore.Components.EventCallbackWorkItem.InvokeAsync (System.Object arg) <0x2d07bc0 + 0x0000a> in :0 at Microsoft.AspNetCore.Components.ComponentBase.Microsoft.AspNetCore.Components.IHandleEvent.HandleEventAsync (Microsoft.AspNetCore.Components.EventCallbackWorkItem callback, System.Object arg) <0x2d07b28 + 0x0000a> in :0 at Microsoft.AspNetCore.Components.EventCallback.InvokeAsync (System.Object arg) <0x2d076f0 + 0x00040> in :0 at Microsoft.AspNetCore.Components.RenderTree.Renderer.DispatchEventAsync (System.UInt64 eventHandlerId, Microsoft.AspNetCore.Components.RenderTree.EventFieldInfo fieldInfo, System.EventArgs eventArgs) <0x2d06ce0 + 0x000a8> in :0 Can someone assist me why i am getting this error ? 
Is Process.GetCurrentProcess().ProcessName API not supported with Blazor webassembly app ?
1.0
Process.GetCurrentProcess().ProcessName is throwing InvalidOperationException for Blazor Webassembly project - I have created a simple Blazor Webassembly application in which Target Framework : .NET Standard 2.1 Contents of Counter.razor file ``` @page "/counter" <h1>Counter</h1> <p>Current count: @currentCount</p> <button class="btn btn-primary" @onclick="IncrementCount">Click me</button> @code { private int currentCount = 0; private void IncrementCount() { var CurrentProcess = System.Diagnostics.Process.GetCurrentProcess(); var pname = CurrentProcess.ProcessName; currentCount++; } } ``` Upon running this application, a browser window opens, when i press click me button, i end up getting this exception. crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100] Unhandled exception rendering component: Process has exited or is inaccessible, so the requested information is not available. System.InvalidOperationException: Process has exited or is inaccessible, so the requested information is not available. 
at System.Diagnostics.Process.get_ProcessName () <0x2d34700 + 0x0004a> in :0 at WebApplication1.Pages.Counter.IncrementCount () <0x2d33d80 + 0x0000c> in :0 at Microsoft.AspNetCore.Components.EventCallbackWorkItem.InvokeAsync[T] (System.MulticastDelegate delegate, T arg) <0x2d07e80 + 0x0005e> in :0 at Microsoft.AspNetCore.Components.EventCallbackWorkItem.InvokeAsync (System.Object arg) <0x2d07bc0 + 0x0000a> in :0 at Microsoft.AspNetCore.Components.ComponentBase.Microsoft.AspNetCore.Components.IHandleEvent.HandleEventAsync (Microsoft.AspNetCore.Components.EventCallbackWorkItem callback, System.Object arg) <0x2d07b28 + 0x0000a> in :0 at Microsoft.AspNetCore.Components.EventCallback.InvokeAsync (System.Object arg) <0x2d076f0 + 0x00040> in :0 at Microsoft.AspNetCore.Components.RenderTree.Renderer.DispatchEventAsync (System.UInt64 eventHandlerId, Microsoft.AspNetCore.Components.RenderTree.EventFieldInfo fieldInfo, System.EventArgs eventArgs) <0x2d06ce0 + 0x000a8> in :0 Can someone assist me why i am getting this error ? Is Process.GetCurrentProcess().ProcessName API not supported with Blazor webassembly app ?
process
process getcurrentprocess processname is throwing invalidoperationexception for blazor webassembly project i have created a simple blazor webassembly application in which target framework net standard contents of counter razor file page counter counter current count currentcount click me code private int currentcount private void incrementcount var currentprocess system diagnostics process getcurrentprocess var pname currentprocess processname currentcount upon running this application a browser window opens when i press click me button i end up getting this exception crit microsoft aspnetcore components webassembly rendering webassemblyrenderer unhandled exception rendering component process has exited or is inaccessible so the requested information is not available system invalidoperationexception process has exited or is inaccessible so the requested information is not available at system diagnostics process get processname in at pages counter incrementcount in at microsoft aspnetcore components eventcallbackworkitem invokeasync system multicastdelegate delegate t arg in at microsoft aspnetcore components eventcallbackworkitem invokeasync system object arg in at microsoft aspnetcore components componentbase microsoft aspnetcore components ihandleevent handleeventasync microsoft aspnetcore components eventcallbackworkitem callback system object arg in at microsoft aspnetcore components eventcallback invokeasync system object arg in at microsoft aspnetcore components rendertree renderer dispatcheventasync system eventhandlerid microsoft aspnetcore components rendertree eventfieldinfo fieldinfo system eventargs eventargs in can someone assist me why i am getting this error is process getcurrentprocess processname api not supported with blazor webassembly app
1